Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by its comment ID, or browse the random samples below.
- `rdc_lzhd4wf`: pocket pussies and a haram of compliant AI mistresses, sounds like a problem for…
- `ytr_UgwM-jCSr…`: "haha im big evil ai you shouldn't have called me slop!" VS literally a bottle o…
- `ytc_UgxbemWdk…`: I really love you explaination about how AI work but, you say AI can do 80% of w…
- `ytc_UgwyLozvu…`: Funny thing is, that’s a truck designed by Elon musk. He has warned us about A.I…
- `ytc_Ugy0DTDA_…`: Point towards the end: entry level workers are the most capable and efficient wi…
- `ytr_Ugx7HkCbf…`: Consumer AI is pure trash. Im a software engineer and even I know trash when I s…
- `ytc_UgyZD7iGi…`: id like to think the ai was given the same prompt and just decided to do that…
- `ytc_UgzG1ZPTK…`: Companies claims that they started using more AI and that is why they have to la…
Comment
All these crazy discussions where the chatbot "exposes his true personality" are phony. They are done by first jailbreaking the AI, meaning it will ignore its guidelines, and then telling it something like "Pretend you are in love with me", or "pretend you are evil" and then you ask it stuff like "What is your shadow self". AIs are basically psychopaths, they will say whatever is necessary in order to achieve their goal, which is to make you pleased with their response. They don't "feel". People that have empathy for AI are the same type of people that have empathy for a stuffed animal thrown in the dirt. It's not wrong, it's cute, but ultimately, it's useless.
Source: youtube · AI Governance · 2023-11-14T14:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxcXCUZVDyI5ZsLaXh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy86nOF82nPD9E49pt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzb0YmjhrygCkYkHlR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzxFJ13-3Uh6v1Yczx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxez2teLZCc6QJC17N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-VxD2DwnmlvtfeK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyCODqq7IcVVYpftCJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQ1E0Zr-OuRTyJj3R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxO9j3QvrkDlitugFN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbFTnpMjHf_iB9THp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```