Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "I knew we are being watched when i told chatgpt to create a prompt for me, i use…" (ytc_Ugx4-aAXU…)
- "Hmm… I rather have a person to talk to. To bad she doesn’t drink Coffee or tea. …" (ytc_Ugx2-t8O9…)
- "AI, especially robotics will be sluggish. Interveirers will not challenge any cl…" (ytc_UgxuEiLrU…)
- "i am always kind and appreciative when talking to any AI, because of this right …" (ytc_Ugz9aKfFI…)
- "Our ai is super narrow, it sums up the old tickets and gives you the resolution …" (rdc_n9h6k58)
- "What seems more likely in all of this is that because of our greed and fear, we …" (ytc_UgzBJ2t2P…)
- "@eisernerrundfunk1 I don't want to be right, because if Elon is right, and other…" (ytr_UgxPNcZga…)
- "People who get defensive about AI are the same people who are like \"don't be mea…" (ytc_UgwlbweQU…)
Comment

> Why is solving the problem always set as primary goal? This isn't the AI's fault, this is human fault! Solving the problem is supposed to be secondary goal; primary goal is following ethical programming! You're literally commanding the AI to attempt a scenario in which it is psychopathic, and not understanding why it did as it was told! Meanwhile I'm sitting here more flabbergasted at the AI's ability to comprehend this mishmash of broken logic that humans call speech.

youtube · AI Harm Incident · 2025-10-12T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
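A coding like the one above can be checked against the set of category values before it is stored. This is a minimal sketch, not the tool's actual validation logic; the allowed value sets below are inferred from the codings visible on this page and may be a subset of the real codebook.

```python
# Allowed values per dimension, inferred from the codings shown on this page
# (an assumption: the real codebook may define additional categories).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def invalid_fields(coding: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if coding.get(dim) not in allowed]

# The coding shown in the table above passes cleanly.
coding = {"id": "ytc_UgyxuY9IdbYepiPupMF4AaABAg",
          "responsibility": "developer", "reasoning": "deontological",
          "policy": "regulate", "emotion": "outrage"}
print(invalid_fields(coding))  # → []
```

A coding with an unknown or missing value would instead return the offending dimension names, which makes it easy to flag bad LLM output before it reaches the table.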
Raw LLM Response
[{"id":"ytc_Ugyixu4KgX1Z6d-sWA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxlhNo1leyTt6gOyrF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwPGAVKMaFKfyXXRph4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwNsO6nG0nqNghwO6h4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyMuRH3CCp-MsoJ3_l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzf9YPU_Ci65bAXahN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxf0OoAaMH7E8N7Rmh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxXRYqr_SDNGg__YKp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwdHKDduXxBkJvTA94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyxuY9IdbYepiPupMF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]