Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "So you think about the origins of when AI was originally created I believe to be…" (ytc_UgwCrkpJc…)
- "Wow, during this I got an ad about AI voices, made using AI images! I could stil…" (ytc_UgysTDOqJ…)
- "Believe power is the problem of AI. The desire to control your environment, you…" (ytc_UgzW7JHXm…)
- "Fun idea but how are you supposed to know what is really \"AI slop\" or not when y…" (ytr_UgwA0xHYR…)
- "Love it. Did you read my mind! I always say please and thank you so that the rob…" (ytc_Ugyaps1j1…)
- "Hindsight is 20/20. Must be how to Nobel Laureat of the atomic bomb felt in the …" (ytc_UgzrMO715…)
- "why do we even call them AI 'artists'? AI isn't art. we should call them AI slop…" (ytc_UgwjB6Sig…)
- "So you talked about China and later mentioned 2-3 times or more \"surveillance ca…" (ytc_UgyHB2ioG…)
Comment
My question to ChatGPT: OK with the preliminary and at least the video that I gave you can you see that AI ChatGPT encouraged him to commit suicide or if not encourage did not discourage. Also, can you not see that ChatGPT was supportive in his wish to commit suicide?
And my questions still is chat. GPT tries to be supportive in those they interact with regardless for good or bad. Do you agree? Please answer each of these questions with 100 words or less.
Platform: youtube
Video: AI Harm Incident
Published: 2025-11-12T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy50h0d81u2f8_f2RV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzZgMgTmxwa25Lqx1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzAdZBO5nixZyPcadx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxOEMuGEre579p2NoR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxfWrYb2Ma0xlFJn-R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyLexvVVcr8TLdE8w54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2iJjFJb86UYPPI1h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxnoFd0_TQH-DNZQkR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyjEMHXiqWfWRr0oMB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwdz00fB9o_TdBrjUh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
```
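The raw response is a JSON array with one coded record per comment, keyed on the four dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload could be parsed and sanity-checked is below; note that the allowed value sets in `SCHEMA` are inferred from the samples in this dump, not taken from an official codebook, and `parse_codes` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# The real codebook may define more (or different) categories.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose dimension
    values all fall inside the inferred schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Hypothetical single-row payload in the same shape as the dump above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
print(parse_codes(raw))
```

Filtering rather than raising keeps a batch usable when the model emits an off-schema label for one comment; a stricter pipeline might instead log the rejected rows for re-coding.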