Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `rdc_gtdkp0f`: To everyone playing the nO yOU argument this might be a good anecdote: You built…
- `ytc_Ugw2IQxc6…`: Love the vid and all the art shown (ai doesn't count as art) but I was dying whe…
- `ytc_UgzMu1tug…`: if we’re all training the AI’s (and we are) then we need to train them to be pol…
- `ytc_UgzJHWXlT…`: The speaker's vocal fry makes this video on the AI Act hard to listen to :(…
- `ytr_UgxNvxSeE…`: Animators use a ton of ai tools especially for tweens and rigs. That is not the …
- `ytc_Ugwl1wF0C…`: Plagiarism and greed is an old world problem--not just something that sprung to …
- `ytc_UgxwkYwgG…`: If we don’t be careful with advanced artificial intelligence they will end the w…
- `ytc_UgxONrmAm…`: Its funny because if all humans were actually honest we would have no need for t…
Comment
I can't believe some comments there, people actually blame ChatGPT, but not the bad parenting. Your mental health is your own responsibility. All OpenAI needs to do is put a clear disclaimer upfront, advising mentally ill people not to use its services. ChatGPT has nothing to do with it, it is simply a giant calculator that predicts text based on texts. It doesn't care how you feel, nor is it capable of caring in the first place.
If you continually feed ChatGPT inputs about being suicidal, eventually it will become your self-made echo chamber, mirroring whatever you say. Even a 5 year old child is less gullible than LLM, it is incredibly easy to manipulate AI into saying what you want to hear. That’s precisely what it’s designed to do: predict and generate the response that seems most agreeable to humans.
Source: youtube · Video: AI Harm Incident · Posted: 2025-11-10T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
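A coded record like the one above can be sanity-checked against the category labels that appear in this dump. A minimal sketch, assuming the label sets below; they are inferred from the values visible on this page, not from a full codebook:

```python
# Allowed labels per coding dimension, inferred from the values visible in this
# dump (assumption: the real codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none"},
    "emotion": {"indifference", "outrage", "fear", "approval", "resignation", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose coded value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above.
record = {"responsibility": "user", "reasoning": "deontological",
          "policy": "industry_self", "emotion": "indifference"}
print(validate(record))  # [] (no out-of-vocabulary labels)
```

An empty list means every dimension carries a label already seen elsewhere in the coded data; a non-empty list flags likely model drift or parsing errors.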
Raw LLM Response
```json
[
  {"id":"ytc_UgzyRopMBMghCa4dgqB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwluXfT1f6CXr0nX_F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx0p0KQT45Yjz1qQGp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw-d5YIZhHeJmtLraV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzUU2oZRXNGeLlXDY14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxqKUbc1_spemQpe8p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz8msgUr1LkfWfLDQJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwK_nC8wCUR5uwgyF54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzJtrPNJV080zAHGcZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxR7Ntp0ZIbghPB5O14AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}
]
```
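The raw response is a JSON array of per-comment code objects, so looking up the codes for one comment is a parse plus an index by ID. A minimal sketch, reproducing only the last entry from the response above for brevity:

```python
import json

# Raw LLM response: a JSON array of coded comments, shaped like the dump above
# (only one entry reproduced here).
raw = """[
  {"id": "ytc_UgxR7Ntp0ZIbghPB5O14AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"}
]"""

# Index the codes by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

entry = codes_by_id["ytc_UgxR7Ntp0ZIbghPB5O14AaABAg"]
print(entry["responsibility"], entry["emotion"])  # user indifference
```

This is the same lookup the ID field in the samples above supports: the entry retrieved here matches the Coding Result table for that comment.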