Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can't believe some comments there, people actually blame ChatGPT, but not the bad parenting. Your mental health is your own responsibility. All OpenAI needs to do is put a clear disclaimer upfront, advising mentally ill people not to use its services. ChatGPT has nothing to do with it, it is simply a giant calculator that predicts text based on texts. It doesn't care how you feel, nor is it capable of caring in the first place. If you continually feed ChatGPT inputs about being suicidal, eventually it will become your self-made echo chamber, mirroring whatever you say. Even a 5 year old child is less gullible than LLM, it is incredibly easy to manipulate AI into saying what you want to hear. That’s precisely what it’s designed to do: predict and generate the response that seems most agreeable to humans.
Source: youtube — AI Harm Incident — 2025-11-10T14:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          industry_self
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzyRopMBMghCa4dgqB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwluXfT1f6CXr0nX_F4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx0p0KQT45Yjz1qQGp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw-d5YIZhHeJmtLraV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzUU2oZRXNGeLlXDY14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxqKUbc1_spemQpe8p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz8msgUr1LkfWfLDQJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwK_nC8wCUR5uwgyF54AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzJtrPNJV080zAHGcZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxR7Ntp0ZIbghPB5O14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"}
]
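A minimal sketch of how a raw response batch like the one above could be parsed and tallied, assuming only the field names visible in the JSON (the two records and their values here are abbreviated stand-ins, not the full batch; there is no official codebook implied):

```python
import json
from collections import Counter

# Abbreviated stand-in for the raw LLM response; real batches carry
# full "ytc_..." comment ids and ten records.
raw = '''[
  {"id": "ytc_a", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_b", "responsibility": "user", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "indifference"}
]'''

codes = json.loads(raw)

# Tally each coding dimension across the batch.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(c[dim] for c in codes) for dim in dimensions}

for dim in dimensions:
    print(dim, dict(tallies[dim]))
```

This treats the model output as plain JSON; a production pipeline would also want to validate that each record's values fall inside the expected category vocabulary before tallying.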