Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's not true all. I write client statements and use ChatGPT to check grammar. But ChatGPT thinks I'm suicidal or not in a good place as the statements are often in first person. AI would always ask me if I'm in distress and to call the suicide hotline or talk to someone. Everyone would blame someone/something else, but fail to take responsibility.
Source: youtube · AI Harm Incident · 2025-09-03T03:3… · ♥ 10
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyZsyaMVJG_qTm4DEV4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugz-g9oMIrM7FQqjIsd4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugz3A_7I4UVhZ-BeYuB4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugwpt0OLAlaGWxesIvJ4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwgoSPFrJsM9jCUYLl4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugz9AQ_hF-l375ShTy94AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxNIXPhF2J0fheaLz14AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgztvwjW9J7pTwIz66p4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyIFX2XLRnO7bkNHzV4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugxtryo_d6jSbb1D6A14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "mixed"}
]
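The raw response above is a JSON array of records keyed by comment id. A minimal sketch of pulling one record back out, assuming the field names shown in the sample (the `extract_coding` helper is hypothetical, not part of the coding pipeline):

```python
import json

# Trimmed copy of the batch response above; field names taken from the sample.
RAW = """[
  {"id": "ytc_Ugwpt0OLAlaGWxesIvJ4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugz9AQ_hF-l375ShTy94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

def extract_coding(raw: str, comment_id: str):
    """Return the coded record matching comment_id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

# The record for the comment shown on this page:
rec = extract_coding(RAW, "ytc_Ugwpt0OLAlaGWxesIvJ4AaABAg")
print(rec["responsibility"], rec["emotion"])  # distributed mixed
```

Matching by `id` rather than by array position is deliberate: LLM batch responses are not guaranteed to preserve input order.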