Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Okay, fine, let's play Devil's Advocate for a moment; how is this OpenAI's 'fault', exactly? How is ChatGPT actually responsible? Obviously, ChatGPT can't be held liable so they're trying to hold OpenAI liable but what did they do, exactly? They didn't have adequate safeguards in place, right? But, they *do* have safeguards in place -- lots of them -- but they can't defend against literally everything. What safeguard does your car have against a flamethrower? Should car designers be held liable for not adding automatic braking if you exit the driver's seat? Such a contraption would certainly help prevent cars from rolling downhill and becoming a potential danger to others, so should they be forced to start doing that via litigation? What is suing OpenAI intended to do other than get money out of a tragic situation and pretend to punish a company for something that isn't even necessarily their fault? I am decidedly anti-capitalist and even more anti-corporation, yet even I have trouble laying such blame in this case.
Source: YouTube · AI Harm Incident · posted 2025-11-08T07:3… · ♥ 3
Coding Result
Dimension        Value
Responsibility   company
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxYnCllpe-1ngRKPRJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy4BCJ-3I5BfcMEY194AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw06TApPIT5BNJSZKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy00m3KuQo0jKTYT3J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwY7Blx1KpBTCTsRX54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyZfemiOS6YIawU00V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugxf5u84wVR8dqkiamx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw9ILIrCJ46Y9JvSzd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx1SpV47j49VjGCRRF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxjIK3FMh1Rd-oiMvl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
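The coding result shown above is obtained by matching the comment's id against the JSON array the model returned. A minimal sketch of that lookup, assuming the response is valid JSON as displayed (the helper name `code_for` and the two-record stand-in response are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative stand-in: two records copied verbatim from the raw response above.
raw_response = """
[
  {"id":"ytc_UgxYnCllpe-1ngRKPRJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxjIK3FMh1Rd-oiMvl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
"""

def code_for(comment_id: str, response_text: str):
    """Return the coding record whose id matches, or None if the model omitted it."""
    records = json.loads(response_text)
    return next((r for r in records if r["id"] == comment_id), None)

# The last record's id corresponds to the comment displayed above; its values
# (company / mixed / none / mixed) match the Coding Result table.
record = code_for("ytc_UgxjIK3FMh1Rd-oiMvl4AaABAg", raw_response)
print(record)
```

Keying on the model-echoed `id` rather than array position guards against the model dropping or reordering comments in its batch response.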