Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
lol this is crazy and so is this family!!! The parents seem to be the problem for the kid not ChatGPT. No normal guy in his 20s is gonna chat with a robot for 5 hours getting drunk( I think he said he was drinking can’t recall and not rewatching). That’s someone with mental issues either diagnosed or undiagnosed I don’t know not guessing. Heck what probably caused those issues is thanks to the parents and is not a freaking computers fault. Listening to all this makes you look at them like wtf. This has to be nothing more than a money grab and them probably deep down knowing they pushed him to it and they don’t want to live with it. I’ll bet they will find things in there where chat thought it was a weird joke and the very first part they showed come on I took that as chat was here for him ready to listen not here to see. Like I said I’m guessing those parents deep down know (or think but if they suing open I’m guessing it’s not a wrong deep feeling) they had something to do with it I dunno being mean growing up or not helping with college or putting him down or something who knows something the kid didn’t like anyway so they are doing this to sleep better. It’s like people who blame the gun for murder not the person and in other murders they blame the person not the knife or the car when the guy intentionally drove into a parade. They just want so hard to be right it’s someone or SOMETHING ELSES fault and not theirs for calling their kid a idiot loser ugly virgin or something or cutting him off until he gets As or whatever I don’t know but had to be them pushing the kid and causing mental issues. Again I don’t know th ones that do are the parents and boy and God we will never know but I’m guessing the law suit will show sides to the boy and his parents that shouldn’t be out there. Well unless one of them is evil then yes get it out there so we know and the boy didn’t die for nothing if it’s them or who ever.
youtube AI Harm Incident 2025-11-10T04:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyQcVnQlLiwJr1TY6p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyHxLCJ06iDSQ2iCRR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxlbaMpLk4VercCAsl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6ExEnwMWIIV9nLHR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwP1h4r6wIuKqCykX94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyU-vdVTnxFONX0VuZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxOuPLhw-n48AGFxa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyLbojhEkzj2Ga1ntx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxn8lOJ-vKC3TtER2l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzACuQxsPLbKETJYsp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
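To inspect the coded output for any one comment, the raw response can be parsed and indexed by comment id. This is a minimal sketch assuming the raw response is valid JSON in the batch format shown above; the variable names (`raw`, `by_id`) are illustrative, not part of the tool. The entry shown uses the id of the comment displayed in this section, abbreviated here to one record.

```python
import json

# Raw LLM batch response: one coding object per comment id
# (trimmed to the record for the comment shown above).
raw = """[
  {"id": "ytc_UgxlbaMpLk4VercCAsl4AaABAg",
   "responsibility": "user",
   "reasoning": "virtue",
   "policy": "none",
   "emotion": "indifference"}
]"""

records = json.loads(raw)

# Index by comment id so any coded comment can be looked up directly.
by_id = {record["id"]: record for record in records}

coded = by_id["ytc_UgxlbaMpLk4VercCAsl4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → user virtue none indifference
```

The printed dimensions match the Coding Result table above, which is the check this panel exists to support: the table values should always be recoverable from the raw response by id.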