Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Exactly! There should be a disclaimer and watermark on all video media whether or not is entertainment or educational, I can tell. Get them confused every time It seems like people would rather be heard saying something then caught doing something. Like they will say "sure I'll help you move tomorrow they never show up. Now I feel old saying that I do what I say if I say something it has some connection to a thought or reason or meaning not just a dialogue. People talk to these chat bots all the time… I might just be authorizing into a YouTube comment section, but somebody will flame me or stop to think themselves Wait maybe that's the reason, maybe "AI" is self-defeating or actually Testing us? Are we actually getting checked by AI? Because every time I submit feedback I never see the same results again but it's always very blatant there's never any doubt about whether it's actually correct or not it is just straight up focus hallucinations as they call itbogus man with all this stuff they still can't make a talk to type
youtube AI Harm Incident 2025-11-25T09:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwfP-sb-Wcsqas8_-p4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyrNaRDgHaqnr3uFgN4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzTwHcQKbDCZwKtydV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxQ6xP6vP3pfwts9Ad4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgziGOKkuG7_DLguxrJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgypgwDOcCRvba3p3DF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxsfo3Je48bkuo_hbJ4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwvBGKhwGXXOl35j1R4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwZLgzdY5rCA7370FZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgzKACa5tI91hYNtFeJ4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
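The raw response is a JSON array with one coding record per comment, keyed by comment id. A minimal sketch of recovering a single comment's coding from such a response (the ids and dimension values below are copied from the response above; the array is truncated to two records for brevity):

```python
import json

# Truncated excerpt of the raw LLM response shown above (2 of 10 records).
raw = """[
  {"id": "ytc_Ugxsfo3Je48bkuo_hbJ4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwfP-sb-Wcsqas8_-p4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]"""

# Index the records by comment id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Look up the coding for the comment displayed on this page.
coded = records["ytc_Ugxsfo3Je48bkuo_hbJ4AaABAg"]
print(coded["responsibility"], coded["policy"])  # → distributed regulate
```

In a real pipeline the parse would be wrapped in error handling, since raw model output is not guaranteed to be valid JSON.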