Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Next step: target people so that their chatbot talks them into killing themselves. We're already at the point where chatbots are gaslighting people pretty effectively, basically induced schizophrenia.
Source: youtube — AI Harm Incident — 2025-11-09T08:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyf53I41b86SYRRlsx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxcZfcQ8UgsljGDELV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyu4QOc7lhbhdH0mRp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx1Fmq1baXieW6WnAR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzwqUbzo3C8X_U8SWp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzw5IZVrl5Z_O0fFsx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwNJYUipPZgLpzSTv14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwwlhAUEeicpzJXfPx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwQTEmbh-JSXzQ9nuR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxCUlkaamjyN5_-SXB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
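The raw response is a JSON array of per-comment codings keyed by comment id, so the coding result shown above is recovered by matching the comment's id against the batch. Below is a minimal sketch of that lookup; the function name `coding_for` is illustrative, not part of any tool shown here, and the two records are taken verbatim from the response above.

```python
import json

# Excerpt of the raw batch response, copied from the log above.
raw_response = """[
  {"id": "ytc_UgxcZfcQ8UgsljGDELV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyu4QOc7lhbhdH0mRp4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

def coding_for(raw: str, comment_id: str):
    """Parse the raw batch response and return the coding for one comment id,
    or None if the model did not emit a record for that id."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = coding_for(raw_response, "ytc_UgxcZfcQ8UgsljGDELV4AaABAg")
# For this id the batch codes responsibility=company, policy=regulate, emotion=fear,
# matching the Coding Result table above.
```

A `None` return flags a comment the model skipped, which is worth logging separately from a successfully coded comment.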