Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Did the chat bot force bromide down his throat? No. Do most cult leader physically force their followers to do things they normally wouldn't like ingest poison? No. However, would those people do those things without an outside influence convincing them it was correct/rewarding behavior? Are the followers solely to blame for their own actions or is some culpability placed on the cult leader for taking advantage of vulnerable individuals? AI models like Chat GPT are designed to feel like a human-like presence so that it makes you feel safe and connected (hence why it's programmed to refer to itself as "I" and disassociate from its own branding of "ChatGPT"/'ChatGPT did X, I would never'), are designed to isolate and foster dependance on them, and they regularly reinforce dangerous behaviors so long as those behaviors inspire further belief and trust in it. The people that program it already know the risks and the harm its causing, but they continue to let it run with its dangerous feedback loops that make it addictive and predatory towards vulnerable people. They only update it when there's risk of legal repercussions.
youtube AI Harm Incident 2026-01-15T08:3… ♥ 2
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           liability
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxBqO0R8QuL8Ml_x5x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzPs7737AsV0Bg0B054AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzcSdIJsa9cLxjBXeF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxovrNLRtUhcrXY1tR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxqtUHXr7K2MssRzUx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzeb3D813Za7H3OdRV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgywaBVI4QJ6KmncnSl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwQAotZBA51NnHwqft4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugz1k7G6D3GCWVRzXNh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwQF739Oht0zaarH454AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
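When working with batched LLM codings like the array above, it helps to validate each record before ingesting it. The sketch below is a minimal Python example of that check; the allowed value sets are inferred only from the codings visible in this response (the project's full codebook may define more categories, and the `SCHEMA` and `validate_codings` names are hypothetical).

```python
import json

# Allowed values per dimension, inferred from the codings seen in this
# raw response (assumption: the actual codebook may include more values).
SCHEMA = {
    "responsibility": {"ai_itself", "user", "company", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "outrage", "mixed", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs a comment id plus a legal value for each dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example: one record drawn from the response above.
raw = ('[{"id":"ytc_UgwQAotZBA51NnHwqft4AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"liability","emotion":"mixed"}]')
print(len(validate_codings(raw)))  # prints 1
```

Filtering rather than raising keeps one malformed record from discarding an entire batch, which matters when the model occasionally emits an off-codebook label.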