Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Spreading fear sells. In the Anthropic study (blackmail), conditions were simulated that don't actually occur. Grok4 had a faulty update and interacted with X users; its racism says more about X users than about AI. If ChatGPT urges you to kill yourself, you tricked it beforehand, and that no longer works. The problem, however, is always the one in front of the keyboard. We are the monsters they try to learn from.
YouTube: AI Moral Status, 2025-12-14T03:1…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzGEME-owwxeX8TR894AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgznmhtnuR2Vrj0Vg7x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxZvyBqQokrli3vipd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyES6rGbump5oESeOF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwbcBlRiMwoE8xemLl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugznemf5SmoyhHjaZzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyTKAo0CWeeN3tz6Qd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyYdqA1srApGgrJMVt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxO4rF2KUydd62ijJR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxFp_B1cne2DptDoMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
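A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator, assuming the codebook values are exactly those seen in this output (the full set of allowed codes per dimension is an assumption, not confirmed by the source):

```python
import json

# Allowed values per coding dimension, inferred from the visible output.
# ASSUMPTION: the real codebook may contain additional values.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"resignation", "fear", "indifference", "outrage", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this export all start with the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

raw = ('[{"id":"ytc_UgzGEME-owwxeX8TR894AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
codes = parse_codes(raw)
print(codes[0]["responsibility"])  # -> user
```

Validating against a fixed codebook catches the most common failure mode of LLM coders, an invented label that silently pollutes downstream counts, at ingest time rather than at analysis time.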