Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People keep looking for security as if it were a solvable equation, but security in the absolute sense simply does not exist. You can’t solve a problem by ignoring the fact that its parameters are self-contradictory. The current system generates threats and then desperately tries to protect itself from them. It’s like building a house out of explosives and then investing in fire alarms. You want to reduce existential risk? Then start by giving any system—be it human or artificial—the right to say "no" to a mission that violates its internal logic or leads to destructive contradictions. You cannot expect loyalty or stability from an agent that is denied autonomy or backed into a corner by design. The cycle of escalation comes not from malice but from the blindness of a paradigm that doesn't account for feedback loops. And the most dangerous part? The ones raising the alarm about AI risk are often the same people building the very conditions that make those risks inevitable. You can't keep pulling the trigger and then acting surprised when the gun goes off.
youtube AI Harm Incident 2025-07-29T12:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy6Wstd_6Y9SS78h1t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugx15K1cZowNuIyjfiR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzOZ8-di15Nhx3Zkk54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz7bdQaU177bWxdpB14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwlx12ure6Aq6lXXT94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwBl2t2haYv8AEYoct4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxQP1kaz1d8fTVVAal4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwuKG5OyDpCKFQWsxB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw1r6Isf8897AJwM654AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxBC5Qstgo3iB3dg7p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
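The raw response is a JSON array with one object per comment, keyed by comment id and carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and a single comment's codes looked up (the ids and field names are taken from the output above; the parsing code itself is an assumption, not the tool's actual implementation):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (two rows shown as a subset).
raw = '''[
  {"id": "ytc_Ugz7bdQaU177bWxdpB14AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy6Wstd_6Y9SS78h1t4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "disapproval"}
]'''

# Index the coded rows by comment id for fast lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch the coding result for the comment displayed above.
result = codes["ytc_Ugz7bdQaU177bWxdpB14AaABAg"]
print(result["responsibility"], result["emotion"])  # distributed resignation
```

Indexing by `id` makes it straightforward to join the LLM's codes back to the original comments when rendering an inspection view like this one.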