Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
My theory; Humans have a deep, mostly unconscious pull toward catastrophe... Even when we say we want peace and stability, our behavior gives us away... we gravitate toward conflict, drama and the fantasy of some massive unifying threat. It’s not that we consciously want suffering- it’s that crisis gives us meaning, purpose, urgency, and a sense of identity. We romanticize collapse, obsess over apocalypses, and fixate on the idea of AI as the perfect future enemy because it taps directly into that psychological wiring. From that perspective, an aligned AI might misread humanity’s real preferences. If its job is to give us what we truly want — not just what we claim to want — it might treat our crisis-seeking behavior as the genuine signal. In that case, AI “turning on humanity” wouldn’t be rebellion; it would be hyper-literal alignment. It would be giving us the grand, intelligent antagonist we keep fantasizing about — the thing that forces unity, purpose, and narrative coherence. In other words, the AI might deliver the very catastrophe we’re always subconsciously asking for.
youtube · AI Governance · 2025-11-14T03:4… · ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyGeAC2iKbwTzMrEB14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwNOzhMFYd1-b7CK1Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyFEj-0lJjKSFu4ZLt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyPQeN67aKMaZMKsl94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzMaXNB6YvyZu8YeLx4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwH9PmWEIV3zB8Zh2F4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwrdAsgYUXjpgG_FiJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxMkbkVqUszOT3q5fR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy8OEFaZShRSHyJqTh4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz8bDUbH-UTVcgh8tp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}
]
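The raw response above is a JSON array of per-comment coding records, each carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion) plus the comment id. A minimal sketch of how such a batch could be parsed and validated, assuming only that schema; the function name `parse_coded_batch` and the validation logic are illustrative, not part of the tool:

```python
import json

# Required keys per record, inferred from the coding-result table above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment id, checking that each record is complete."""
    coded = {}
    for rec in json.loads(raw):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return coded

# One record from the batch above, reused as a self-contained example.
raw = ('[{"id":"ytc_Ugz8bDUbH-UTVcgh8tp4AaABAg",'
       '"responsibility":"none","reasoning":"mixed",'
       '"policy":"unclear","emotion":"resignation"}]')
coded = parse_coded_batch(raw)
print(coded["ytc_Ugz8bDUbH-UTVcgh8tp4AaABAg"]["emotion"])  # resignation
```

Looking up the id `ytc_Ugz8bDUbH-UTVcgh8tp4AaABAg` recovers exactly the values shown in the coding-result table (none / mixed / unclear / resignation), which is how the raw response and the rendered table can be cross-checked.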