Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm strongly against a worldwide ban on AGI, and the comparison to chemical weapons is precisely where the logic fails. It's a dangerously misleading analogy for a few key reasons:

First, we know with 100% certainty that chemical weapons are harmful. There's no debate. The risk from AGI, while potentially huge, is still a powerful unknown. Banning a technology out of fear stifles our ability to understand and control it.

Second, the consequence of failure is completely different. A ban on chemical weapons can tolerate a few rogue actors because their impact, while tragic, is localized. But with AGI, it only takes one person or secret lab succeeding for the outcome to be global and potentially irreversible.

A ban doesn't prevent this scenario; it makes it more likely by driving all research into the shadows, away from ethical oversight. This is the single most catastrophic mistake we could make. We don't need a ban that fosters secrecy; we need radical transparency and a global, collaborative race for AI safety.
youtube · AI Governance · 2025-08-30T12:4…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgyZ4PUEHG66hgEArRd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyA1ctoBg7F8Pybov54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw5nlzeB6I4uIwG4lR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyyKoTKaCJT8e-kctJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgziR6FR8egscLtuw214AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]