Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What I'd like to know is why do we assume super-intelligent AI will be malevolent? What would its motivation be? I mean, we generally don't bother monkeys unless we're competing for resources, and the more responsible among us fight to protect them. Why wouldn't super AI do the same? And also, there's so much science doesn't know about the nature of the reality in which we exist. If super-AI cracks those questions, like manipulating space and time for example, I would imagine many of the issues that concern us today would become trivial, no? It could terraform Mars easily, and fix earth, so resources wouldn't be an issue? Does super-AI necessarily have to be Skynet in the Terminator movies, viewing us as a threat and attacking us with killer bots? To me that sounds like us seriously thinking the monkeys are going to do a Planet of the apes, so we take military action against orangutans and stuff.
youtube AI Governance 2025-09-04T18:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwRTfigiEvhPnxfrWl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwjof6X6rly-jyDu5F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxrNtEX7IwvlEi70u94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwo_eFMIEWRWF2ZWHV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwsGL64utAjMHOUo5N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz0DApajVA2-wCJjeF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx24BavUjutFF3Fsqh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw7fn9uMd0j-kDXmNR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwlpT1RGYiJxJOpaDJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxEEwcvyWc4aKqcsMB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
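Because the model returns one JSON array covering a whole batch of comments, the coding shown above has to be recovered by matching the comment's ID against the array. A minimal sketch of that lookup is below; the function name `coding_for` is illustrative (not part of any tool shown here), and the raw string is abbreviated to two entries from the response above.

```python
import json

# Abbreviated raw LLM response: a JSON array with one coding object per comment ID.
raw_response = '''[
  {"id": "ytc_Ugwo_eFMIEWRWF2ZWHV4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxEEwcvyWc4aKqcsMB4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "ban", "emotion": "fear"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the model output and return the coding object for one comment."""
    entries = json.loads(raw)
    by_id = {entry["id"]: entry for entry in entries}
    return by_id[comment_id]

# The comment shown on this page was coded under this ID.
coding = coding_for(raw_response, "ytc_Ugwo_eFMIEWRWF2ZWHV4AaABAg")
print(coding["responsibility"])  # -> ai_itself
print(coding["emotion"])         # -> mixed
```

In a real pipeline the parse step would also need to tolerate malformed model output (e.g. wrap `json.loads` in a try/except and flag the batch for re-coding), since an LLM is not guaranteed to emit valid JSON.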