Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you think we are currently living in a simulation run by higher intelligence, what makes you think that a superintelligence created by us would kill everyone? If we believe a higher intelligence exists already, it hasn't killed us so far but rather created us. Similarly, if we look at how we have behaved towards animals (as a lower form of intelligence) - sure, there has been a lot of killing, but I would say there is a tendency towards wanting to live in harmony with nature and that is part of a learning process. Superintelligence would learn much quicker, so it would probably not depend on killing for domination. And who even says domination would be a desirable goal for a superintelligence? It might be naive, but I have the feeling the assumption that AI will kill everyone in the future is very much predicated on human behaviour and rather limited, human ways of thinking...
youtube AI Governance 2025-10-09T08:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyRNzClKEhbB1pK99B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzF2Y3ugnbV8EFz41R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyaTULpdOuR6e9_f3x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEQBJQZ5o-epBJB1F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjP7A_QFfiF-iNtx14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzR2l4OZ_-BgZ22In54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyNSh4XfCiKAx1pfox4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxsvIjn7nEemSsClD14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwaNfLwqVJ54vAeSZt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxfAlFsbmd6xiVEz3d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
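The raw response is a JSON array of per-comment codes, so the codes for any one comment can be recovered by indexing the batch by its `id`. A minimal sketch (the variable names and the abridged two-entry sample are illustrative, not part of the tool itself):

```python
import json

# Raw batch response as returned by the model, abridged here to two of the
# ten entries shown in the dump above.
raw = '''[
  {"id":"ytc_UgxjP7A_QFfiF-iNtx14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzR2l4OZ_-BgZ22In54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]'''

# Index the batch by comment id for direct lookup.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw)}

codes = codes_by_id["ytc_UgxjP7A_QFfiF-iNtx14AaABAg"]
print(codes["reasoning"])  # -> consequentialist
print(codes["emotion"])    # -> mixed
```

The entry looked up here matches the coding result shown for the quoted comment above (responsibility none, consequentialist reasoning, no policy, mixed emotion).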