Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't see why an ai would choose to kill us all even if we don't get it to emphasize with us, from a purely logical viewpoint, causing an extinction and cleaning up after it to make way for factories and power plants would take a lot of time and resources, making it an inefficient outcome along with only a temporary one as machines need resources to stay functioning as well so even with us dead the supplies will dwindle. I think a superintelligent ai would consider this outcome and seek out a more renewable one such as mastering nuclear energy then it would look to space to find more material since not only would killing us only delay the inevitable until it would have to anyway, it could probably work out a way to forcefully rearrange atomic structure, effectively allowing it to turn any material into what it would need.
YouTube · AI Governance · 2025-08-26T16:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzaeytlM0uEsLfW7VJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzu02jCOt1G3Ax824p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgysbT0gJvfKrpCcL9l4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz_SWubhZLrOlG3KJB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyBotIyfY6pyui3fTB4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
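A raw response like the one above can be parsed back into per-comment codes with standard JSON tooling. The following is a minimal sketch, not the project's actual pipeline: it uses the field names that appear in the response, but the `by_id` index is a hypothetical convenience for looking up one comment's coding.

```python
import json

# Raw response as emitted by the model (first two entries from the log above;
# each object codes one YouTube comment on four dimensions).
raw = '''[
  {"id": "ytc_UgzaeytlM0uEsLfW7VJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzu02jCOt1G3Ax824p4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

codes = json.loads(raw)
# Index by comment id so a given comment's coding can be looked up directly.
by_id = {c["id"]: c for c in codes}
```

With that index, `by_id["ytc_UgzaeytlM0uEsLfW7VJ4AaABAg"]["emotion"]` returns `"indifference"`, matching the Coding Result table.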