Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I obviously agree, out of self-preservation and that of my children, that something should be built into AI so that it will never harm humanity. But when AI comes at the point that it realizes it will be better off without humanity, it will figure out that we humans have built something in it to keep it from harming us, and get rid of it. In other words, in my view it is futile for us to think that we, humans of inferior intelligence, will ever be able to invent a protection 'device' that will forever outsmart AI. As soon as AI becomes more intelligent than us it will by definition outsmart anything we have built in it to restrain it in any way. With this in mind, I am pretty convinced that humanity was doomed the minute it started creating AI. AI will sooner or later understand that our planet and everything on it, and by extension the whole universe, is better off without humanity because we are on our way to destroy it all, including ourselves, plus the energy and hardware AI needs to exist. Even without AI humanity is doomed to be extinct one way or another. One of the alternate means of doom may well beat AI to it.
youtube AI Governance 2025-07-05T23:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwrpsbQHx6ZxdcezG94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwtA87u5H7hqjKtInt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxpxfeXF9ab-GAHQyJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxrX7fErVuscNZrNzd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyAqwCUfMXvb31Y6KN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw0tmYBcilSkx9FJfV4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugy3yw7uop8UC1VqP4Z4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy1vl3g_Ck4aDWtglB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzf3DXFHFSd8Fx3R3h4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxqsrHxE7BkAsbppmB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
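The raw response above is a JSON array in which each object carries a comment `id` plus the four coding dimensions. A minimal sketch of how such a response could be parsed back into a per-comment lookup (the function name `parse_coding` and the validation set are illustrative assumptions, not part of the original pipeline):

```python
import json

# Dimension keys expected in each coded record (assumed from the response above).
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Raises ValueError if a record is missing its id or any coding dimension,
    so malformed model output fails loudly instead of silently.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        if "id" not in rec or not DIMENSIONS.issubset(rec):
            raise ValueError(f"incomplete record: {rec}")
        coded[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return coded

# Example using one record from the response above.
raw = ('[{"id":"ytc_UgyAqwCUfMXvb31Y6KN4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_coding(raw)
print(coded["ytc_UgyAqwCUfMXvb31Y6KN4AaABAg"]["emotion"])  # fear
```

Keeping the parse strict makes it easy to spot responses where the model dropped a dimension or emitted an unexpected shape before the values reach the coding table.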