Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Experts in the field have been warning about this from the start, including Alan Turing, who in 1951 warned of the loss of control of AI once it reached a certain level of intelligence. In more recent years, experts like Stuart Russell have been warning of the threat posed by deep learning and the AI it produces. An AGI agent doesn't even need to have hostile intent towards people to be an existential threat; it just needs objectives that are at odds with human interests. And because AI produced through deep learning algorithms is a black box, we have no way to determine what an AGI agent's objectives even are. Instrumentally convergent objectives, things like self-optimization, self-preservation, and resource collection, make it almost inevitable that AGI will come into conflict with human objectives. Self-optimization means that by adding hardware and through recursive learning, an AGI agent that was on par with or slightly more intelligent than a human could rapidly become thousands or even millions of times more intelligent than us. It would be able to predict anything we might attempt to counter its actions and formulate "solutions" to us that we can't even imagine. This won't be like The Terminator or The Matrix; it will be more like Independence Day, with an alien intelligence we will never out-think, one that would have no problem wiping us out like a human wiping out an ant hill.
Source: YouTube · AI Governance · 2023-05-02T21:5… · ♥ 16
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
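The table above is one record from the batch shown under "Raw LLM Response" below, and it appears to follow a fixed four-dimension codebook. A minimal sketch of one way to represent such a record in Python, assuming only the field names and values visible on this page (the class name is hypothetical, not part of the pipeline):

from dataclasses import dataclass

@dataclass(frozen=True)
class CodedComment:
    """One coded comment; mirrors one element of the raw response array below."""
    id: str              # YouTube comment id, e.g. "ytc_Ugzc6ZODGn5_N2v86X94AaABAg"
    responsibility: str  # observed values: company, government, ai_itself, distributed, none
    reasoning: str       # observed values: consequentialist, deontological, unclear
    policy: str          # observed values: regulate, ban, liability, none
    emotion: str         # observed values: fear, outrage, approval, indifference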
Raw LLM Response
[ {"id":"ytc_Ugyld8lS1Lbi7Q5aeA94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyY4FQS2tF-eMsRyJB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzc6ZODGn5_N2v86X94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy_vNAzoWqEYz3WU2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyQ2LvhgvLvci3Ly3R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwWzAKv0KE4l9ouHbZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxJdEehGqp52tqRi_d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugyum-s1Afq3LAOke9p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugy2fJU5ENxoYx3tiId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"approval"}, {"id":"ytc_Ugwm25GvSd0wTCeUTcF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]