Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But what would be the driving urge within General AI? In humans, we are one outcome of natural evolution. Our intelligence, such as it is, developed "on top of" our reptile brains, as it were. For us, intelligence developed as an adaptive advantage, which over time more greatly enabled the realization of our animal urges (what I call the "seven urges": Feed, Fight, Flight, Family, Friend, Foe, and F#@k). Each of these urges serves to advance the reproduction/survival of our genes in one way or another. We don't spend much time "thinking" about our animal urges; we "feel" them. Our feelings are the raw stuff of the life urge itself.

The real question I have about AI is what drives it? We cannot simply assume that some general AI would want what we want. In fact, we can't assume that general AI would want anything at all. For all we know, general AI might be COMPLETELY INDIFFERENT as to whether it survives or not. Furthermore, we still don't know how or why WE experience consciousness/self-awareness at all. Does consciousness/self-awareness require a biological substrate? Can a digital silicon-based neural network be "self-aware"? Can it experience, "feel", any of the things we feel/experience? WE feel the "urge to survive" because it was selected for by evolution: organisms that did not want to survive died, more quickly and easily than organisms that felt an urge to survive.

Without denying in any way the gravity of this discussion, why should we simply assume that a general AI has an urge to destroy us? We humans are all potential sociopathic killers (most of us, but not all, learn to hold this in abeyance). I propose that this potential comes from the reptile brain. Well, general AI doesn't HAVE a reptile brain. General AI, presumably, doesn't experience anything like our "seven F's". So, as a point to raise here: what do we think general AI would want?
Source: youtube · AI Governance · 2026-04-07T15:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyEC1LKTuYRZr_0I_V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UgwuDluXcgFJm22OaVx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgyUR5ePj-znxaPbOaZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx-WUd01SaIH7yGOW54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzhtWjVzGeK6QLLGqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxFZfpfCiZGF2E7v754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugz91DQJgWc2HcULoLd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwZZJtneETSlTus_-p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxxtX4sLqB0QYN4DOx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy1gaBhNUmJMo2x1Yt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]