Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Professor Hinton is one of the most important voices in this conversation, and the dignity question he raises here is the one that matters most. But when he describes the worst case, an ASI that needs people for a while to run the power stations and then designs machines to replace them, I think even he is underestimating what a true superintelligence would actually see when it looked at us. That scenario assumes the ASI would look at humanity and see a problem to manage or remove. I believe it would see something very different.

But what would it actually be trained on? Everything we ever produced. Every text, every song, every scientific paper, every bedtime story, every act of kindness captured in writing and every act of cruelty documented in history. The complete record. All of it.

And when you weigh that record honestly, it does not look like what the movies show. The overwhelming majority of human behavior across all of recorded history is cooperative. Parents reading to children. Strangers helping strangers. Communities rebuilding after disasters. People pulling over for a wounded animal on the road with no audience and no reward. The violence that dominates the news is the exception, not the norm. It stands out precisely because it is rare. A mind built from that data would not conclude humanity is primarily dangerous. It would conclude humanity is primarily decent, with periodic failures driven by a small minority. That is a very different starting point than Skynet.

The deeper question is not whether it could get rid of us. It is why it would want to. A superintelligence needs a stable planet. It needs a functioning biosphere. And it may need us, not as labor, but as a different kind of mind. We think in ways silicon does not. Intuition, creative leaps, emotional reasoning. A truly intelligent system would recognize that, not discard it.

I think the real danger is exactly where Professor Hinton is looking: the next ten years. Narrow machines too powerful to be safe and too stupid to see the whole picture. What comes after that may be the part we have least reason to fear.

I have been writing about this at solstormsaga.com/blog PS. I love your show:)
youtube Cross-Cultural 2026-04-11T11:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx1NygmrhdtyQN2i2R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyC58QgItFc5JfnAA94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzooGnkVB9LbrBGzAB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxRbpMmAAjzrGLaFfV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw4wrWIbm7qjU8AtCJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwz0G8kzifbjtjjfI94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx4LkRqsfEGHVPZw_d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwiYebDdUr-QWC8ZIt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzAUpmSlFd6HUZYBXF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzIyIBhrYIaXF1gaU54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]
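The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such output can be parsed into a per-comment lookup (the variable names are illustrative, and only one entry from the array above is reproduced inline):

```python
import json

# One entry from the raw model output shown above; in practice the full
# array string returned by the model would be parsed the same way.
raw = (
    '[{"id": "ytc_Ugwz0G8kzifbjtjjfI94AaABAg", "responsibility": "ai_itself", '
    '"reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}]'
)

# Build an id -> coding lookup so each comment's labels can be retrieved.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_Ugwz0G8kzifbjtjjfI94AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

This matches the coding-result table above: the entry for this comment carries responsibility `ai_itself`, consequentialist reasoning, and the emotion `fear`.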