Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These are quite daring predictions for 2025/09. Anyway, I feel the dates do not matter; whether it is 2027, 2030, 2035, or 2045, more and more people realize two things:
1. These changes (the technological singularity) are coming in our lifetimes.
2. The evolution of these systems is not under cautious control, whether for money, efficiency, power, or, paradoxically, for relative safety from others who may develop them.
Less obvious:
3. We may 'solve' alignment for AI 1.5, 1.7, and even 2.4, but gradually these systems will become more and more autonomous, powerful, and skillful relative to us.
4. Suppose the cutting-edge civilization departs from humans, but humans manage to survive in some areas. How long would the exponential environmental changes caused by a technological, human-independent civilization allow humans to survive on this planet?
- Due to the technological singularity, with evolutionary revolutions happening in shorter and shorter intervals, the cumulative chance of extinction of Homo sapiens approaches certainty within this century.
- Our only chance is a collapse/breakdown/reset of this exponential change to buy ourselves more time. But do we want to risk it? Do we want to become like Saturn eating his own children? https://en.wikipedia.org/wiki/Saturn_Devouring_His_Son If we succeeded and killed all the AI, life on Earth would be less capable of avoiding crises like asteroids or recurring volcanic activity that would kill most life here anyway. Is that a success?
youtube · AI Governance · 2025-09-08T09:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
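In code, one coding result is just these four dimensions plus the coding timestamp. Below is a minimal Python sketch, assuming nothing beyond the fields shown in the table; the class name CommentCoding and the example category values in the comments are illustrative, not a confirmed codebook.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CommentCoding:
    """One coded comment: the four coding dimensions plus the coding timestamp.
    Illustrative only; field names mirror the table above."""
    comment_id: str
    responsibility: str  # e.g. "distributed", "government", "none"
    reasoning: str       # e.g. "consequentialist", "virtue", "mixed", "unclear"
    policy: str          # e.g. "regulate", "liability", "none"
    emotion: str         # e.g. "fear", "approval", "resignation"
    coded_at: datetime   # when the coding was stored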
Raw LLM Response
[ {"id":"ytc_Ugw21P3SKzqfvXKdNJN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzsK4SMP9Hfd8r5kQd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy7kPeIr1WkgEGBCrN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy6ljr8q_fgHyi-jVt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxzrtCuwbsFrvUAVx94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxVx_leGNW8Q34dtXR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzbqJYZjA0lDCBSRK54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxsLj1r0TR8Le01blR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyfI-NxdqKTphT8crR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwbt5xsfxev7kPMdWx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"} ]