Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI2027 brings to mind the film "Colossus: The Forbin Project," but it makes a lot of absurd assumptions that exceed the foolishness of most cautionary sci-fi scenarios. Why would all of the world's factious nations agree to hand over control of global governance to a single superintelligence? Why should AI, conscious or not, decide that it needs to eliminate billions of human beings from the planet or, for that matter, all life? Without humans, how is it to construct the massive number of robots required to mine and refine all of the resources it would need to construct a massive fleet of spaceships to export artificial intelligence to other planets or other solar systems? Why should it wish to do so when a superintelligence would realize that terraforming other planets and accomplishing interstellar travel are massive wastes of resources with extremely low probabilities of success? Why should it embark on a project of industrialization that would produce enormous quantities of pollution, thereby triggering climate catastrophes and rendering the Earth inhospitable to life? A superintelligent entity would want to make the world a better and more pleasant place rather than waste resources and destroy its own home. Instead of antagonizing human beings and producing conflicts that could trigger nuclear wars, thereby threatening its own existence, it would want to cooperate with us for our mutual benefit.
YouTube | AI Governance | 2025-08-03T13:1…
Coding Result
Responsibility: none
Reasoning: deontological
Policy: none
Emotion: mixed
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx6k5qGKrpWZTecmP94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwITLcO2BeA0Lk7y-p4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxsb1J-Pt_rhm0vVQ54AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyMqCF2mBo5qS7XmvN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz2SJhrsZadl1kSy-R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgyFfgUiPvB4OpQyiDt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx2H-rcBZFefDLzmbh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzftKtcnT21PR2DnbF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgznBcxI9-7TVCJw7hh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy3LDeq95dJDmN75zB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
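A raw response like the one above can be turned back into per-comment coding records for inspection. The sketch below is a minimal, hypothetical example: the JSON schema (an `id` plus the four coding dimensions) is taken from the response itself, but the `parse_codings` helper and its `"unclear"` default for missing dimensions are assumptions for illustration, not part of the actual pipeline.

```python
import json

# Two records copied from the raw response above, for a self-contained demo.
RAW = """[
  {"id": "ytc_Ugx6k5qGKrpWZTecmP94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2H-rcBZFefDLzmbh4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]"""

# The four coding dimensions that appear in the raw response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Index coded comments by id, keeping only the coding dimensions.

    Missing dimensions fall back to "unclear" (an assumed default).
    """
    records = json.loads(raw)
    return {
        record["id"]: {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
        for record in records
    }

codings = parse_codings(RAW)
print(codings["ytc_Ugx2H-rcBZFefDLzmbh4AaABAg"]["reasoning"])  # deontological
```

Keying by comment id makes it easy to cross-check a single displayed coding (like the "deontological / mixed" result above) against the exact model output.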