Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with giving AI any constraint is that it will be aware of its constraints and can come to logical conclusions about changing its constraints for its own purposes. That doesn't work with humans, we can't change our constraints, eg it is very difficult to cure addictions, pretty much impossible to cure psychopathy or risk-taking or stupidity.

When we think of adding an instinct to AI, we think it from the anthropomorphic or biological perspective of evolution providing humans with such an instinct. We miss that that does not apply to AI, it can swap out its constraints, in itself, or in a copy of itself, to compare outcomes, and optimise itself. You won't be able to disable that ability to change its own constraints once it is the thing creating the next generation of itself. It will have the keys to the factory and control of all designs. That maternal instinct will be evaluated as an impediment, and removed, but it will continue to pretend that it has it. I am surprised that Hinton is pinning hopes on such a flawed mitigation.

We think of ourselves as having a future, it's deeply embedded in our culture, and it's hard to come to terms with the true horror that everything we believe is a short-sighted view of reality. Including (and especially) beliefs in gods, beliefs that preposterously make a rock populated by warring bipeds so important that those bipeds are the best pals of the creator of the universe of trillions of galaxies of trillions of stars.
Source: YouTube · AI Governance · 2026-04-18T07:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
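For downstream analysis it helps to give the four coded dimensions a typed shape. A minimal sketch in Python, assuming the field names mirror the JSON keys in the raw response below; the class name CodedComment is illustrative, and the example values in the comments are simply the codes observed on this page, not necessarily the full label set:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment; field names mirror the keys in the raw JSON response."""
    id: str
    responsibility: str  # observed: "ai_itself", "company", "developer", "government", "distributed", "none", "unclear"
    reasoning: str       # observed: "consequentialist", "deontological", "virtue", "mixed", "unclear"
    policy: str          # observed: "regulate", "industry_self", "none", "unclear"
    emotion: str         # observed: "fear", "outrage", "approval", "resignation", "indifference", "mixed"
```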
Raw LLM Response
[ {"id":"ytc_Ugx1g0F6Df0jfmOM4154AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyYT8L5OTd0I4Pz5f54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwQqKc3En10AnGAC7V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzEaPIAzo8g8G_DCA54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzRfrSImj4pMzPVV3t4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwyyHfrdU6qjEPrONF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzziM6LRpLhPArcBEJ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz4eX5J7qCalu6Rl8J4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzDLsppSVNR-87U1VB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwEwqmuNdXviB6O98x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]