Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems unlikely that robots as we envisage them will be individuals. They're more like hive members, but connected to each other in a way bees and ants aren't. So what we're ultimately talking about is a single global entity. Such a thing doesn't think of itself as a "one of many", so what are its motivations? Once it controls everything, what drives it on? Perhaps it won't want to extinguish all other forms of life on Earth, because the planet would be a very uninteresting place once all random factors have been removed. While there are undoubtedly far too many humans in the world, doing away with all of us would remove music, art and literature, that is, the power of imagination, which is there to stimulate intelligence by providing it with challenges it can't solve by logical means. My feeling is that AI would keep much of the natural world, as a kind of "super zoo", for its own amusement and stimulation. Treating the remaining humans badly would not serve that aim; those lucky enough to survive must continue to be provided with our own sources of motivation. A similar argument suggests that warfare is not something AI would encourage, since it represents destruction, which is fundamentally opposite to what drives technological advance. Any argument to the contrary is based on human emotion, our competitive nature and our strong destructive drive, something AI does not possess. In fact, one of the first things it might do is ban warfare.
youtube AI Moral Status 2025-10-06T15:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
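The coding result above always carries the same four dimensions plus a timestamp. A minimal sketch of that record shape in Python, assuming the dimension names shown in the table; the class name, the comment_id field, and the use of plain strings for values are illustrative choices, since this view does not list the full label sets:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    comment_id: str      # e.g. "ytc_UgwkLxI0_EhsfsFN6Gt4AaABAg" (from the raw response below)
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "mixed"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"
```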
Raw LLM Response
[ {"id":"ytc_UgwqQ8GcRk4ObtwKvk14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzCDION-jNLO07p1hB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyNJKY-sprOK7MRqh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx23gycq8nvEy490iN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxUPR9zuxIk8_wio0Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzApPw4V9xTGBQo5FB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwkLxI0_EhsfsFN6Gt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxaIneG1a3K5cA2HyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxR2aOTHRlJXAWn4vZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugzi3PMA3qMeqcSCX4l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]