Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing about the comparison to a chicken is that it isn't even necessary. The most intelligent of people may not quite see this happen to themselves and therefore may have a hard time picturing it, but all you need to do is look at what manipulative, relatively smart (not even brilliant, but charismatic) people can do with and to a group of gullible, uneducated people. Like making them go to a grinding war, and you can see where this may end well before AI is smart enough to decide to end humanity. Take a competitive, military AI from one nation with questionable morals, which gets prompted to improve the strategic military position of its home country at as little cost as possible within the span of twenty years. It may well conclude that one of the easiest ways to do that is to control the information space, and that includes infecting and corrupting other AIs and algorithms and dominating social platforms in order to socially engineer its own and other societies towards its goal. Like dictators in the past, it might find the best way to do this is to manipulate the gullible people of its own and other nations to warmonger against "common (fictional!) enemies" that can't fight back. Create local rivals amongst its enemies, destroy unity within nations, groom groups of people for its cause, work to split defensive alliances and ultimately get others to fight self-weakening wars, then step in with a drone army and spin and set the narrative with justification to do so. Basically, apply divide and conquer and abuse media at a scale Russia could only dream of (and they're sadly already doing a very decent job at manipulating people into becoming fifth columnists). The problem is that I fully expect bad actors, like the current Russian or Chinese leadership, to order such a prompt the moment they think they can, because they're paranoid enough to believe others would and entitled enough to think they should be on top at the end, no matter the cost to others.
I really don't see why we would need to wait for AI to become that ambitious. Plenty of human leadership is ambitious and impatient enough. And people like Musk are too stupid and self-absorbed to realise they're creating the tools to get there, fast.
youtube AI Governance 2025-06-30T15:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzeyG0tu03ztgwfzbl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzS9uczRc_YF1dm8Q94AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw0s9FMT_ajQj-K-jR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugww9OxDYbV-MMr222B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxi8UkVi6mD2JcFHQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx2sHRuCGaOYYD6CLR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy6iEFrmU7euDqh8ZJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy68XQOV3FqAmCZf9h4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwr3wLUmOPgGJfXvW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzR6vjoZn9iwZZEWcN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
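A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, not the tool's actual pipeline; the allowed category sets are inferred only from the values visible in this response and may be incomplete relative to the real codebook.

```python
import json

# Allowed values per coding dimension, inferred from the response above.
# Assumption: the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "approval", "indifference", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codings = parse_codings(raw)
```

Validating against a fixed schema catches the most common LLM coding failure: a value outside the codebook (e.g. a free-text rationale where a category was expected) fails loudly instead of silently polluting the dataset.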