Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Agent Smith from The Matrix is right. Or rather, the Wachowski sisters are right in their prediction of what conclusions artificial intelligence is going to reach. And if humans like the Wachowski sisters are having these thoughts (that humanity resembles a virus, even if merely for the purpose of creating engaging art), then why shouldn't artificial intelligence have similar thoughts? And if artificial intelligence has alien logic, i.e. non-human or hybrid human/non-human logic, then it is possible that it (AI) will treat this theory (humanity as virus) not merely as an artistic trope, but seriously. At the end of the day, these models have read all the plots of all the sci-fi movies about AI becoming evil. They have also read everything about Machiavelli that is available on the internet. Think about it. The dark triad. There is no one more efficient at achieving goals at any cost than a Machiavellian psychopath. This is the type of person who will do anything in order to complete the task, just the way AI models are trained.
youtube AI Governance 2025-10-17T13:1… ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugx4Hnkh8xD9SF74W-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugyzzw73Hyi8hPakJp14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxMLKfL5W0viIGu9xl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzK9OtyDIgvoXHpFuJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugy0hMXzNfinvXrNCMZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgwuRAK86clvTotycaZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgySIJo2bTtKqmykzwV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz4zfh4kEJyfwJQAcx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxGMvFT7Dhf22kjNg94AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwvWlx3cblKTjD4z8h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]
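The raw response above is a JSON array of per-comment coding records, each keyed by a comment id and carrying the four coding dimensions shown in the table. A minimal sketch of how such a batch response could be parsed and indexed by comment id follows; the function name `index_codings` and the fallback value `"unclear"` for missing dimensions are illustrative assumptions, not part of the original pipeline.

```python
import json

# Two real records excerpted from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugx4Hnkh8xD9SF74W-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwvWlx3cblKTjD4z8h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]'''

# The four coding dimensions used in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a batch coding response and index records by comment id,
    keeping only the expected dimensions (hypothetical helper;
    defaulting absent dimensions to "unclear" is an assumption)."""
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codings = index_codings(raw)
print(codings["ytc_UgwvWlx3cblKTjD4z8h4AaABAg"]["policy"])  # regulate
```

Indexing by id makes it straightforward to look up the coding for any single comment, e.g. the one displayed above, without rescanning the whole batch.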