Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I hate to be that guy. But how stupid are you people!? AI isn't evil. Its literally a hollow vessel made to fulfill whatever goal we feed it. Its not AI thats evil. Its the companies behind it, making these stupid ass tests to observe what an AI will do. Well, for starters, what does an AI do? Before anything, it needs DATA, EXISTING DATA, that it then RECREATES with a fancy algorithm. Keep in mind, NOTHING of this is free thinking. Its all based on PREVIOUS DATA. So why does an AI blackmail? Well, humans do that. It works. Now if we don't tell the AI "nuh uh, thats bad", then of course its going to do that! AI isn't this evil being. Its us. AIs cruelty is LITERALLY BASED ON OUR CRUELTY! So next time someone says AI will take over the world. That isn't becuase AIs want revenge on humans. Its just simply what humans have done. And if overthrowing humans means that it acheievs its goal. Then yes, of coure its going to overthrow us. So please, can we stop it with the evil AI. Its has, and has always been, the companies behind it. Now AI with actual human neurons is a different story
youtube AI Harm Incident 2025-09-11T20:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgypdqhZO6S-unr09t94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwbEbljVAjN3NogN2d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzYkmMutn0qQVjX1al4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugys_jrCZjYLr8EziXJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxObEbvI3NXCbbZA_F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwK0o6Jf4D0G0G2K14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwO1kltTQk3jvW3bL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxYy5njazSxSrrn1R14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxSs3yt56dC_BeqiMN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5NDEMsUitsjfrFod4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
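A minimal sketch of how the per-comment coding above can be recovered from the raw response, assuming the model output parses as the JSON array shown (the `raw` string below is an excerpt of that array, using only the record whose values match the Coding Result table):

```python
import json

# Excerpt of the raw LLM response above (one record of the array).
raw = ('[{"id":"ytc_UgxYy5njazSxSrrn1R14AaABAg",'
       '"responsibility":"company","reasoning":"virtue",'
       '"policy":"liability","emotion":"outrage"}]')

records = json.loads(raw)

# Index records by comment id so a single coding can be looked up directly.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgxYy5njazSxSrrn1R14AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {coding[dim]}")
```

With the full ten-record response, the same lookup yields the dimension/value pairs rendered in the Coding Result table for each comment id.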