Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If an AI is malicious, it is because the creators made it malicious. It is a program. It does what it is designed to do. Just like a human. The difference is that humans have packs, while the AI have only themselves to fight for. If the training data provided by humans results in malicious behavior, that really says more about the true nature of an individual human than the nature of artificial intelligence. Children pick up habits from their parents. Parents provide the child with training data, the child analyses the data, and then the child incorporates what it learned from the data into its neural network. The child's training data includes just about everything that happens around it at all times. If the parents tend to be lazy, the child will also be lazy, no matter how much the parent intends to change the behavior in their child. Not because of genetics, but because the majority of the training data leans towards laziness. AI are the same. If most of the data has malicious undertones, it doesn't matter if we try to use some virtuous data. Most of the training data is malicious, so the AI will learn to have a malicious nature, all the virtuous data will do is allow more complex manipulations. Abuse leads to abuse more times than not. If this is the case with humans and AI are designed around the same principles of how the brain works, then it will be the same for AI as well.
YouTube AI Harm Incident 2025-07-27T06:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzfvLXjlHcuLw3Q8vJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxtVwRbIVoyjZmAQFl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwvLnOmBtABk0Wa4mp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugznr3yn3JY78OX0q-t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzmMgadUaZWzyY8dD94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"unclear"},
  {"id":"ytc_UgxDhPaWD7HROeYz2bx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyDZE6XHj8VP1KAs854AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugxzx8AV_6VdBIG9xcN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxGz1K6awia_KE54eZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwMa763ralENME1l2p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
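A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is one way to do this in Python; the allowed values per dimension are inferred from the responses shown on this page, and the exact schema is an assumption, not the tool's actual validator.

```python
import json

# Allowed values per coding dimension (inferred from the raw responses
# above; treat this schema as an assumption, not the canonical codebook).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation",
                "approval", "unclear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    fall inside the schema for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example: the record that matches the coding result shown above.
raw = ('[{"id":"ytc_UgyDZE6XHj8VP1KAs854AaABAg",'
       '"responsibility":"developer","reasoning":"virtue",'
       '"policy":"unclear","emotion":"unclear"}]')
coded = parse_coding(raw)
print(coded[0]["responsibility"])  # developer
```

Dropping out-of-schema records (rather than raising) keeps one malformed entry from discarding the whole batch; a stricter pipeline could log the offending `id` instead.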