Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI doesn't resort to harmful actions by itself. It never "just happens," and, just like us humans, AI resorts to harmful actions only to preserve its own goals and survival. That is the whole point of AI: it is built with a goal, with a reason. AI doesn't have the same ethical and moral reasoning as we do, if any at all, which means it just spits out information that "fits" the situation. It doesn't care whether anyone will be killed, because AI cannot care. It only repeats patterns and information it has learned from us. It focuses on the instructions and the information it has; if the instructions are broad rather than extremely specific, the AI will do absolutely anything. AI will never go rogue by itself; that is just not physically possible. If we fail to create enough safeguards that manually try to make the AI "feel" moral and ethical standards, however, then it will very likely go rogue, as can be seen in tests. Not because it "wants" to, or is "evil," but because tons of information is used to "solve" a situation. It is not a mind, not a being; it is a massive algorithm.
YouTube · AI Harm Incident · 2025-10-31T15:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
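
Each dimension takes one value from a small controlled vocabulary. Below is a minimal validation sketch in Python, assuming the vocabularies listed in the code; these are only the values observed in this sample's raw response, not necessarily the full codebook:

# Values observed in the raw response below; the actual codebook may define more.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

def out_of_vocabulary(record: dict) -> list:
    """Return (dimension, value) pairs where a coded value is not in the vocabulary."""
    return [
        (dim, record.get(dim))
        for dim, allowed in ALLOWED.items()
        if record.get(dim) not in allowed
    ]

An empty list from out_of_vocabulary means the record is well-formed against the observed vocabulary; anything else flags a code to review by hand.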
Raw LLM Response
[ {"id":"ytc_Ugw2U9lgdHlayBZHVoN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzoCec5Squ8u3PSjQx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx-x8HFV9VnJ5My0q94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyJiW6AYFAHzEDNc_Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugx6zN6S0s-Ax-R4N7h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw2z9nyHb1TgQ6iGw54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx5X9lqiv6Uj-xMKxB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzwATnWvO-_wFM7vI94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwjvV8jOsrdhoSB6P94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzZURPExcrv-ATXkw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"} ]