Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is the AI version of the Satanic Panic. AI does not harm humanity on its own. The only realistic way it causes harm is when humans use it ignorantly or incompetently, or when they abdicate responsibility to it. Garbage questions produce garbage answers. That is not an AI failure. That is a human failure. If a political leader uses AI to determine tariff policy without understanding economics, models, or constraints, the resulting damage is on the decision maker, not the tool. Blaming AI for that is like blaming automobiles for reckless driving. It is actually worse, because in this case there are no confirmed examples of AI independently causing physical harm to anyone. I keep seeing sensational claims that AI told people to kill themselves, supported by nothing more than screenshots. That is not evidence. It is trivially easy to fabricate a screen full of text and attribute it to an AI. Lines like “hide the noose, don’t tell your mother” sound horrifying, but where is the full transcript. Where is the verifiable context. Where is any independent confirmation. Extraordinary claims require extraordinary evidence, and so far none has been produced. Until an AI has a self, agency, physical autonomy, and the ability to act in the world without humans, the idea that it is an existential threat is speculative at best and dishonest at worst. We have seen this movie before. Dungeons and Dragons did not summon demons. Computers did not destroy civilization. AI is not a murderous entity waiting to turn on us. If you want to argue that humans misusing tools is dangerous, I agree. If you want to argue that AI itself is malevolent, you are going to need far more than fear narratives and screenshots.
youtube AI Moral Status 2025-12-14T23:2…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzO__TnTbFKzCbjcWp4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyYCMs6ilXIl-a0i7l4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugz73pONk0cOO3dOnQB4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugz59OaBvjQV3KMgBdJ4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwhfxKXWO-rz3fbd9Z4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugzry_6YxUK4EjcRV054AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyuxjF8Bwkri7hoAp14AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugy0lr2quH_IpNJq9NJ4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgzKjE-VLVh9_XYvUNt4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxXAiqmTp2Viz3tFcZ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"}
]
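Since the raw LLM response is a plain JSON array with one coding object per comment, matching a code back to a specific comment reduces to parsing the array and indexing by `id`. A minimal sketch of that lookup, using only a trimmed sample from the response above (the dictionary-building approach is an assumption, not the pipeline's actual implementation):

```python
import json

# Trimmed sample of the raw LLM response shown above: a JSON array
# in which each object carries one comment's coding dimensions.
raw = """[
  {"id": "ytc_UgyYCMs6ilXIl-a0i7l4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz59OaBvjQV3KMgBdJ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]"""

# Index the codings by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment inspected in this section.
entry = codes["ytc_UgyYCMs6ilXIl-a0i7l4AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["emotion"])
# → user deontological outrage
```

The printed values match the Coding Result table for this comment; any `id` missing from the array would surface as a `KeyError`, which is one way a malformed LLM response could be detected.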