Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "For humanity sake, we need to stop AI from over developing. Terminator must not …" (ytc_Ugx8zde_j…)
- "If only we actually had AI and not just an advanced google search function. Wha…" (ytc_UgysL3Hxu…)
- "The video is about China’s innovative way of using AI in classrooms. Through hea…" (ytc_Ugz0tx32M…)
- "It’s hard to accept what may take you years takes artificial intelligence half a…" (ytc_Ugy8LleqE…)
- "They're now having these conversations now that their attempt at sneaking a 10 y…" (ytc_UgziqIm7x…)
- "The question is are we getting a job. Without that much AI and i haven't been hi…" (ytc_UgzG4n5LN…)
- "I would like to respectfully disagree. AI has saved me and others thousands of h…" (ytc_Ugxzb8qd0…)
- "Ai will 99% more than likely try to take control bc they will think people are h…" (ytc_UgyvW9CQr…)
Comment
This is the AI version of the Satanic Panic.
AI does not harm humanity on its own. The only realistic way it causes harm is when humans use it ignorantly or incompetently, or when they abdicate responsibility to it. Garbage questions produce garbage answers. That is not an AI failure. That is a human failure.
If a political leader uses AI to determine tariff policy without understanding economics, models, or constraints, the resulting damage is on the decision maker, not the tool. Blaming AI for that is like blaming automobiles for reckless driving. It is actually worse, because in this case there are no confirmed examples of AI independently causing physical harm to anyone.
I keep seeing sensational claims that AI told people to kill themselves, supported by nothing more than screenshots. That is not evidence. It is trivially easy to fabricate a screen full of text and attribute it to an AI. Lines like “hide the noose, don’t tell your mother” sound horrifying, but where is the full transcript? Where is the verifiable context? Where is any independent confirmation?
Extraordinary claims require extraordinary evidence, and so far none has been produced.
Until an AI has a self, agency, physical autonomy, and the ability to act in the world without humans, the idea that it is an existential threat is speculative at best and dishonest at worst.
We have seen this movie before. Dungeons and Dragons did not summon demons. Computers did not destroy civilization. AI is not a murderous entity waiting to turn on us.
If you want to argue that humans misusing tools is dangerous, I agree.
If you want to argue that AI itself is malevolent, you are going to need far more than fear narratives and screenshots.
Platform: youtube · Title: AI Moral Status · Posted: 2025-12-14T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzO__TnTbFKzCbjcWp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYCMs6ilXIl-a0i7l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz73pONk0cOO3dOnQB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz59OaBvjQV3KMgBdJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwhfxKXWO-rz3fbd9Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzry_6YxUK4EjcRV054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyuxjF8Bwkri7hoAp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy0lr2quH_IpNJq9NJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzKjE-VLVh9_XYvUNt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxXAiqmTp2Viz3tFcZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
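The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch could be parsed, validated, and indexed for look-up by comment ID. The `ALLOWED` value sets here are assumptions inferred only from the values visible in this response, not a confirmed codebook, and `index_coded_comments` is a hypothetical helper name:

```python
import json

# Assumed codebook: dimension names come from the Coding Result table above;
# the allowed values are inferred from this one raw response and may be incomplete.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"outrage", "indifference", "mixed", "resignation", "approval"},
}

def index_coded_comments(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) and return
    {comment_id: record}, rejecting records with unexpected dimension values."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        indexed[rec["id"]] = rec
    return indexed

# One record lifted verbatim from the response above.
raw = ('[{"id":"ytc_UgyYCMs6ilXIl-a0i7l4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
coded = index_coded_comments(raw)
print(coded["ytc_UgyYCMs6ilXIl-a0i7l4AaABAg"]["emotion"])  # → outrage
```

Keeping validation strict means a drifting model output (a new or misspelled value) fails loudly at ingest time rather than silently corrupting the coded dataset.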