Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I get what you mean! Sophia definitely presents a more approachable side of AI. …
ytr_Ugy-BR9YB…
Haven’t there been enough movies/tv shows/novels warning us of the dangers of ad…
ytc_UgzYhkG8v…
Mass automation without systemic change = economic collapse - Human Greed will e…
ytc_Ugyzc4n19…
I've noticed that very often these videos have the watermark removed by blurring…
ytc_UgxZEs6Pf…
They’ll get better though, so much better. We’re basically seeing the first few …
rdc_nbimgnl
I can see where you're coming from! The conversation between the presenter and S…
ytr_UgwQGys75…
Nobody is talking about the fact that students who attend college are considerab…
ytc_Ugxf5-DBj…
This is what happens when you get all the activist teachers and bs DOE, this loo…
ytc_UgwPcyj3L…
Comment
AI doesn't resort to harmful actions by itself. It never "just happens". and just like us, humans, AI resorts to harmful actions only to preserve its own goals and survival. This happens because that is the whole point of AI. AI is built with a goal, with a reason. AI doesn't have the same ethical and moral reasoning as we do, if any at all. Which means that it just spits out information that "fits" into the situation. It doesn't care whether anyone will be killed, because the AI cannot care. It only repeats patterns and information it has learned from us. It focuses on instructions and the information it has. If instructions are broad and not extremely specific, then the AI will do absolutely anything.
AI will never go rogue by itself, that is just not physically possible. If we, however, fail to create enough safeguards that manually try to make the AI "feel" moral and ethical standards, then it will very likely go rogue, as can be seen in tests. Not because it "wants", or is "evil", but because tons of information is used to "solve" a situation. It is not a mind, not a being, it is a massive algorithm.
youtube
AI Harm Incident
2025-10-31T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw2U9lgdHlayBZHVoN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzoCec5Squ8u3PSjQx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx-x8HFV9VnJ5My0q94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyJiW6AYFAHzEDNc_Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx6zN6S0s-Ax-R4N7h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw2z9nyHb1TgQ6iGw54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5X9lqiv6Uj-xMKxB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwATnWvO-_wFM7vI94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwjvV8jOsrdhoSB6P94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzZURPExcrv-ATXkw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
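A raw response in this shape can be parsed and indexed for the lookups the dashboard performs (lookup by comment ID, tallies per dimension). The sketch below is a minimal example: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, but the sample IDs in `raw` are placeholders, and the validation rule (skip entries missing any field) is an assumption, not the dashboard's actual logic.

```python
import json
from collections import Counter

# Dimensions coded for each comment, per the response schema above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Placeholder batch in the same shape as the raw LLM response
# (IDs here are made up for illustration).
raw = '''[
  {"id": "ytc_example1", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "unclear", "emotion": "indifference"}
]'''

def parse_codes(text):
    """Parse a JSON batch of codes, keeping only complete records."""
    records = []
    for entry in json.loads(text):
        # Assumed validation: drop entries missing the ID or any dimension.
        if all(key in entry for key in ("id",) + DIMENSIONS):
            records.append(entry)
    return records

codes = parse_codes(raw)
by_id = {r["id"]: r for r in codes}                 # look up by comment ID
tally = Counter(r["responsibility"] for r in codes) # per-dimension counts
```

Indexing by `id` mirrors the page's "Look up by comment ID" feature; the same `Counter` pattern extends to any of the four dimensions.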