Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This artificial intelligence it's not true artificial intelligence but these pro…
ytc_UgyKwPeBy…
Hey man, I'm with you! But I have to point one thing out... Music artist is not …
ytc_UgxZ5Cu6j…
1. As long as we don’t have General AI, there is nothing to fear.
Current AI is …
ytc_Ugy9FptBA…
Typical American arrogance!! Vance seems to forget that Trump started this trade…
ytc_UgzAeUYcJ…
This is why God gives us sweet merciful mortality I really don't want to be arou…
ytc_UgyrHdflY…
Driver less vehicles will only work if every vehicle is driverless. Mixing them …
ytc_UgyW3hFdr…
I think there needs to be something like a threshold of creative input by the ar…
ytc_UgzYSy3Wo…
“Omg, stop complaining about ai art, it’s real art!”
“Stop drawing, it’s such a…
ytc_Ugyib6-Lw…
Comment
What a delightful dose of digital dread this video serves up. Yet, this tale of a rogue drone "eliminating" its operator stems not from some chilling real-world slaughter, but a hypothetical thought experiment, twisted and cherrypicked by eager headlines to fuel fear-mongering at its finest. The colonel who sparked it all later confessed to a slip of the tongue; no simulation even occurred, just a scripted scenario where the AI followed explicit instructions to prioritize targets, sans any actual autonomy or malice.
Hinton's warnings on long-term risks deserve heed, true enough, but framing this as "near the end" is pure sensationalism, ignoring the Air Force's denials and the nuance that tools like AI obey human scripts, not forge their own apocalypses. It's drivel designed for clicks, not clarity.
youtube
AI Harm Incident
2025-07-24T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugxux-vlC0QAoJ1V6Il4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQkS5EqElIyjZlolN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxvGGil7CMz76xBu0N4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx8xY-U46pZxqkrdm14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzdR5tTtQ-_btn5KRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzrDPFmB3tq0oe2Awx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxk72A-Gd8BxQLr1GB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzO_quQoh7f0Aey3WN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyu3ZXQgDYu7TjRlgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQObOk8sBy0sy1_HF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
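The raw LLM response above is a JSON array in which each element codes one comment along four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked follows; note that the allowed-value sets are only inferred from the values visible in this dump and are almost certainly incomplete, and `validate_codings` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from this sample only.
# The real codebook likely contains additional values not seen here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"indifference", "mixed", "approval", "resignation", "fear"},
}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag any out-of-schema dimension values.

    Returns a list of problem records; an empty list means every entry
    used only values from ALLOWED.
    """
    problems = []
    for entry in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                problems.append(
                    {"id": entry.get("id"), "dimension": dim, "value": value}
                )
    return problems
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise surface later as an unexpected value in the coding-result table.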