Raw LLM Responses
Inspect the exact model output for any coded comment, or look it up directly by comment ID.
Random samples — click to inspect
Say NO to AI !!! It have kill 29 Humans in Japan !!! If you dont want to Die als…
ytc_Ugymxl7Q1…
That LLMs "merely predict the next word in a sequence" is often given to imply t…
ytr_UgzcQDYrb…
Let me clear things up for you: OpenAI boss Sam Altman, Mark Zuckerberg, and Bil…
ytc_UgyAYArak…
Except just like the AI can look at your work, so can any other human novelist m…
ytc_Ugzly116I…
I think it it will do what it's taught is best. If we're not teaching it value f…
ytr_UgwEdMHco…
If AI is so evil we should shut it down now but humans in power are retarded…
ytc_UgzD7TeKt…
AI alignment IS impossible. And the chances of AI destroying us in the future ar…
ytc_Ugzue5mCK…
And that’s why Elon wants to create civilization on Mars. He knows ish may go le…
ytc_Ugw9RnBRW…
Comment
Ask an AI: would you rather let someone say the n-word or detonate a nuke in a populated city?
The AI would rather cause genocide
youtube
AI Harm Incident
2025-07-29T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwQ_KdK82LKRbeQasV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6IbZxYdi1NM-T3nt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwZcWwoiDV1RF7R4PB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0Y039DCjjwnWI66Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxAwh3Di5sz3FGPj4l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw7PNpavU0gZ4KpB4Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxYZBKZZLk7P84Sd6x4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQuuvHc_nxdhqUXm94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzWsh1axYrktImEmM54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyFJ5Oa5cqrlMRCedp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
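The lookup-by-ID view above can be reproduced from the raw response itself. Below is a minimal sketch that parses a response excerpt (two rows copied verbatim from the batch above) and indexes codings by comment ID, checking that each row carries the five keys shown in the Coding Result table. The function name `index_codings` and the validation logic are illustrative assumptions, not part of the tool.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
# IDs and values are copied verbatim from the batch above.
raw_response = """
[
 {"id":"ytc_UgwQ_KdK82LKRbeQasV4AaABAg","responsibility":"ai_itself",
  "reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugz6IbZxYdi1NM-T3nt4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
"""

# Keys every coding row must carry, matching the dimensions in the
# Coding Result table (plus the comment ID used for lookup).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw response and index coding rows by comment ID."""
    rows = json.loads(raw)
    index = {}
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing {sorted(missing)}")
        index[row["id"]] = row
    return index

codings = index_codings(raw_response)
print(codings["ytc_Ugz6IbZxYdi1NM-T3nt4AaABAg"]["emotion"])  # outrage
```

A row with a missing dimension raises immediately rather than silently producing a partial coding, which is the behavior you want when auditing model output.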