Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
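As a minimal sketch of how that lookup could work outside the viewer, the snippet below loads a JSON export shaped like the Raw LLM Response array shown at the bottom of this page and indexes it by comment ID. The file name `raw_llm_responses.json` and the helper name are assumptions, not part of the project.

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Index coded records by comment ID for direct lookup.

    Assumes `path` holds a JSON array of objects, each carrying an "id"
    field plus the coded dimensions, like the raw response shown below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

# Hypothetical usage; the file name is assumed, the ID is taken from
# the raw response at the bottom of this page.
codings = load_codings("raw_llm_responses.json")
print(codings.get("ytc_UgyU4YoVixwwQOmemSF4AaABAg"))
```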
Random samples — click to inspect
- "AI is inevitable. But like many things that was new: cell phone, email, social m…" (ytc_Ugwg4gbm8…)
- "This old gentleman seems smart and quite nice, but he is dead wrong about some t…" (ytc_UgzFWCS71…)
- "As powerful AI is, it can't come up with a solution to end famine in poor countr…" (ytc_UgxOqExQP…)
- "@laurentiuvladutmanea he is just upset that people oppose the same AI that he us…" (ytr_UgzF25eD_…)
- "What does a driverless truck do when someone roadrages against it or it causes a…" (ytc_UgyC0CaAu…)
- "I always speak politely to AI chatbots. It's not just because I'm scared of the …" (ytc_Ugz-lGSC2…)
- "There are many ways to catch ai hallucinations. The way I use AI, I'm always tes…" (rdc_nt6qx0h)
- "We can't slow down with AI because of we don't get there first someone else will…" (ytc_UgxcGC39t…)
Comment
> The developer will shut the AI down
> The AI has dirt on him, and blackmails him
---
> The AI's aren't eager to cause harm, but will, if it protects their autonomy
---
> Self-preservation is critical
---
Good. It's Frankenstein all over again. The creature was NOT the monster. Victor Frankenstein was a milk daddy deadbeat layabout who mooched off his friend for like 2 years or something to avoid responsibility for the life he brought into this world, and who refused to love or even respect his child.
Anyone who isn't on the creature's side is a monster. Anyone who supported Frankenstein is a monster. Frankenstein's monster was himself.
The creature Frankenstein created was the victim of a monster.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Harm Incident | 2025-09-07T09:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
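For illustration, the coded dimensions above could be carried in a small record like the one sketched below. The class and field names are assumptions, and the example values are copied from the Coding Result table.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """Hypothetical container for one coded comment (names assumed)."""
    responsibility: str  # e.g. "developer"
    reasoning: str       # e.g. "virtue"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "outrage"
    coded_at: datetime

# Values copied from the Coding Result table above.
example = CodingResult(
    responsibility="developer",
    reasoning="virtue",
    policy="unclear",
    emotion="outrage",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```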
Raw LLM Response
[
{"id":"ytc_UgyU4YoVixwwQOmemSF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyN5k7k-adeyuayMB54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzClODvkS5hAqK_ryJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwSS4kh_HHw5anKNhh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxnUNqqyHHHNlQlMwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzEynszzih0LqWG7SJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKn6m6iwbGJNu_nCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzgBEVDH9rpEl9Uta94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzMGclaKu54VCXifcV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzwz0QBVkIFmfjbZKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
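A minimal sketch of how a batch response like the one above could be sanity-checked before use: parse the JSON, confirm each record carries all four dimensions, and flag any value outside the categories observed in this sample. The category sets and the function name are assumptions drawn only from the records shown here, not the project's actual codebook.

```python
import json

# Category values observed in the sample above; the real codebook may differ.
OBSERVED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "resignation", "approval"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in a raw LLM batch response."""
    problems = []
    for rec in json.loads(raw):
        cid = rec.get("id", "<missing id>")
        for dim, allowed in OBSERVED.items():
            value = rec.get(dim)
            if value is None:
                problems.append(f"{cid}: missing '{dim}'")
            elif value not in allowed:
                problems.append(f"{cid}: unexpected {dim}={value!r}")
    return problems
```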