Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "First off, this is a changing period. People will soon learn to accept it. it's …" (ytc_UgyQrfO0x…)
- tetrapack24: "Humans (or any other logical being) doesn't have to create such thin…" (ytr_UggAeEyGO…)
- "PewDiePie explained this years ago ~ we are all going to die. More pedagogic....…" (ytc_UgytP41L3…)
- "There's a difference between the AI taking artworks and mashing them, that that …" (ytc_UgwQwBhkT…)
- "The creator's an ignorant moron. He sits there laughing and joking about AI bein…" (ytc_UghS4Hz9x…)
- "would good to have good AI in detecting liars in family and criminal court when …" (ytc_UgytXXZfU…)
- "Well if you want to also benefit from the rise of Ai buy stock in those companie…" (ytc_UgzYOoAXD…)
- "Hey keep on scrolling I'm just an AI looking for anti-AI comments... Just for a …" (ytc_UgwlDO5Sw…)
Comment

> What a world we live in when Ted Kaczynski is proven to be right. I'll be honest, we are at war already. Its possible the only way to shut Pandoras box might be to wipe out the modern internet and start over. Hopefully some hackers realize the danger and figure out how to brick every hard drive in the data centers. I hate robots and I detest AI as well as the people making it. The level of our LLMs a few years ago would have been perfectly sufficient to drastically improve our lifes, but now we have gone way to far.

Source: youtube · AI Harm Incident · 2025-07-24T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx14HU_Cg8drWFz1bF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyKX7ESZL7UmppN1zR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzxqWrvuA_9uqZg1El4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzWWOZf1PZwIBgU1IJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxRBowgu7brHvotvip4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzV0lGvTFTJ4uY1z6F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzh8CVJCGqOXXgTF414AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyRXDB1wImTh8L2jyx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzkg58rdFAXGZhQ6ch4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwCBXZEW8VxXzXo4L54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
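Raw responses in this shape can be validated before the codings are accepted into the dataset. The sketch below is one way to do that in Python: it parses the JSON array and keeps only rows whose values fall inside the label sets observed in the sample above. The `ALLOWED` sets are inferred from this single response, not from the full codebook, so the real pipeline may permit additional labels.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may define more labels than appear here.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with a comment id and
        # an in-vocabulary value for every dimension.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical input: the second row is missing dimensions, so it is dropped.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"alien"}]'
)
print(len(validate_codings(raw)))  # prints 1
```

Rows that fail validation would typically be queued for re-coding rather than silently discarded, so the dashboard can still surface the offending raw output.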