Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- `ytc_Ugxylc32S…`: "I did a research report on this (not saying I’m an expert whatsoever) and yes FR…"
- `ytc_UgxqCFqcF…`: "Nukes are FAR worse than a Concept that is 50 years away from being realistic. "…"
- `ytc_UgybZnl3j…`: "Lmao 🤣 ??? Super genius??? And assuming he was killed by a multi billion dollar…"
- `ytr_UgzRNn5e1…`: "@CharityApple07 Picasso, Monet, Da Vinci were artists. That's just a luddite ar…"
- `ytc_UgxjDv4Z1…`: "Not to just randomly bring fiction into this, but I feel like this question was …"
- `ytc_UgzULAmXg…`: "Lol and here I was, using AI art to make some cool last minute christmas cards 😂…"
- `ytr_Ugzc2F8An…`: "@thewannabecritic7490 what makes it slop? its better, more realistic looking the…"
- `ytc_UgxPXowAR…`: "translation: there is finally a winner in AI and its Anthropic. good for them.…"
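The rows above are random draws from the coded comments. As a rough illustration, a sampling step like the one below could produce them; the file name `coded_comments.json` and the record shape are assumptions inferred from the fields visible on this page, not a documented interface.

```python
import json
import random

# Load the coded comments. The file name and record shape (id plus the four
# coded dimensions) are assumed from this page, not from a documented schema.
with open("coded_comments.json") as f:
    coded = json.load(f)

# Draw a small, reproducible random sample to inspect by hand.
rng = random.Random(42)
for record in rng.sample(coded, k=min(8, len(coded))):
    print(record["id"], record["responsibility"])
```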
Comment
When someone creates and programs the killbot are they held responsible when the killbot kills? What if they create 10,000 killbots that go on a killing spree, still no responsibility for those who created them? It's stupefying to know people are working day and night, being given ludicrous financial bonuses for making life on earth hellish. Maybe for humanities sake some people shouldn't be allowed to be billionaires, it's too much power. First instance of a robot killing a human was at Ford Motor Company in 1979. 25-year-old Robert Williams had his skull crushed in. Many others since then. Might be tough to go hop in your self-driving car after watching this video. So much intelligence, so little wisdom.
youtube · AI Harm Incident · 2025-07-26T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
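Each dimension takes a value from a closed codebook. A minimal validation sketch, assuming the codebook contains at least the values visible on this page (the real set is likely larger):

```python
# Allowed values per dimension, inferred from labels visible on this page;
# the actual codebook almost certainly defines more categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return codebook violations for one coded record (empty list if clean)."""
    return [
        f"{dim}={record.get(dim)!r} not in codebook"
        for dim, allowed in ALLOWED.items()
        if record.get(dim) not in allowed
    ]
```

Running `validate` over each parsed record of the raw response below would flag any value the model produced outside the codebook.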
Raw LLM Response
[
{"id":"ytc_UgxrzfEMPlbTNDUhgkR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"resignation"},
{"id":"ytc_Ugw7-KaK1bUCHZi_WLh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwJRU-ZqvE3bnmWfMd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwNfeK5HxcASvu0xqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzjeGfkkpINABwCy6V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxqKjfWqp4bJ4zem2B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyj0TRVPMWmT6BBpCR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz34l0MumeYuDyTCAl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxzq_GaEMAq68_o7iB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyO8ZH7IbCQ3BeX5AV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
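The lookup-by-ID view above presumably parses this array and indexes it by comment ID. A minimal sketch under that assumption, using a trimmed two-record copy of the output shown:

```python
import json

# Trimmed copy of the raw model output above (two of the ten records).
raw_response = """[
  {"id":"ytc_Ugz34l0MumeYuDyTCAl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxzq_GaEMAq68_o7iB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]"""

# Parse the model output and index records by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = by_id["ytc_Ugz34l0MumeYuDyTCAl4AaABAg"]
print(record["responsibility"], record["policy"])  # -> developer liability
```

A real lookup view would also need to handle IDs the model skipped or mangled, e.g. `by_id.get(comment_id)` with a fallback instead of direct indexing.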