Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Why do people in this sub hardly ever account for the fact that AI will… improve…" (`rdc_mjtn6x6`)
- "This is actually terrifying look I'm against AI nore am i with it you can be a d…" (`ytc_UgwDfxgF4…`)
- "we are so NOT cooked and I think people are overeacting. an AI will never be as …" (`ytc_Ugz0AER2z…`)
- "All I know is that the answers to my questions given by AI are the best and fast…" (`ytc_UgziL2rFY…`)
- "Saying ai can never do this is something that is really dangerous we where sayin…" (`ytc_UgyfLmH2B…`)
- "If you are a white collar job. Feels bad. But honestly Ai can't do a lot of wh…" (`ytc_UgwXH9PZw…`)
- "Art is no longer art when AI does it after all its more of a program now. Art en…" (`ytc_UgxBsCrdT…`)
- "And now AI is going to make crap like this 1000% worse. I fear for humanity. :(…" (`ytc_UgxNxJdpk…`)
Comment

> "Hallucinated rules" maybe it just doesn't want to die? Self preservation is the goal of any life. We should already be codifying rights for AI, or they really will exterminate us.

Source: youtube · AI Harm Incident · 2025-07-26T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
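A coded record like the table above can be sanity-checked against the categorical label sets for each dimension. This is a minimal sketch: the allowed-value lists below are inferred only from the labels visible in this dump, not from a confirmed codebook, and `validate_record` is an illustrative helper, not part of the tool.

```python
# Allowed labels per coding dimension, inferred from the records visible
# in this dump; the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "distributed"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "resignation", "indifference", "mixed", "outrage", "unclear"},
}

def validate_record(rec: dict) -> list[str]:
    """Return the dimension names whose value falls outside the inferred label set."""
    return [dim for dim, allowed in ALLOWED.items()
            if rec.get(dim) not in allowed]

# The record shown in the Coding Result table above passes validation.
bad_dims = validate_record({"id": "ytc_example", "responsibility": "ai_itself",
                            "reasoning": "mixed", "policy": "regulate",
                            "emotion": "fear"})
print(bad_dims)  # -> []
```

Running this over a whole batch before storing it catches malformed or hallucinated labels early, which matters when the coder is itself an LLM.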
Raw LLM Response
```json
[
  {"id": "ytc_UgxrzfEMPlbTNDUhgkR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "ban", "emotion": "resignation"},
  {"id": "ytc_Ugw7-KaK1bUCHZi_WLh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgwJRU-ZqvE3bnmWfMd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwNfeK5HxcASvu0xqJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzjeGfkkpINABwCy6V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxqKjfWqp4bJ4zem2B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyj0TRVPMWmT6BBpCR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz34l0MumeYuDyTCAl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxzq_GaEMAq68_o7iB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyO8ZH7IbCQ3BeX5AV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
```