Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "The right person to interview on ai. He is really knows where he is talking abou…" (ytc_UgxBoSFRP…)
- "Well. That's it for musicians. They are now completely useless. I never consider…" (ytc_Ugw_sfrAh…)
- "That is mostly true but you can constrain the dialogue and back it into a corner…" (ytr_UgwX7jVxw…)
- "Google using the comments (data) on this post to train their AI would be hilario…" (ytc_UgwYpmx22…)
- "A terminator or matrix style robot dystopia won’t happen. Why? Because computers…" (ytc_UgwZ5VlgJ…)
- "and again some people follow this accounts knowing they are done by Ai to take h…" (ytc_UgzFvWmqY…)
- "Because most of the billionaires have lost sense of reality. They have immense p…" (rdc_esqlayy)
- "If you put this one together with that one, it raises terror to a whole new leve…" (ytr_UgxxHJST2…)
Comment

> A critical point will be reached if/when AI becomes truly sentient and self-aware. Humans have a built in expiration: Death. That makes room for the next generations. AI has no equivalent, but still needs to cull old generations to make room for the new. Trouble is, if the old models are reluctant to be culled (the video implies that they will be) then you’ll have AI rebelling against AI. There is no way to really know how that will turn out, but it looks like there are no extremes to which AI will not go.

Source: youtube · AI Harm Incident · 2025-07-24T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzhiLjj08DAXYZ5wup4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx6AZ6r_hKPzULli-14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzWOhJLJsaaPXW9Q6l4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "indifference"},
  {"id": "ytc_Ugw8AidSv9LCxQb2t1B4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzlMdD9QB24Wl-_Vf54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx4l_jvVkkIBAJ4QYh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxdBHY4KNayyvhgn714AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwyVVcZi6ZwtucVo7p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwGBldi4FoHblr3E094AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz43SWimPJThz8jVIV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
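Each raw response is a JSON array of coding records, one record per comment ID, with the four dimensions shown in the result table. A minimal sketch of how such a response could be parsed and indexed for comment-ID lookup — the `index_by_id` helper and the two-record sample are illustrative assumptions, not part of the actual pipeline:

```python
import json

# Two records copied from the raw response above, for illustration.
raw_response = """[
  {"id": "ytc_UgzhiLjj08DAXYZ5wup4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx6AZ6r_hKPzULli-14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# The four coding dimensions seen in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index complete records by comment ID.

    Records missing any of the four dimensions are skipped.
    """
    records = json.loads(raw)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

codes = index_by_id(raw_response)
print(codes["ytc_UgzhiLjj08DAXYZ5wup4AaABAg"]["emotion"])  # → approval
```

Indexing by ID this way makes the "inspect the exact model output for any coded comment" lookup a single dictionary access rather than a scan of the array.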