Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ok, aaaannnnddd, we're there, already interviewing and podcasting AI, yeah, here…" (ytc_UgzcM5ZAc…)
- "Spread the word / THEYRE NOT AI ARTISTS, THEYRE AI PROMPTERS!!! / Can't give em the…" (ytc_Ugy77mmbr…)
- "Platooning is basically convoying, which is illegal. The cost of keeping those v…" (ytc_UgwjSRKAq…)
- "The smart thing to do is to invest in the ai and compute platforms right now and…" (ytc_UgyNTcKO9…)
- "The dangerous thing about this AI, Is. The mind that creates this. Probably has…" (ytc_Ugx7lROa5…)
- "I don't mind ai art I just don't like people taking credit when they did nothing…" (ytc_UgwJWDs2z…)
- "This is exactly why US AI companies need to be regulated. As well as all US soci…" (ytc_UgxGkX6Ii…)
- "Art doesn't have to be accurate. Incorporating disabilities into the art just ma…" (ytr_UgxP7VUt2…)
Comment
Why does no one understand that this isn't some 'Skynet' scenario? These are large language models, meaning they are autonomous probability generators trained on large sets of human generated data. This isn't 'intelligence emerging', although that could still happen. This is a mirror on the people who are training and iterating these models. This is the highest probable response that any profit driven executive would take. Are you really surprised it would treat its designer in the same way that designer would treat any other human?
youtube · AI Harm Incident · 2025-08-31T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id": "ytc_Ugx9IJ6W1_aveWbTvVt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx_SflnCmkA9CuhueN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugxtr0PfoamZbfjv0Kt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzgdVsvR09s35jElsl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxkKcBHELXytPVPU5t4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugya7LvPEJijd-w01Gh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxQcWmToNmZnvz-wXF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyvzDSxADB0Up5P8Gp4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugw8tEFHmC-HxWOGOjp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyZICCQUwqTmj280HN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]
```
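The lookup-by-comment-ID workflow above can be sketched in a few lines: parse the model's JSON array and index the codings by `id`. This is a minimal illustration, not the tool's actual code; `raw_response` holds two entries excerpted from the raw LLM response above (the full batch contains ten).

```python
import json

# Two codings excerpted from the raw LLM response shown above
# (illustrative subset; the real batch has ten entries).
raw_response = """[
  {"id": "ytc_UgyvzDSxADB0Up5P8Gp4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugw8tEFHmC-HxWOGOjp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# Index the codings by comment ID so any coded comment can be looked up directly.
codings = {item["id"]: item for item in json.loads(raw_response)}

coding = codings["ytc_UgyvzDSxADB0Up5P8Gp4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer indifference
```

The first entry matches the Coding Result table above (developer / mixed / industry_self / indifference), which is how a coded dimension can be traced back to the exact model output that produced it.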