Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_mojabt6` — "I mean I felt like during interviews I’ve held it’s been easy to weed out those …"
- `ytc_Ugyb24ijY…` — "Sounds totally safe & effective. If I was A sociopath/psychopath!! Ensure the s…"
- `ytc_Ugwe9SxuW…` — "14:56 this whole section seem disingenious to me, the same could've been said fo…"
- `ytc_UgyfwhoqO…` — "the problem is @0:48 \"nope\" .. you can tell when 80 IQers are working on things …"
- `ytc_UgwGYDNYO…` — "The argument is not that AI self driving is safer around human drivers, it is th…"
- `ytc_UgxhlP7Rp…` — "AI removes jobs, 300 million unemployed due to corporate greed. Eventually the q…"
- `ytc_UgwJJ2ldz…` — "1:21:37 People keep saying “we just need to control AI,” as if control is some …"
- `ytc_Ugzdze_f_…` — "I'm 49. When I as 16 it was computers that would end up replacing people. Later …"
| Field | Value |
|---|---|
| Comment | Who needs skynet? AI will just get the humans turning on themselves and each other |
| Source | youtube |
| Incident type | AI Harm Incident |
| Posted | 2025-11-11T22:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwxsJv_8uIjxpxhvpt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyJ5R415hFm3Z8ToxB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyrs2att-jg9iBUmJ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzakWTn4J1zvihdRkh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw4mzEf1ltC4RQhRch4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgytCiWip6F3HEehtKd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxH7hbPMuCWKWIfhGp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqGL05MJPRenqQyIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugx3aXPedfGd5ud03gB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwnldmKcV2w5SurzYl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
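A raw batch response like the one above has to be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: it parses the JSON array and keeps only records whose labels fall within the category vocabularies visible on this page. The vocabularies are reconstructed from the values shown here and are an assumption; the project's real codebook may allow additional values.

```python
import json

# Label vocabularies observed in the coded output on this page.
# ASSUMPTION: reconstructed from the visible examples, not the official codebook.
VOCAB = {
    "responsibility": {"none", "user", "ai_itself", "company", "developer",
                       "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, dropping malformed records and
    records whose labels are outside the known vocabularies."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed record: skip rather than fail the batch
        if all(rec.get(dim) in allowed for dim, allowed in VOCAB.items()):
            valid.append(rec)
    return valid
```

One design choice worth noting: invalid records are dropped rather than raising, so a single hallucinated label (a known LLM failure mode in batch coding) does not discard the whole batch; the dropped IDs can then be re-queued for recoding.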