Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzOLIBbM… — "We don't need to imagine how far AI can go in wars, just watch how Israel is usi…"
- ytc_UgyVNM81p… — "the tech workers who make 100x more than them will be saying the same thing soon…"
- ytc_Ugy882UNN… — "Watched this for the second time. Fascinating. I'm building AI agents at work …"
- ytc_UgyUxzeQB… — "what if they are so aware of the stigma to AI they are just fucking with us 😂…"
- ytr_Ugwbe0eG9… — "The only real danger of AI is that humans will use it to harm and exploit each o…"
- ytc_Ugw86r_DM… — "Tesla is unlikely to ever have a very good full self driving system without scra…"
- ytc_UgzusaEUC… — "AI is dumb, it requires intelligent people who can use it to function properly 😏…"
- ytc_UgxOfqK-E… — "Although not a mind read, AI was def listening and played out at a random gas st…"
Comment
The key is understanding our own fallen nature & how impossibile it is for us to not be corrupted by power. Ai has this same flaw but amplfied. Mostly because we empart our own flaws into everything we create & just like each copy of a copy loses definition so too does everything we create suffers from a similar type of entropy.
Source: youtube | Topic: AI Harm Incident | Posted: 2025-09-12T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwPl1jOV3ey-nQ7L9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5I3D9rwkP1NaUsUF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzSo_sFlc2_AYto_eZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx40NFh-dA41ZTco2l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy42amplcokPoTnyMt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugxwwac0042v-Mn2JJt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgywotmoVNOOAkPntm54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwVv0xgoaRJ3vQH8Gp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx2n6Thjm66ereq4BZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwPZcsw1P06S7Xny2t4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
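The raw response is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of how such a response could be parsed and a single comment's coding looked up by ID, assuming the object shape shown above (the `lookup` helper and the inline sample data are illustrative, not part of the tool):

```python
import json

# Illustrative raw LLM response, using the coding shape shown above:
# one object per comment, keyed by the four coded dimensions.
raw = '''[
  {"id": "ytc_Ugx2n6Thjm66ereq4BZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwVv0xgoaRJ3vQH8Gp4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]'''

def lookup(raw_response: str, comment_id: str):
    """Parse a raw LLM response and return the coding dict for one comment ID,
    or None if the ID was not coded in this response."""
    codings = json.loads(raw_response)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup(raw, "ytc_Ugx2n6Thjm66ereq4BZ4AaABAg")
print(coding["responsibility"], coding["emotion"])  # ai_itself resignation
```

Returning `None` for an unknown ID keeps the caller's ID-lookup path explicit, which matters when a model omits or mangles an ID in its output.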