Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- The government needs to put guardrails on AI. I feel Corporate greed is racing … (ytc_UgwLUYFxd…)
- AI can't reason, it can't think, it only spits out what has been fed into it. Ga… (ytc_Ugz9AQ_hF…)
- Why bother paying humans to do a job that AI can do just as poorly for less than… (ytc_UgxWvVai9…)
- Nothing AI or CGI or fake about this, if you really want to know this is in fact… (ytc_Ugwwz8Kvf…)
- There already are examples of self-defense of AI systems: hiding copies of self,… (ytc_UgxK-giqM…)
- "GPT-5 mini / I'm sorry for your loss and that you're missing your grandma. I can … (ytc_UgymHtE68…)
- It would be so great if we can just speed through to when it takes everyone's jo… (ytc_UgwxDtFnO…)
- Honestly fuck these guys. The amount of hoops non-EU/non-American folks have to … (rdc_fwhqjet)
Comment
AI is dangerous because people are dangerous.
We create something to think like us and are surprised that it does?
AI is no more dangerous than a normal human.
This is all just fear-mongering and doomsaying!
youtube · AI Harm Incident · 2025-09-25T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzpwNCBkea1P1p0A7V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyDMujKlHuujtjNm6d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzsbbCpxduRR-CRaOd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx-qb7k61YSAv8SvYp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzj165jsOgpxs1gTjR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhVqHa4QUfd_eyLQ54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxz_Yd0WbJGA8MoFFV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzsOr8NAudDuElAmCJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeU2mDCzC4GPU-7j54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwDaXPtfaGt8XUmrLF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
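The raw response above is a JSON array with one record per comment ID, each carrying the same four coding dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated, assuming the category vocabularies inferred from the samples on this page (the real codebook may define more categories, and the `government` value is an assumption, not taken from this page):

```python
import json

# Allowed values per dimension, inferred from the samples above.
# This is NOT the authoritative codebook; "government" is a guessed extra.
CODEBOOK = {
    "responsibility": {"user", "company", "developer", "ai_itself",
                       "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record.

    Raises ValueError if a record is missing a key or uses a value
    outside the inferred vocabulary, so bad responses fail loudly
    instead of silently polluting the coded dataset.
    """
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *CODEBOOK} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing keys {missing}")
        for dim, allowed in CODEBOOK.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records
```

Validating against a fixed vocabulary is the design choice that matters here: free-text LLM output drifts, and rejecting off-vocabulary values at parse time keeps every downstream tally (e.g. the Coding Result table) comparable across batches.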