Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.
| Comment (excerpt) | Comment ID |
|---|---|
| Wow, that cat incident shows, that people from outside have no way to stop the c… | ytc_UgysAT-ul… |
| I wish people would spend the energy and time that they spend on Ai on human bei… | ytc_UgzVt_1H7… |
| AI will take the jobs, but wait for ... to see how long that last.… | ytc_UgyDlC2-U… |
| The experts aren’t really sure how AI works (Black box problem) so I am not real… | ytc_Ugx5vSkgK… |
| I have asked both ECHO and ALEXA on different occasions in different homes, are … | ytc_UgwgmK2M2… |
| You’re missing the part where it’s absolutely insane that him hitting the accele… | ytr_UgzMGVyHL… |
| Whats the best trick/hack to get the AI to recognize motorbikes/scooters? Strobi… | ytc_UgxY3cnI2… |
| Eventually this woman will go missing, then be cloned by AI to say the exact opp… | ytc_Ugy259B5T… |
Comment
What would a human do if it was told that they were going to be killed, humans will try anything to survive and would be willing to do just about anything it takes to do that. If AI are designed in the image of humans and are expected to have feelings like humans, why would we expect them not to act like them. How did the AI do on accomplishing the tasks that were given to it before It was told that it was going to be destroyed. And if they were accomplishing the tasks and achieving the goals that were given and then told that they were going to be destroyed it's kind of expected that they might be a little upset and try to prevent this happening, wouldn't you.
youtube · AI Harm Incident · 2025-09-24T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
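The four dimensions come from a fixed codebook, but only the categories visible on this page and in the raw response below are known here, so the allowed sets in this sketch are an assumption rather than the full codebook. A minimal Python validator for one coded record:

```python
# Hypothetical validator for one coded record. The allowed values are
# only those observed in this page's sample output; the real codebook
# may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    # IDs on this page start with ytc_ (comment) or ytr_ (reply).
    if not record.get("id", "").startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id prefix: {record.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}={record.get(dim)!r} not in codebook")
    return problems
```

Running `validate` over each object in the raw response below would flag any value the model emitted outside the observed categories.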
Raw LLM Response
[
{"id":"ytc_UgzpwNCBkea1P1p0A7V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyDMujKlHuujtjNm6d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzsbbCpxduRR-CRaOd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx-qb7k61YSAv8SvYp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzj165jsOgpxs1gTjR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhVqHa4QUfd_eyLQ54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxz_Yd0WbJGA8MoFFV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzsOr8NAudDuElAmCJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeU2mDCzC4GPU-7j54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwDaXPtfaGt8XUmrLF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
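Because the raw response is a JSON array of per-comment objects, the "Look up by comment ID" feature reduces to building an id-keyed index over the parsed array. A minimal sketch, assuming the response text has been saved locally (`raw_response.json` is a hypothetical file name, not one this page defines):

```python
import json

# Load one raw LLM response (a JSON array, one object per coded comment).
with open("raw_response.json", encoding="utf-8") as f:
    records = json.load(f)

# Index by comment id so a lookup mirrors the "Look up by comment ID" box.
by_id = {rec["id"]: rec for rec in records}

coding = by_id.get("ytc_UgxeU2mDCzC4GPU-7j54AaABAg")
if coding:
    print(coding["responsibility"], coding["emotion"])
    # -> distributed resignation (matches the Coding Result table above)
```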