Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Nothing humans ever make is perfect or near 100% accurate all the time. Why are …" (ytc_UgwncZQEn…)
- "My brother's company took advantage of the situation by hiring ALL of the talent…" (ytc_UgwD6HjCN…)
- "@Crazycool90 probably the physical blue collar jobs, or other jobs that AI is sl…" (ytr_UgxNXad9R…)
- "I had a job in communications that was super boring, testing back end using a Un…" (rdc_hseuh2b)
- "A good lawyer can beat this. Doesn't matter that humans have night vision the ca…" (ytc_Ugyl6FWiL…)
- "If these companies could be held liable for every single crash that their vehicl…" (ytc_Ugy63fGHi…)
- "Why didn't he ask him to define 'AI'? It's not AI he's talking about its malicio…" (ytc_UgzTRSmGk…)
- "Let’s collaborate to enable AI to learn directly from the world’s most brilliant…" (ytc_Ugwaq0P_1…)
Comment
Imagine how many people tech geniuses like Zuckerberg or Musk could kill using AI Robots, and do so with zero liability. Just claim it an accident due to faulty robots. Robots of this nature would require an extreme amount of mature people, which, if they do exist, they're not in any positions of power and/or authority.
youtube · AI Harm Incident · 2025-09-25T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzpwNCBkea1P1p0A7V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyDMujKlHuujtjNm6d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzsbbCpxduRR-CRaOd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx-qb7k61YSAv8SvYp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzj165jsOgpxs1gTjR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyhVqHa4QUfd_eyLQ54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxz_Yd0WbJGA8MoFFV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzsOr8NAudDuElAmCJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeU2mDCzC4GPU-7j54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwDaXPtfaGt8XUmrLF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
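A response in this shape can be checked automatically before the codes are stored. Below is a minimal validation sketch in Python; the four dimensions and their value vocabularies are inferred only from the sample response shown here (an assumption — the full codebook may define additional categories), and the `validate_codes` helper is a hypothetical name, not part of any real pipeline.

```python
import json

# Value vocabularies inferred from the sample response above (assumption:
# the actual codebook may allow more categories than appear in this sample).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag any out-of-vocabulary values.

    Returns one record per violation, identifying the comment ID,
    the offending dimension, and the value the model emitted.
    """
    rows = json.loads(raw)
    invalid = []
    for row in rows:
        for dim, vocab in ALLOWED.items():
            if row.get(dim) not in vocab:
                invalid.append({"id": row.get("id"), "dimension": dim, "value": row.get(dim)})
    return invalid
```

A check like this catches the common failure mode where the model invents a category outside the codebook; flagged rows can then be routed back for re-coding instead of silently polluting the coded dataset.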