Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- rdc_nsys3z5: "This is the issue isn’t not? Humans can at least provide instant feedback or be…"
- ytc_UgyfL5ugh…: "My logo is created by an AI. Artists can spend their time and talent creating t…"
- ytc_UgxLId5ib…: "I simply do not believe that the public has a front row seat to watch the develo…"
- ytr_UgyojNJPB…: "When a person hallucinates, he makes things up which didn't happened in reality.…"
- ytc_Ugx11H04E…: "I am a nurse. Nursing patients is not replaceable by AI. I want to be nursed by …"
- ytc_UgyB9lKFs…: "If AI replaces most of us at work, it's going to be a full circle sooner or late…"
- ytr_UgzUUBjN8…: "@nightfallreviews1533 that was never the problem before. Ai has never been able …"
- ytc_Ugz1ZMWFw…: "AI that looks hyper realistic should be banned because this is gunna be a proble…"
Comment
This video is beyond frustrating, as it's clearly well made and super interesting, but the title is false, and incredibly misleading clickbait. I totally missed the reference to AI 'killing' on first watch, because it only refers to tests where it's put in hypothetical situations and given an overriding goal (i.e. behaves exactly as it's programmed to do) which is also explained poorly in the video. Also baffling phrases like 'hallucinated rules' are used with no explanation. This video is far too long, and intelligent, to not do better.
Platform: youtube · Topic: AI Harm Incident · Posted: 2025-07-27T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzt9PmiL22O767srfB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzqbrW2Tdm5nH9ki7F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzpsQ82cRN_GtRYvNZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzWUUCUHsfMV8DIOEh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwnMTvW3-OYMwNXoZF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyBb358i-Pej_uHSFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz7fmM4la9BeqAsvtR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzriPFTP6AjFCZ0qsV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxW2S30LNS32z1X6vd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzU0W1HTqR2iZCgT3x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
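The raw response above is a JSON array with one object per comment ID and four categorical coding dimensions. A minimal validation sketch in Python is shown below; the allowed-value sets are inferred only from the responses displayed here, and the function name is hypothetical, not part of any pipeline shown in this page.

```python
import json

# Allowed values inferred from the sample responses on this page;
# the real codebook may define additional categories (an assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "indifference"},
}

def validate_batch(raw):
    """Parse a raw LLM batch response and return {comment_id: codes}.

    Raises ValueError on malformed rows or values outside the allowed
    sets, so a bad batch can be flagged and re-coded instead of stored.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError("row missing id: %r" % (row,))
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError("%s: bad %s value %r" % (cid, dim, value))
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
print(validate_batch(raw)["ytc_example"]["policy"])  # prints "none"
```

A check like this catches the common failure modes of batch coding (truncated JSON, a dimension the model invented, a missing ID) before the codes reach the table above.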