# Raw LLM Responses

Inspect the exact model output for any coded comment, or look one up by its comment ID.
## Random samples

Click a sample to inspect its coding. Comment text is shown verbatim (truncated).

- `ytc_Ugw-2EZ0e…` — Absoultely reached the point please correlate this theory of "AI should never be…
- `ytc_UgyAc2UHO…` — If it blackmails using an human emotion...it knows or understands pride, embarra…
- `ytc_UgyrqLsH3…` — There is so much missing from this interview. Huge gaps in understanding regulat…
- `ytc_UgwzniggI…` — I hate that because of people like those AI art bros, the damages now outweigh w…
- `ytc_UgyweI1-4…` — Roofing and landscaping will be taken over my AI robots as well ....bright days …
- `ytr_Ugy4HuBx0…` — @Tophat-Turtle I'm sure that not all AI steals art. There's no way that the weir…
- `ytc_UgyPDOlQs…` — Hey Elon Musk, Elon I'm afraid of AI danger. As an AI and ML guy I wanna go de…
- `ytc_UgwivDLVc…` — When I get my hands on nightshade, I'm going to make sure the AI inbreeding hurt…
## Comment

> The only way that "AI" can be nefarious is if it was programmed that way by a human. AI is not conscious. It has no feelings. It does not think. It is a computer with many programs attached to it. Seriously, y'all give AI too much in human abilities. It's not AI, it's the human behind it!

Source: youtube · AI Harm Incident · 2025-07-24T17:1… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response
```json
[
  {"id":"ytc_Ugwpsa62p7BmF1w0zrp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzulzjbwF4FmdVFoQF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8kgWdAsfEqayCW1B4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwo4uPYhxfH3LoV3U14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyCrGlwDwx-1s_FfIZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgytGSdNOi-vD3TQL454AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-gclEauz3jfKllx14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyjieWHsiaJcWHW0zl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzg_nxZewPHLA_ZR8x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzRDUd5yFKs__6QGyp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
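The look-up described above — finding one comment's coded dimensions inside a raw batch response — can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation; the helper name `index_by_id` and the two-record sample payload are assumptions, though the field names match the JSON shown above.

```python
import json

# Hypothetical sample of a batch coding response, shaped like the
# raw LLM output above (field names taken from that response).
raw_response = """
[
  {"id": "ytc_UgyCrGlwDwx-1s_FfIZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwpsa62p7BmF1w0zrp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_id(json_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    return {record["id"]: record for record in json.loads(json_text)}

codings = index_by_id(raw_response)
# Look up a single comment's coding by its ID.
print(codings["ytc_UgyCrGlwDwx-1s_FfIZ4AaABAg"]["emotion"])  # indifference
```

Keying the records by `id` makes each subsequent look-up O(1), which matters when the same batch is queried repeatedly from a dashboard.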