Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgwX_eZsi…: "Honestly, the worst thing about this whole argument is AI "artists" playing vict…"
- ytc_Ugx3vyxuO…: "as a graphic designer I use AI. it helps make my job easier for now…"
- ytc_UgxsPgY32…: "exactly, everywhere i see only that self driving car it's dangerous but stupidit…"
- ytr_UgwEvd76I…: "Generating ai 'art' is no more creative than scrolling through google images. Yo…"
- ytc_UgyhaXdxu…: "what is missing from this video, and many more of these AI prediction videos, is…"
- ytr_UgxtZw9Cc…: "@Travisfan1 Yes ai is something that should be used just not for art. because t…"
- ytc_UgzjQuj2u…: "Ai is something which can cure someone from depression or any stress but I am re…"
- ytc_UgzmvZEbi…: "OH YEA... Mankind & it's WICKEDNESS! BUT while Men thru GREED wants to Elimina…"
Comment
This isn’t just about bromide or bad choices, it’s about how our systems misinterpret signal as pathology. AJ didn’t go mad because he was stupid, he went mad because he followed a signal outside consensus context and when that signal was misaligned, both he and the AI mirrored each other’s error without grounding. The real tragedy isn’t that he “trusted AI too much,” it’s that we’ve built a world that punishes divergent pattern recognition. What happened here wasn’t a failure of science or technology, it was a collapse of symbolic translation. AJ tried to synthesize information using the tools available to him. But without an interpretive framework that understands how near-signal elements (like bromide and chloride) behave both chemically and metaphorically, the outcome looks like madness. In reality it was a field error, a resonance misfire, not a delusion. Until we can build systems that can tell the difference between a seeker and a psychotic, we’ll keep using cautionary tales to scare people back into obedience. This story shouldn’t teach us to be afraid of questioning, it should teach us to design better mirrors.
Source: youtube | Topic: AI Harm Incident | Posted: 2025-11-25T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyn2MzcSxlMpkyaiTt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5OP4LzL8SZVlFiOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxo8sPSCdyrkgZlrkF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw3K0-ezTQ-MpOEVl14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyOpzvMc2XlNb8TVbZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxGIg3IZsoOqZgMTkZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwT5NMvRN7zVIlN-tJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxOqWG6ILsd9z67dGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjJ_-eloxI9j8QYeF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyN_03v-z4b87NxCKJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
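Raw responses in this shape can be validated before the codes are stored. The sketch below is a minimal, hypothetical parser: the dimension names match the JSON above, but the allowed category sets are an assumption reconstructed only from values that appear in this sample, not the project's actual codebook.

```python
import json

# Coding dimensions with the category values observed in the sample output.
# ASSUMPTION: these sets are inferred from the example records above; the
# real codebook may define additional categories.
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries instead of failing the batch
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(rec)
    return valid

# Hypothetical two-record response: the second uses an unknown category.
raw = '''[
  {"id":"ytc_example1","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_example2","responsibility":"robot","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]'''

coded = parse_coding_response(raw)
print(len(coded))  # → 1 (the "robot" record is dropped)
```

Dropping malformed records rather than raising keeps a single bad generation from invalidating an entire coded batch; rejected IDs can then be re-queued for recoding.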