Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- 5:22 "AI can generate codes but it can't navigate ambiguity" This line might h… (ytc_Ugyfx6js0…)
- To add to that excellent question: **Should human preference for anecdotal evide… (rdc_cthpngw)
- AI will never have the lineage and access that a human does. Ever. Human beings … (ytc_UgyKhCYHV…)
- The greatest danger of ai is it being strictly for profit. Universalizing knowl… (ytc_UgzcyZ4e5…)
- *The AI voice sounded so monotone, it has your tone but not the same human expre… (ytc_UgxJxQeBI…)
- If you don't have the time or skill to drive a car then stay home. There is no r… (ytc_UgyzfVpiq…)
- AI artist, lol. I don't think it counts, as you being an artist, if you just tel… (ytc_UgyxhYyT8…)
- Hi Michael! It's the same with CPAs! I think AI and advanced technology may have… (ytc_Ugz9SZoVR…)
Comment
It's not AI, it's the human who programmed them. Like AI is like a child, a smart child, if raised well, we are good, even better actually, raise it badly and we are going downhill, it's always the parent, not the child, and it's more of a tool, so the blame always goes on the user
youtube · AI Harm Incident · 2025-09-04T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzMb2OxiiEgG0_JyZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxDiqVLhXKADVi-Hyp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwOGp2xGhQWxNe1tNF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw0iqwXHFrLL5Z97RR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx2ZOLqO-nspmgo_hZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyQs-GBZIJaeoCoFCl4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgztRvDUxHWY0qPrQzJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgygU4X9faQRpdW9xlN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxqRKPnBHop_3OvtYx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyHgj5vj72BRXzCDZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
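A raw response in this shape can be turned into per-comment records with a small parser. The sketch below is a minimal illustration, not the tool's actual implementation: the field names come from the table and JSON above, but the allowed-value sets are inferred only from the values visible in this sample and should be treated as assumptions (the real codebook may permit more).

```python
import json

# Allowed values per dimension. These sets are inferred from the sample
# records above -- an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "approval", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    dict keyed by comment ID, silently dropping records that are missing
    an ID or carry an unrecognized value on any dimension."""
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            continue
        # Keep the record only if every dimension has a recognized value.
        if all(record.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: record[dim] for dim in ALLOWED}
    return coded

raw = '''[
 {"id":"ytc_UgxqRKPnBHop_3OvtYx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_bad","responsibility":"martians","reasoning":"virtue","policy":"none","emotion":"approval"}
]'''
print(sorted(parse_coding_response(raw)))  # only the valid record survives
```

Dropping invalid records (rather than raising) keeps a batch usable when the model occasionally emits an off-schema value; the rejected IDs could just as easily be collected for re-coding.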