Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "It's mostly the people who want a trillion dollars who say AI Jesus will show up…" (ytr_Ugy_47Gzb…)
- "Heh! Zuboff's Laws have been deleted from Wikipedia: I wonder why: Went back in…" (ytc_Ugyw4ZjG7…)
- "This happened to me by scammer I just ignored and blocked. Nothing ever happened…" (ytc_Ugx0UeOzD…)
- "I used to work for Waymo, and it uses guard rail like mapping technologies with …" (ytc_UgzedWS9m…)
- "The ai "Art" wouldn't exist without the work of thousands of artist, that the mo…" (ytr_Ugyy43gvk…)
- "When I asked my gemini ai to set an alarm, it told me to do it myself…" (ytc_UgzjSXRSF…)
- "Consumer AI is pure trash. Im a software engineer and even I know trash when I s…" (ytr_Ugx7HkCbf…)
- "Yeah. When I heard this story at first, I was like "okay, putting bromide into y…" (ytr_UgyhqIeN5…)
Comment
This is *why* ChatGPT and LLMs in general are so dangerous. They’re just unleashed onto the public as science communicators with no QA testing. Yes, people often need to be told multiple times to not do something stupid. The difference is that a human expert will often *pick up on that*. LLMs don’t have that training and aren’t required to have it.
youtube · AI Harm Incident · 2025-12-02T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzUBcU4DSZbXIQaAfh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwX50B-MrjomoKQGu94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzPatLtsj91sLLXGRV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwA1K7rtMfStX2KXvx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyom-g-AkC3wP68yZJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwfYh-db8SaZy-kVTd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwqsqhCWUyOeYKG23Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxTTjflKLyx8ehnflp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyesUj-YTXC6PY1xL14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7oZgAZiLDR8qrPmd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
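A raw response like the one above can be parsed and sanity-checked before its rows are stored as coded dimensions. The sketch below is a minimal example, assuming the allowed values are exactly those seen in this sample output (the real codebook may include more categories, and the full comment IDs here are illustrative, taken from the JSON above):

```python
import json

# Raw model output, truncated to two entries for brevity (taken from the
# sample response shown above).
raw = """
[
  {"id":"ytc_UgwfYh-db8SaZy-kVTd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzUBcU4DSZbXIQaAfh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

# Allowed values inferred from the sample output; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate(rows):
    """Check every dimension against the codebook and index rows by comment ID."""
    by_id = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} {row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

coded = validate(json.loads(raw))
print(coded["ytc_UgwfYh-db8SaZy-kVTd4AaABAg"]["policy"])  # → regulate
```

Indexing by comment ID mirrors the "Look up by comment ID" view above: once validated, a single dictionary lookup recovers the coded dimensions for any comment.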