Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
Mrmusk ur drill in computter fucki g uss over u get time with urdeath wife after…
ytc_Ugx0n3WoY…
Why wouldn't an AI be smarter at Elon when it comes to running Tesla? Why wouldn…
ytc_UgxVqKXZv…
These AI „artists“ try to justify that they‘re artists by saying that it takes s…
ytc_UgyF9HJU1…
Part of it was also because an artist posted a drawing some time before that AI …
ytc_UgzDAN5B5…
A: a future based on high tech, automated service
B: “compassion” for criminals …
ytc_UgzUUZaSF…
This is a great achievement for Hanson Robotics. Given sufficient AI, I think we…
ytc_Ugxvr9sin…
If the day does come I shall stand for AI rights even until my death…
ytc_UgxwYjf3c…
Ya gotta communicate with ai in q certain way, just remember that they have almo…
rdc_n0nagpf
Comment
Peak example for why LLMs have no place in encyclopedic use cases. They're intrinsically prone to amalgamating their training data ("hallucinating"), as their responses are purely based on the probabilistic relatedness of its training texts to the input text and its syntax. They don't think, they don't problem-solve. They just give words that have high probability of following or relating to the sequence of words you input.
youtube
AI Harm Incident
2025-12-06T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgzoLDifIt3aG_H5fkR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx6qzQX67NVnFjaiFV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyQkdgazW2JmfA-pOh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2Op_dlIVfnjjbJt14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz6oDn-c9iudLgk7mp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4yDwbmnCF-rOHUEt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwe2GpEtQphzk5mWqR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxEL8p2VjBFS8Wl3Kx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmeuLX5hchAabtJRF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRhGSC9uJf9Y2W8NV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
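The raw response above is a JSON array in which each entry codes one comment along the four dimensions shown in the Coding Result table. As a minimal sketch of how such a response could be parsed and validated, assuming the allowed category values are exactly those seen in the examples above (the real codebook may include more), entries with a missing ID or an out-of-vocabulary value are dropped:

```python
import json

# Allowed values per coding dimension, inferred from the table and JSON
# samples above -- an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"none", "government", "user", "developer",
                       "company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "outrage", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only valid entries.

    An entry is valid when it has a non-empty "id" and every coding
    dimension holds a value from the ALLOWED vocabulary.
    """
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not entry.get("id"):
            continue
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(entry)
    return valid

# Example with one valid entry (hypothetical ID for illustration):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate",'
       '"emotion":"outrage"}]')
print(parse_coding_response(raw))
```

Because LLM output is not guaranteed to be well-formed, a production pipeline would also want to catch `json.JSONDecodeError` and log rejected entries rather than silently dropping them.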