Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below (a lookup sketch in Python follows the sample list).

Random samples (click to inspect):
- "You do realise your going to die anyway. Just relax and enjoy life. AI is aweson…" (ytc_UgxNRLDpE…)
- "SAD FACT: without proof / This happen icon china / And there is rumor he had injuri…" (ytc_Ugz5Lg25C…)
- "Currently the real unemployment rate in the US sits between 15 and 19 percent of…" (ytc_Ugy2nTVzG…)
- "Just wait till they begin implanting their networked AI controlled ID and biomet…" (ytc_Ugy5Dh4Mq…)
- "Jobs are not being replaced with AI. That is their new excuse. They being sent t…" (ytc_UgyJuoOtF…)
- "Talking to ChatGPT for a few minutes and then asking it to make some general sta…" (ytc_UgwJYc94o…)
- "Unironically, AI would be better at replacing CEOs and managers than it would re…" (ytc_UgzyTiscc…)
- "I see ai as just another way of doing searches - minus any good to analyse the s…" (ytc_UgyfUCGFg…)
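The lookup flow can also be reproduced offline. A minimal Python sketch, assuming the coded output has been exported to a JSON file shaped like the array shown under "Raw LLM Response" below; the file name `raw_llm_responses.json` is a hypothetical placeholder, and the full comment ID is taken from that sample (the IDs in the cards above are truncated).

```python
import json

# Load an exported copy of the coded comments (hypothetical file name,
# shaped like the "Raw LLM Response" array below).
with open("raw_llm_responses.json") as f:
    coded = json.load(f)

# Index every coded record by its comment ID for O(1) lookup.
by_id = {row["id"]: row for row in coded}

# Look up one comment by its full ID; this one appears in full in the
# raw response below.
record = by_id.get("ytc_UgzaG_vHof0oO2dScVZ4AaABAg")
if record is not None:
    print(record["responsibility"], record["emotion"])  # prints: user outrage
```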
Comment

> I think it’s interesting how much the discourse around AI has been around hallucinations as if it’s a strange thing to happen. The AI is being trained to make predictions, not reason or actually do anything with consequences.
>
> There are loads of videos on youtube which use evolution models to learn to play video games. Those models hallucinate less often because they have this concrete feedback of “jump over the enemy = good” or “fell into pit = bad”.
>
> The times they do hallucinate is usually when the reward system is set up imprecisely. An AI may be rewarded on length of time it survives, so it chooses to not play the game. It’s “hallucinating” success but it’s doing the optimal thing given its reward system.
>
> LLMs aren’t rewarded on veracity, they’re rewarded for predicting things frequently. They’ll say “the earth is flat” 9/10 times if they’re trained on flat earth forums because that’s the most likely phrase, and so it’s their best bet when given, “what shape is the world?”
>
> Once we figure out a way to reward accurate information, veracity, or reason, then AI will try to get better at that. Until then, hallucinations are a feature, not a bug.

Source: youtube · Video: AI Moral Status · Posted: 2025-10-30T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
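For downstream analysis it can help to give one coding result a typed shape. A minimal sketch: the class and field names mirror the table above, but the dataclass itself is a hypothetical convenience, not part of the tool.

```python
from dataclasses import dataclass

# Hypothetical typed view of one coding result, mirroring the table above.
@dataclass
class CodingResult:
    responsibility: str  # "developer" for this comment
    reasoning: str       # "consequentialist"
    policy: str          # "unclear"
    emotion: str         # "indifference"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```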
Raw LLM Response

```json
[
  {"id": "ytc_Ugwii2xL_wLw9X4m5sB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwe9A9OhO5R7E63gnF4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxMXjeBo75O87r3vyV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwqrJRQK1baOhiKY994AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxCF-XMAByCkSJHexp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyekbg08B8sdfUGvkR4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzaG_vHof0oO2dScVZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgweU0HcOoZtKj0W0094AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyu95NI4Me3QQ5E1cl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxJnj2av-p6Wwq3Owh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
```
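Since this array is raw model output, it is worth validating before the codes are trusted. A sketch under one loud assumption: the allowed values below are only those observed in this sample, so the real codebook may define more categories.

```python
import json

# Allowed values inferred from the sample response above; this is an
# assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "approval", "outrage"},
}

def validate(rows):
    """Return (comment_id, field, value) triples for out-of-schema values."""
    problems = []
    for row in rows:
        for field, allowed in ALLOWED.items():
            if row.get(field) not in allowed:
                problems.append((row.get("id"), field, row.get(field)))
    return problems

with open("raw_llm_responses.json") as f:  # hypothetical export path
    rows = json.load(f)

for comment_id, field, value in validate(rows):
    print(f"{comment_id}: unexpected {field}={value!r}")
```

Run over the sample above, this prints nothing, since every value falls inside the inferred schema.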