Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it’s interesting how much the discourse around AI has been around hallucinations as if it’s a strange thing to happen. The AI is being trained to make predictions, not reason or actually do anything with consequences. There are loads of videos on youtube which use evolution models to learn to play video games. Those models hallucinate less often because they have this concrete feedback of “jump over the enemy = good” or “fell into pit = bad”. The times they do hallucinate is usually when the reward system is set up imprecisely. An AI may be rewarded on length of time it survives, so it chooses to not play the game. It’s “hallucinating” success but it’s doing the optimal thing given its reward system. LLMs aren’t rewarded on veracity, they’re rewarded for predicting things frequently. They’ll say “the earth is flat” 9/10 times if they’re trained on flat earth forums because that’s the most likely phrase, and so it’s their best bet when given, “what shape is the world?” Once we figure out a way to reward accurate information, veracity, or reason, then AI will try to get better at that. Until then, hallucinations are a feature, not a bug.
youtube · AI Moral Status · 2025-10-30T22:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugwii2xL_wLw9X4m5sB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwe9A9OhO5R7E63gnF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxMXjeBo75O87r3vyV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwqrJRQK1baOhiKY994AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxCF-XMAByCkSJHexp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugyekbg08B8sdfUGvkR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzaG_vHof0oO2dScVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgweU0HcOoZtKj0W0094AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugyu95NI4Me3QQ5E1cl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxJnj2av-p6Wwq3Owh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"} ]