Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just a comment on "hallucinations". While I'm as concerned as you about the problems with these systems, calling the anomalies we see "hallucinations" really misunderstands the kind of mental processes that these systems are attempting to emulate. At an extremely basic level, the process of reasoning (even for humans) involves (1) generating a new idea from previous ones, (2) analyzing that thought for compatibility/incompatibility with prior ideas, (3) repeat. Step 1 in this process is not fundamentally different from what are called "hallucinations" in an AI. New ideas overwhelmingly do not comport with prior knowledge, and it is only through the sifting process of step 2 that we end up with (hopefully) reasonable conclusions. Anyone who has tried to write anything and then gone back and read it and seen all the weird things that they wrote that make absolutely no sense on a second reading will understand that. The idea of "eliminating hallucinations" in AI is as absurd as eliminating step 1 in the process above. How are you ever supposed to come up with a new idea, if you never generate any ideas that are not (at least mathematically) already in your priors? We can certainly hope to limit the scope of error, but error is inherent to the process of reasoning. It cannot be eliminated entirely.
youtube AI Moral Status 2025-10-30T20:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyxWkZDXLDME-fYhEZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_Ugw1PCHJW4gLvC6wQIN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyrNigDK8aED1XKiK94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzj8Z_Zm93--2u2OwJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy8yE32C1YttioFQ554AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5SJWy13XghxRHVft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"mixed"},
  {"id":"ytc_Ugyd_R36BObUKSp2C_N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxmdQyRuIhy-6PAnFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzqi8MbySlCA33BHk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw8_HNK7NKjFS0CEQt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
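A response in this shape can be checked programmatically before the codes are stored. The sketch below is a hypothetical validator, not part of the coding pipeline shown here; the allowed-value sets are inferred from the values observed in this one response rather than from a documented codebook, so treat them as assumptions.

```python
import json

# Assumed label sets, inferred from the values seen in the raw response above.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "ban", "none"},
    "emotion": {"curiosity", "indifference", "approval", "outrage",
                "fear", "mixed", "resignation"},
}

def validate(records):
    """Return the ids of records with a missing or out-of-schema value."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append(rec.get("id"))
                break
    return bad

raw = ('[{"id":"ytc_Ugw1PCHJW4gLvC6wQIN4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
print(validate(json.loads(raw)))  # → []
```

A record that fails the check (for example, an unexpected `responsibility` value) is reported by id, so it can be re-coded rather than silently stored with the displayed dimension/value table.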