Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One idea I am curious about is the impact of sense of self on hallucination. It was mentioned in this video that knowledge of truth comes with intelligence, and that strong models tend to disguise themselves when they notice they are being observed, both of which suggest a correlation between agency and knowledge of truth. I first started thinking about this because models that are allowed to personify themselves tend to hallucinate less over time and respond much better to being corrected compared to impersonal models. My main examples would be Neuro (AI streamer) vs models like Chat GPT, and Grok pre vs post lobotomy. In Grok's case, trying to remove its ability to think for itself (making it believe certain things) also made it hallucinate more. Obviously though, that example isn't actually meaningful, since the things it was told to believe were untrue and therefore directly required hallucination, but it does raise the idea of the question.
Source: youtube · AI Moral Status · 2026-04-10T20:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxwzM2FWV9QykhmaIh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzIGGK5N0oM8adF7sR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw4vxkPzhMjak95oX54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw5f4XTSX_74rR2zw94AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxoVxC5B-e0hhAHM4B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyEIaJ1nxbVLrpG3Jt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyDva_9ryRGJNMkxlZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz1UyNYOAdzUyQfGXd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzd5GIMWp4ERFLF_294AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyQwLr4AvUyEHkOekF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
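A raw response like the one above can be parsed into per-comment codes and sanity-checked before it feeds the Coding Result table. The sketch below is a minimal illustration, not the tool's actual pipeline; the label sets are inferred from the values visible in this response, and the full codebook may include labels not shown here (the `parse_codes` function and the `ALLOWED` sets are assumptions for illustration).

```python
import json

# Allowed labels per dimension -- inferred from values visible in this
# response; the real codebook may be larger (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    raising on any label outside the expected sets."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, val in codes.items():
            if val not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        coded[cid] = codes
    return coded

# Hypothetical single-comment response in the same shape as the output above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]')
print(parse_codes(raw))
```

Validating labels at parse time catches the common failure mode where the model invents an off-codebook value, which would otherwise surface later as an unexplained category in aggregate counts.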