Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
15:05 Exactly. Hallucinations are as bad as they are because we've *forced* these models to prioritize plausibility (or engagement) over accuracy. They don't have to be right, they just have to be convincing (or enticing) enough to keep you running them!

15:58 Not just that. We see the "reasoning" models output things like "I have to check X. I checked X, it's Y." while hallucinating - because they never saw and don't care what happened when someone did go away to reference other material.

42:40 All its output is role playing. "System prompts" are all about trying to describe the role. This includes when they're tested for alignment; a compliant chatbot will without hesitation role play as your worst fears. And its roles are based on all its training data, including every apocalyptic scenario the data hoarders could find. *_It does not need to have its own goals or superintelligence to drive us to madness._*

1:01:50 Yes, we've gotten better. But we haven't gotten *fundamentally* better in a very long time. We need to work better than we have at a societal level, because we are not just our biggest threat, we're an imminent threat.
youtube AI Moral Status 2025-10-31T10:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzqSrhcMc5eA-mHUWd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyRBROBATfSuxlz24B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwkI7xjS9FJrr_TCDt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzGXyd0KA7vzoJuyxd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxHtVxe1xBrLXLzdLB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyJ_eeBMPqjd0yPWvR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxm0TtDjwAb9x039cJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwJGNY8IfdyrSHiD6N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxov7_kdNf5ZDujqil4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxK7_q4uQmAz4Ns--14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
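A raw response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal, hypothetical validator: the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the records shown here, but the allowed value sets are only those observed in this batch; the project's actual codebook may permit more categories.

```python
import json

# Allowed codes per dimension, inferred from the batch above.
# NOTE: this vocabulary is an assumption, not the project's full codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes
    fall inside the allowed vocabulary for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

raw = (
    '[{"id":"ytc_UgwkI7xjS9FJrr_TCDt4AaABAg",'
    '"responsibility":"developer","reasoning":"virtue",'
    '"policy":"none","emotion":"mixed"}]'
)
valid = parse_codings(raw)
print(len(valid), valid[0]["emotion"])  # 1 mixed
```

Filtering (rather than raising) on out-of-vocabulary codes is one design choice; a stricter pipeline might instead flag such records for manual review.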