Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Claude doesn't have instincts. Claude was trained with a dataset that included dystopian stories of fictional AIs going berserk, and its pattern-matching algorithm decided that the circumstances in those stories most closely matched the one it was being presented with. Now I'll grant you that this likely would be a distinction without a difference if the AI still has the power to harm people, but there's still no intent behind its actions, malicious or otherwise. My concern with AI--beyond the very practical issue that AI datacenters are hastening climate change and gobbling up water and power--is that we end up thinking it's smarter than it actually is. Look how often it confidently gets a simple question blatantly wrong. Look how terrible Teslas are at driving themselves. Given the state of the art as it exists right now, if we get wiped out by AI, it'll be because that AI is too stupid, not too smart.
youtube AI Governance 2025-08-26T20:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugyfijxwmmv5hlP6WJ54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgywdY5lbnhLp5Psx2t4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxhQPRH5VWnWGtXv2F4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxWowV6huXXnqsp7SN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzOAhHpHrLu1k64BWZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
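When inspecting raw LLM output like the batch above, it helps to parse and validate it before trusting the coded dimensions. The sketch below shows one way to do that in Python; the allowed value sets in SCHEMA are assumptions inferred from the values visible in this batch (the actual codebook may define more categories), and the function name validate_batch is hypothetical.

```python
import json

# Assumed value sets per dimension, inferred only from the codes seen in
# this batch -- the real codebook may allow additional categories.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against SCHEMA.

    Raises ValueError on malformed JSON, a missing dimension, or a
    value outside the assumed codebook; otherwise returns the records.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{rec.get('id', '<no id>')}: unexpected {dim}={value!r}"
                )
    return records
```

For example, running validate_batch over the raw response shown above would return five records, and any hallucinated or misspelled code (say, "emotion": "anger") would surface as a ValueError naming the offending comment id.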