Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Before I understood the ethical implications of generative large language models in the cloud, I would mock them and it produced good results. For example, I might say stuff like "Ooooh, my bad! I was under the impression that you were an advanced artificial intelligence that understood intentions buuuut it turns out glorified data organisms like you are more worthless than my inguinal bacteria after I forget to take a shower." It would then apologize and give the no BS answer. The point others have made about conditioning yourself to be less polite makes sense on the surface, but ultimately it's a myopic take. I kill characters in video games all the time, this doesn't mean I'm conditioning myself to be a killer. It's just a tool and I don't equivocate tools with humanity. If my rude prompts are productive that's all that matters, plus it's a nice way to take a load off when you've dealt with mean-spirited customers all day. If you treat these tools like humans then they'll confidently hallucinate and gaslight you with plausible sounding but ultimately incorrect answers. Being polite means they'll just tell you what you want to hear. When you poke and prod, you also realize how dumb these models actually are. Not worried about an AI apocalypse at all. I'm much more worried about the ecological impact of running these AI data centers though.
Source: youtube · Video: AI Moral Status · Posted: 2026-03-08T16:2… · ♥ 1
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgwIlXrAeq2JfcnwpiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugx5RlojNA4EVHz-O7h4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzcxjaZ4s3AE8L7Wv94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyL3d-UZvi2ogSCW_J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgxcRmEiv73iX6DrRc54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwSQvrM2LBPY8XSCtx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw78_K7Jugkl8TNHxN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyRy8JWv8FXN9YAT554AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugxg_K9BFHCgNJoXi2R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzjjwWu_8LThGhsXGR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"} ]