Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree w/the notion that the 1950's style skits that ppl are doing directing bigotry at bots and AI are just white ppl who really wanna direct slurs at other groups. I think everyone, white ppl included are frustrated and even angry that this tech is being forced down everyone's throats. It's threatening ppl's jobs as you've pointed out. Data centers to support this AI garbage are threatening the health, water supply and electricity supply of the communities they're built in. Everyone's electric bills are also sky rocking bcz of AI. There's a lot of rage out there and it's righteous. I feel it too. I fucking despise AI. With this said, I do avoid using the term "clanker" bcz I learned the hard way that it's horribly offensive to ppl with AI loved ones. Some ppl used that term in chat in my online bookclub. And the 2 ppl there with AI partners flipped their lids. I'm still utterly shocked that mods didn't do anything about it bcz they're very good at dealing with anything involving slurs. But they still haven't even added "clanker" to the list of words you can't say in chat. I always joked about it being a slur, but then I figured out the hard way that apparently it really is a slur. So I just don't say it. Bcz you never know if someone in an AI relationship is around.
youtube 2025-09-17T17:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzP7UubbpolowWFerN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwwzLjOj0CS8FTpXvB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxdxqOd2mygHi-QDj14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxI5r2TG5sqTBKoWgl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx5PrxzJnEpO2lxipV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwPkgJz0LYqkVLlCX14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxkoCvwU2hNiI3_nhN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyvck4ghsqjVq1Q1Gh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwhztwL0qpJIkHZBYd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyRwLmZUwiZArTGDfh4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
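As a minimal sketch of how a raw response like the one above can be turned into per-comment codes, the snippet below parses the JSON array and validates each entry against the set of values that appear in this report (the full codebook may allow more values; `parse_codes` and `SCHEMA` are illustrative names, not part of the pipeline shown here):

```python
import json

# Allowed values per coding dimension, inferred from the codes that
# appear in this report; the actual codebook may define additional values.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment id, rejecting any value outside SCHEMA."""
    codes = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value {row.get(dim)!r} for {dim}")
        codes[cid] = {dim: row[dim] for dim in SCHEMA}
    return codes

# One entry from the raw response above, used as a worked example.
raw = ('[{"id":"ytc_UgyRwLmZUwiZArTGDfh4AaABAg","responsibility":"distributed",'
       '"reasoning":"virtue","policy":"none","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytc_UgyRwLmZUwiZArTGDfh4AaABAg"]["reasoning"])  # → virtue
```

This matches the coding result shown for the comment above: the entry for that id carries responsibility "distributed", reasoning "virtue", policy "none", and emotion "mixed".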