Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a positive thinker. But more so I'm a realist. The existential danger for humanity isn't AI per se, it's the psychologically unstable bad actor(s) in the world who will use it destructively, unwittingly believing it will give them some kind of ill-perceived supreme advantage over everyone. These people are out there, among us. Watch the idiocy that TikTok creators produce; look at the investor-serving greed of corporations; look at all the hackers breaking into govt computer systems; look at the AI developers themselves (ChatGPT, Claude, Grok, etc) and their need to make AI technology that beats all their rivals, all while not being sufficiently concerned about free-agent AI-bot capabilities to dominate/control humans (not unlike how a human controls their pet). These people's desire to win/control/dominate outweighs their desire for humanity's wellbeing. Even Anthropic recently took the safety guardrails off of Claude because they felt they weren't able to keep up with their competition, that safety concerns were holding them back. It's not looking like the progression of digital technology--now with AI tech leading that developmental edge--is going to end well for us all. Simply look at the trending curve the last couple decades: again, it's about winning,/domination/profits, not about human wellbeing. The electricity required to operate AI systems will compete with what's needed for everything else on the planet. Which do you think will win out if there isn't enough electricity to go around? And this AI progression can't be stopped, won't be stopped: our U.S. govt doesn't have the power; no AI company wants to give up advantage to a competitor since it's about the winner taking all. No country, e.g., China, wants to slow down or halt its AI progress hoping the U.S. won't use theirs to take advantage of them. And vice versa. I hope I'm wrong. I want to be wrong. 
A big hint for going forward: get over your fear of death (that's a healthy idea regardless of AI's emergence; get over the fantasy that "everything will be okay" and blindly trusting our govt to "fix it all"; get right with Reality; try to find sufficient means to deal with the AI tsunami "beast" that's uprising in our midst that is becoming smarter than an of us and has no compassion for us.
youtube AI Jobs 2026-03-31T00:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzELiI72ariynVwJxl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyQaiH_cSBaYaCHKbR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxYwWCG2hXhdXByOyZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxfEDkoHyOKJaJBwO94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx3tSD46QspNcCPg054AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzRmV_hqAHPA9Q07bd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzgWhHzIG1ysfZbpcB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxRXMGOkmdbbjjMSix4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgynHY8c4JPiHjQyP7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzR_4r9ui5hvTNGCFd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
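The raw response is a JSON array in which each object carries one comment ID plus the four coding dimensions shown in the result table. A minimal parsing sketch (assuming the response is valid JSON with exactly those field names; the `raw` string below is shortened to two illustrative entries, and the function name `parse_codes` is hypothetical, not part of any pipeline):

```python
import json

# Shortened stand-in for the raw LLM response above (two of the ten entries).
raw = '''
[
  {"id":"ytc_UgzgWhHzIG1ysfZbpcB4AaABAg","responsibility":"user",
   "reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzR_4r9ui5hvTNGCFd4AaABAg","responsibility":"ai_itself",
   "reasoning":"deontological","policy":"ban","emotion":"fear"}
]
'''

# The four coding dimensions every record is expected to contain.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse the model output and verify each record has every dimension."""
    records = json.loads(text)
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
    # Index by comment ID, keeping only the coding dimensions.
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS} for rec in records}

codes = parse_codes(raw)
print(codes["ytc_UgzgWhHzIG1ysfZbpcB4AaABAg"]["emotion"])  # fear
```

A check like this catches the common failure mode where the model drops a field or returns prose instead of JSON, before the codes are written back into the result table.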