Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You both sounded either reserved or maybe speechless at the concept of there being people who don't see humanity as something intrinsically worth preserving. I keep seeing this reaction from prominent people in everything from politics to media and especially in tech. There seems to be this massive gulf in our understanding of the state of humanity in the current day. My worry about AI isn't what it might do someday, its what it is doing today. Social media, capitalism and AI are all HURTING PEOPLE. Almost all people a little, a few people a whole lot, while a very very few people are benefiting. I am worried about the canaries. The few who are being killed by these things, right now. They are killing themselves, murdering people, they are dying of neglect and exposure in our streets, and a growing portion of us will find ourselves in their shoes if we don't radically change course. Immediately. You can blame it on their innate susceptibility to AI psychosis, or condemn them for their moral failings or blame drugs or whatever helps you sleep at night, but it WILL be someone you love someday soon. It might even be you. The path we are on, even at THIS stage, is as unsympathetic and inhuman as any super-intelligence. We will not achieve anything of benefit to humanity in a system with such perverse incentives, no matter how cutting edge the technology happens to be. The inhuman system will continue doing what it has been doing: pushing forward and elevating the worst. The fact that so many are willfully blind to this fact is why I and many others question if (or even simply do not belive) such a species is really anything worth saving.
Source: youtube · AI Moral Status · 2025-11-19T11:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgySxcEJMp9l9ZaI1Ll4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwQOmZkw3e_hySJSw14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwJ7jRLbOJxw9hyYRp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgxtoVq2yGiQSfGK04F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwXuqLoTqnGtRtzFop4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw5jMnF_70HzNoBM9d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgyuJoX-M97cR0PQ_dt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw1dUx-NEI99326TQJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxJ3oVfL6G5OGrKval4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxJQREHFjl7sTHzf3l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"} ]