Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For all the value of what AI can do, will humans allow it to actually do the necessary things to benefit humankind when it's time? For who knows how long we have all known the issues of healthcare, clean water, housing, environmental sustainability, etc., but it's always hemmed up with paperwork, money, and all this stuff that puts the logic of saving humans and humankind on the back burner for profits. Are those in power who benefit from these insane amounts of revenue going to just step out of the way and let AI heal the world? I personally can't imagine all these innovations in healthcare coming at a cheaper cost for the average person, who these days could be sent into a bad situation by a trip to the hospital and the crazy expenses you can rack up there. Is AI innovation going to make healthcare affordable in, say, the West, where it's a lucrative business? You can put that same question to various topics and it remains: will humans ever get out of their own way, when even before AI we've clearly known the answers, but money is always the centerpiece needed to move anything? Why wouldn't AI think the concept of money, a made-up concept, is a hindrance to solving physical problems that mean the survival or death of a species? Or why wouldn't it just push some fake zeros around and make humans do what's necessary to do its job, knowing humans operate for monetary gain before survival?
youtube AI Moral Status 2026-03-02T21:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwGa3XRkMYeYoRCgHd4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",       "emotion": "approval"},
  {"id": "ytc_UgyD527ItMQH-D4pYGt4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgwFHtuA0IijlC_-04N4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgyTGHJyu2SQixoM5EN4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgzgGY37RfV1dRRTuxd4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgwJAO5tZ5U2Pxx-o8F4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_Ugx_oPNfNqyGY5GnecR4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzDiMPNiM9fhLFy1Qd4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgytwZPaQQRo3vxRgdh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgwnMBo4cl1TGXoeuAN4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"}
]
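A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the field names match the JSON shown here, but the allowed value sets are inferred only from the codes that happen to appear in this one response, so the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the codes observed in this
# response (the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"unclear", "regulate", "industry_self", "liability"},
    "emotion": {"approval", "mixed", "fear", "outrage", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only rows whose
    dimension values all fall within the known code sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in ALLOWED.items())
    ]

# Example: one well-formed row (hypothetical id) passes validation.
raw = '[{"id":"ytc_x","responsibility":"distributed",' \
      '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]'
print(len(validate_codes(raw)))  # 1
```

Rows with out-of-vocabulary values (e.g. a hallucinated category) are silently dropped here; a production pipeline would more likely log them for manual review.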