Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let me throw out a hypothetical future I would hope AI can create. We as a society have the capital and capability to eliminate, or at least greatly minimize, suffering for our citizens, and instead allow them to thrive. Just like today in some African countries, citizens can’t get ahead because they spend most of their day and resources trying to get water and food; they can’t focus much on learning or improving other things. Today, a very large population in our own country can’t thrive, because they are working two jobs just to pay rent and put food on the table. I’d like to see AI get rid of diseases, physical or mental. I’d like to see AI provide safe housing, education, care. I’m not sure our current system would allow for this kind of utopia, considering that today we are still arguing about providing a free lunch to a kid who by law is required to be at school. If we are ever going to achieve anything close to this, there’s still a whole lot of maturing we’ll need to do as a society. I also strongly doubt the people at the top even want this kind of future, considering that plenty of the hurdles placed on the population today are intentional, as a method of control.
youtube AI Moral Status 2025-07-28T12:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwqJ0AB7ctBiuxPMEV4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz5wh_eNR-t8ivbYCN4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz4pLcsqkJx35jj7iR4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwhwPJoff_UknY5UOF4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwhE1g4S27iEbr5jg94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyeoLTMKPrwE6Q1phl4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwKQd_7DqS62s9c17d4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz32i99cHndVGPi0f14AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgylxTD_mUJGvi5R0Hp4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwFHwwj7e0sOWRPqQZ4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
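A minimal Python sketch of how a raw batch response like the one above might be parsed into per-comment codings and checked against the schema. The allowed label sets are inferred only from the values visible in this response; the real codebook may define more, and the `parse_batch` helper is hypothetical, not part of the actual pipeline.

```python
import json

# Allowed labels per coding dimension (assumption: inferred from the labels
# that appear in the raw response shown above; the full codebook may differ).
ALLOWED = {
    "responsibility": {"unclear", "ai_itself", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "ban"},
    "emotion": {"fear", "indifference", "outrage", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of codings) into a
    mapping of comment id -> coding, skipping any record whose labels
    fall outside the known schema."""
    coded = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(dims.get(d) in labels for d, labels in ALLOWED.items()):
            coded[rec["id"]] = dims
    return coded

# Example with a one-record batch (the last entry from the response above).
raw = (
    '[{"id":"ytc_UgwFHwwj7e0sOWRPqQZ4AaABAg",'
    '"responsibility":"unclear","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"approval"}]'
)
coding = parse_batch(raw)["ytc_UgwFHwwj7e0sOWRPqQZ4AaABAg"]
print(coding["emotion"])  # → approval
```

Keying the result by comment id makes it straightforward to join each coding back to the original comment, as the Coding Result table above does for a single comment.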