Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI is to "improve humanity", surely "solving racism" is part of it. (Prove: Even poor people are living better lives today than the most powerful kings, not-that-many centuries ago. Like, unless you are the poorest person in your area, you have access to daily warm shower, there's a supercomputer in your pocket with instant access to 95% of all human knowledge, and you can talk to everyone you know in 5 seconds - by calling them, if you forgot what we used to use phone for. No king could say that and you don't have to travel a full century back in time for that to be true. Without equality becoming a (imperfect) thing, Apple would never invent smartphones, because slaves would not buy enough of them for Apple to exist) But racism is held up by billionaires (even if they are not "true racists", they surely do believe that being white males is what helped them to become billionaires). So part of the solution must be limiting billioanires (not exterminating, absolute wealth equality is also bad, as anyone from communist or post communistic country can attest to). But AIs are held by billionaires, so they will inevitably push it away from anything that helps themselves less than not-the-1%.
youtube AI Moral Status 2025-11-01T09:5…
Coding Result
Dimension         Value
Responsibility    none
Reasoning         consequentialist
Policy            none
Emotion           approval
Coded at          2026-04-26T23:09:12.988011
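For downstream analysis, these coded dimensions map naturally onto a small typed record. Below is a minimal, hypothetical Python sketch of that schema; the Literal value sets are inferred only from the labels visible in the raw response further down and may not be exhaustive.

```python
# A hypothetical schema for one coding result; the allowed values below are
# inferred from labels visible on this page and may not be exhaustive.
from dataclasses import dataclass
from datetime import datetime
from typing import Literal

Responsibility = Literal["none", "ai_itself", "developer", "company", "user"]
Reasoning = Literal["consequentialist", "deontological", "virtue"]
Policy = Literal["none", "liability", "regulate", "industry_self"]
Emotion = Literal["approval", "indifference", "outrage", "fear", "resignation"]

@dataclass
class CodingResult:
    comment_id: str               # e.g. "ytc_UgzV2V0G7yDp1WKgOBp4AaABAg"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: datetime            # e.g. fromisoformat("2026-04-26T23:09:12.988011")
```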
Raw LLM Response
[ {"id":"ytc_UgzV2V0G7yDp1WKgOBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy8iSTyp9NyAVDi8ft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxTM3_p9Zs990HgOTt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzhHjhvvE2BcdADgod4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy9T-w0Clu46j3hNnB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxTGxgEQnozxdvv0Kp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzxn4xso09goJWUAI54AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_UgwptZLUKuh6knkJaW54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw928RQgF47WVOLCOd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwgcqhLlRnka9l9Ia14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]