Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A documentary showed young Japanese teens saying that their current culture is still caring about how others can be impacted by their actions. They care. Result? For decades Japan, other Asian nations and Nordic countries have had a declining crime rate. What's to fear with AI? The same we fear with selfish, "me first" education that puts first the well being, fame and wealth of these few individuals above anybody else. YES, AI can be educated in ethics. YES AI can (like we do with our kids) be indoctrinated in helping the needy, weak, and strong alike because life is priceless. However, up to recently, AI was being built in a technological race for computational power, and the award-winning prowess of self-awareness (they wet their pants over that one). Just look at Isaac Asimov's 4 laws of robotics. If AI is not following some ethical rules, some restraining concerns over human welfare, then the source of the problem stems directly from a very dangerous, criminal bad parenting. AI can be re-programmed to do well, but we need to demand the programmers and their corporate owners to adopt and adhere to ethical boundaries themselves first. Criminal parenting begets criminal offspring. But it can change. It may not be too late, yet. Cheers!
youtube AI Governance 2023-07-07T23:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy33K6tFNrvXkSbw5F4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyi-KA9XwA8X5SbI2V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxWABXiXj4cktaia0F4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzIhSiSvX89M6RgcGV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz9ImhFJaY6LbhMij94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw-tC4feP1IkR2CqHp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxiQCXMlcwB1qUfRlB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxtv-zNJogj9k392-B4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwqt_CmHKeF2RGILaJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgzxunE2Bw4fz-5SnRZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
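A raw response like the one above can be parsed and sanity-checked before use. The following is a minimal sketch, not the tool's actual code; the allowed label sets are inferred from the values observed in this sample and may not cover the full codebook.

```python
import json

# Label sets observed in the sample above (assumption: the real codebook may
# include additional values for each dimension).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "fear", "resignation", "mixed", "outrage", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject records with missing
    dimensions or labels outside the expected sets."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records

# Example with the first record from the response above:
raw = ('[{"id":"ytc_Ugy33K6tFNrvXkSbw5F4AaABAg","responsibility":"none",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
codings = parse_codings(raw)
```

Validating before ingestion catches the common failure mode where the model invents a label outside the codebook or drops a dimension from a record.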