Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I wouldn't be so certain. Obviously the important bit is "If programmed correctly" but that could lead into a No True Scotsman debate so let's ignore that. But as they are now machines are actually far more likely to be racist than humans. Mainly because they look for patterns even if they shouldn't be there which is almost the definition of racism. Add to that an already racist justice system and you get racist robots. To massively oversimplify if you show a machine lots of faces of convicted criminals it's going to notice more are black than it should be. Not obviously understanding concepts like systematic racism it'll just "think" black people are more likely to be criminals. https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses
reddit · AI Moral Status · 1616675330.0 · ♥ 36
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_gs5uuf0", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "fear"},
  {"id": "rdc_gs8kslk", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none",    "emotion": "mixed"},
  {"id": "rdc_gs5wuoo", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "fear"},
  {"id": "rdc_gs5vbh6", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_gs61ghq", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
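A minimal sketch of how a batched response like the one above might be matched back to an individual comment. It assumes the model returns one JSON object per record and that "rdc_gs5vbh6" is the id for the comment shown here; the function name extract_coding is illustrative, not part of any real pipeline.

```python
import json

# Abbreviated copy of the raw batch response above (two of the five records).
raw_response = '''[
  {"id":"rdc_gs5uuf0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_gs5vbh6","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

def extract_coding(raw: str, record_id: str) -> dict:
    """Parse the batch response and return the coding for one record id."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[record_id]

# The coding-result table for this comment comes from the matching record.
coding = extract_coding(raw_response, "rdc_gs5vbh6")
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

A lookup keyed by id (rather than relying on array order) keeps the match correct even if the model reorders or drops records in a batch.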