Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans have been the smartest thing on the planet for a really long time. The idea that there's something that will be smarter some day (if it isn't already) can be worrisome. But that's at odds with the fact that ChatGPT infamously cannot count the correct number of "r's" in "strawberry", or just makes up random answers to questions it doesn't know the answers to. Surely, we think, if it were truly that smart, it wouldn't struggle with such basic things. It's a little like knowing that alligators are super dangerous but then learning that it's easy to just wrap your arms around them and hold their jaws shut. Both things seem like they should not be true, because they seem to contradict each other. But I think we need to remember that AI is still in its infancy. In a few years it's going to be smarter than people in every measurable way, not just a few of them, and the "if they're so smart, how come they can't even do x?" questions will be a thing of the past.
reddit AI Moral Status 1750970900.0 ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mzy6szd", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzy836p", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mzy8xr9", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzydnd0", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzym0g5", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
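The raw response is a JSON array covering a batch of comments, while the coding result above shows only the record for this one. A minimal sketch, assuming Python and using the ids and values from the response above, of how such a batch response might be matched back to an individual comment:

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment,
# each carrying an id plus the four coding dimensions shown above.
raw = """[
  {"id": "rdc_mzy6szd", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzy836p", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mzy8xr9", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzydnd0", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzym0g5", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]"""

# Index the batch by comment id so each coding can be looked up directly.
codes = {item["id"]: item for item in json.loads(raw)}

# The id rdc_mzy836p carries the values shown in the coding-result table.
result = codes["rdc_mzy836p"]
print(result["reasoning"], result["emotion"])  # consequentialist fear
```

The id-keyed dictionary is an assumption about how one might consume such output; the viewer's actual extraction code is not shown here.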