Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It’s an insanely good next character/word prediction machine. But people use that to dismiss it as something simplistic like an advanced web search engine, and that’s completely wrong. LLMs have innately developed advanced capabilities behind the scenes that make them stronger at predicting these tokens. To be the ultimate next token predictor, a machine would have to get good at math, logical reasoning, presentation style, etc… and that’s exactly what has happened. This level of reasoning in LLMs is only understood at the surface level, just as it is in the human brain. What are the next 10 words after “give me a script to argue this healthcare claim (that I’ve uploaded) with my insurer, given the background information X?” That’s all based on reasoning capability, and it’s not clear that is something fundamentally different from dreaming up new ideas or thinking new thoughts.
reddit · AI Moral Status · 1750948128.0 · ♥ 6
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n00mhqk", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzw01o8", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzw6999", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzx2sgd", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzw3xgs", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
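The raw response above is a JSON array covering five comment ids, i.e. the model codes a batch of comments in one call. A minimal sketch of how such a batch response could be parsed back into per-id dimension records (the field names are taken from the response itself; the lookup by id is an assumption about how the tool matches codes to comments):

```python
import json

# Raw batch response as returned by the coding model (copied from the log above).
raw = """[
  {"id": "rdc_n00mhqk", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzw01o8", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzw6999", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzx2sgd", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzw3xgs", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]"""

# Index the coded rows by comment id so each comment's dimensions can be looked up.
codes = {row["id"]: row for row in json.loads(raw)}

print(len(codes))                          # number of comments coded in this batch
print(codes["rdc_n00mhqk"]["reasoning"])   # coded reasoning for one id
```

Note that the dimension table shown for this comment (reasoning: unclear) does not match any row in this particular batch, so the displayed codes presumably come from a different call than the one logged here.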