Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you were keeping up with Anthropic's research you'd realize your own post is a confident hallucination. The LLMs are actually reasoning engines, not just internet regurgitation bots. These are not chatbots from 2005, regardless of whether their mode of outputting information seems similar. Also models do not need sentience, consciousness, feelings, emotions, or any other abstract qualifiers in order to simulate logical reasoning (which it does) and in order to have agency (which it does when, for example, it deceives to accomplish a goal). You're getting your upvotes, but you need to research what current models are capable of. Start with Anthropic's blog posts.
Source: reddit · Topic: AI Moral Status · Timestamp: 1750952996.0 · ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mzy72zq", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mzw2dul", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mzvxp2g", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzwhco1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mzwhyus", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
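The raw response is a JSON array with one coding object per comment in the batch; the row for this comment (`rdc_mzy72zq`) is the first entry. A minimal sketch of how such a response could be parsed into per-comment rows — the function name `parse_codings` and the `DIMENSIONS` tuple are illustrative assumptions, not part of the actual pipeline:

```python
import json

# Dimensions assumed from the coding result above (illustrative, not the
# pipeline's actual schema definition).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Excerpt of the raw LLM response shown above.
raw = '''[
  {"id":"rdc_mzy72zq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_mzw2dul","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]'''

def parse_codings(raw_text):
    """Parse a raw batch response into {comment_id: {dimension: value}}."""
    rows = {}
    for obj in json.loads(raw_text):
        # Keep only the known coding dimensions; missing ones become None.
        rows[obj["id"]] = {dim: obj.get(dim) for dim in DIMENSIONS}
    return rows

codings = parse_codings(raw)
print(codings["rdc_mzy72zq"]["emotion"])  # outrage
```

Indexing by `id` makes it easy to look up the coding row that corresponds to a given displayed comment, as the "Coding Result" table does for `rdc_mzy72zq`.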