Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
this is a genuinely important distinction that gets lost in most AI discourse. the whole AGI framing is a category error. we keep asking "will machines become conscious" when the actual question should be "what values are we encoding into systems that already shape decisions about millions of people?"

the alignment problem everyone talks about is usually framed as "how do we make superintelligent AI care about human values." but the harder problem is much more mundane: we cannot even agree on what human values should be encoded, and the people building these systems are not representative of the people affected by them.

the moral intelligence angle is also fascinating because it points to something current models are structurally bad at — context-sensitive ethical reasoning. claude and chatgpt can recite ethical frameworks all day but put them in a situation with competing values and they either hedge endlessly or defer to whatever the system prompt baked in.

the real risk is not rogue superintelligence. its boring corporate deployment decisions made without any ethical framework at all, scaled to millions of users. that is already happening and nobody is stopping it.
Source: reddit · Viral AI Reaction · 1776989733.0 · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       contractualist
Policy          regulate
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
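A coding result like the one above can be sanity-checked before it is stored. The sketch below validates a record against per-dimension value sets; note that these allowed sets are assumptions inferred from the coded responses visible on this page, not an official codebook.

```python
# Hypothetical validator. The allowed value sets below are assumptions
# inferred from the coded responses shown on this page, not an official schema.
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference", "mixed", "unclear"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimensions whose coded value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
record = {"responsibility": "distributed", "reasoning": "contractualist",
          "policy": "regulate", "emotion": "approval"}
print(invalid_dimensions(record))  # → []
```

An empty list means every dimension carries a recognized value; any unexpected or missing value is reported by name so the record can be flagged for re-coding.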
Raw LLM Response
[
  {"id": "rdc_ohwbqpk", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ohx6fhf", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_ohxye78", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_ohydzo1", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ohyv3kr", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"}
]
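The model returns one JSON array covering a batch of comments, so recovering the coding for a single comment means parsing the array and indexing by `id`. A minimal sketch using only the response shown above (the id `rdc_ohx6fhf` is the entry whose values match the coding-result table for this comment):

```python
import json

# Raw LLM response as shown above: a JSON array of coded records.
raw = '''[
  {"id":"rdc_ohwbqpk","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_ohx6fhf","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_ohxye78","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_ohydzo1","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_ohyv3kr","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]'''

# Index the batch by record id for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Pull the record that matches this comment's coding result.
coded = records["rdc_ohx6fhf"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → distributed contractualist regulate approval
```

Keeping the lookup keyed on `id` rather than array position guards against the model reordering records between batches.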