Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> We've seen this in organizations long before AI. Indeed, there necessarily must be distance between originator/decision point and effectuator in a system, or it's just the same as the decision maker doing the task, which removes the benefit of complex, specialized systems to begin with. That said, that's not a reason not to try to reduce the accountability gap, but its existence seems to necessarily scale with the system's complexity and layers of abstraction (e.g. policy, strategy, etc.), and as our systems become increasingly complex, the gap must necessarily grow with it. Not to mention that accountability is a social construct that is frequently divorced from nuance and proportionality, as it usually just returns to "punishment" unless very strictly corralled. "Whose fault" is not a quantitative measure, after all, and we generally don't have much in the way of well-developed systems for dealing with proportional fault anyway - simply varying the punishment is rather crude and doesn't result in useful behavior adjustment, at least not with the sort of nuance required for these situations.
reddit · Viral AI Reaction · 1776998133.0 · ♥ 1
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | mixed                      |
| Policy         | none                       |
| Emotion        | resignation                |
| Coded at       | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_oi19rxc", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ohugirm", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_ohxvt9a", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_ohv3a5o", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ohv3kiq", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
```
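The raw response is a JSON array of per-comment codes, keyed by comment `id`. A minimal sketch of how such a response could be parsed to recover the codes for one comment (the `id` and field names here are taken from the array above; the lookup pattern itself is an assumption, not the pipeline's actual code):

```python
import json

# A trimmed copy of the model's raw JSON response (one entry shown).
raw = '''[
  {"id": "rdc_ohxvt9a", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]'''

# Index the array by comment id so any coded comment can be looked up.
codes = {entry["id"]: entry for entry in json.loads(raw)}

result = codes["rdc_ohxvt9a"]
print(result["emotion"])  # resignation
```

The values retrieved this way match the Coding Result table above (responsibility: none, reasoning: mixed, policy: none, emotion: resignation).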