Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Decisions are supposed to increase in complexity / responsibility as you go further up an organisation. Example: if a decision is on the Presidents desk (or in this example the CEO's) it should be because there is no clear 'right' decision, but a decision has to be made even if it will make some happy and others unhappy. So do you think we are more likely to automate quickly with AI (a) the lower level yes/no type decisions Or (b) the highly complex multi variable decisions. There will be a point when AI is much better at answering the complex then humans, but we will have automated away all the other work before that.
Source: reddit · Topic: AI Responsibility · Posted: 1690035223 (2023-07-22 UTC) · ♥ 3
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_jsygpoz","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_jsytybh","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jszzfx5","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_jt00ohd","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_jsxa8pc","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
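A raw response like the one above is a JSON array of coded comments keyed by `id`. A minimal sketch of how such a response might be parsed and validated in Python follows; the `ALLOWED` codebook is an assumption inferred only from the values visible in this example, and the full codebook may contain more categories:

```python
import json

# Assumed codebook, inferred from the example response above; the real
# coding scheme may allow additional values per dimension.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"regulate", "industry_self", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment id to its codes. Values outside the assumed
    codebook fall back to 'unclear'."""
    coded = {}
    for item in json.loads(raw):
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = item.get(dim, "unclear")
            codes[dim] = value if value in allowed else "unclear"
        coded[item["id"]] = codes
    return coded

raw = ('[{"id":"rdc_jsygpoz","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(parse_raw_response(raw)["rdc_jsygpoz"]["responsibility"])  # ai_itself
```

Falling back to "unclear" for out-of-codebook values mirrors how the table above records dimensions the model could not (or did not validly) code.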