Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Right now, we're using AI as a means to convert information, largely to allow humans to communicate more effectively. The path looks like this:

`Human has idea -> asks AI to develop it -> asks AI to make it presentable to another human -> presents to other human -> other human makes decision`

Soon, the person receiving the information will usually use AI to interpret it:

`... -> presents to other human -> other human uses AI to change it and support decision -> human makes decision`

Then, the other human will delegate the decision. And the original human already used AI to get the idea in the first place. So we can easily end up with:

`AI recommends idea -> human asks AI to develop and make presentable to human -> human presents to other human -> other human uses AI to recommend decision -> other human uses AI's choice`

Why bother with the humans at all? They're just slowing things down and costing money.

`AI recommends idea -> AI communicates idea in its own code -> other AI makes decision`
Source: reddit · AI Jobs · Unix timestamp 1772012828 · score 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_o76q6u5","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"rdc_o76r31j","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_o7at9wf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_o7brruj","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"rdc_o7tp0uk","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"} ]