Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Accountability is also an issue I dont see talked about. We can hold a human accountable. But how you do do that for an AI? If something goes wrong, and there is no human in the chain, then there is no way to get "justice" or closure. You cant "fire" an AI. What, do you remove an AI and replace it with one that now knows not to make that error?
Source: reddit · Topic: AI Responsibility · Timestamp: 1606066510.0 (Unix epoch) · ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_gd8fpcs", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_gd8htd8", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_gd8jjnz", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_gd8ks4p", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_gd8kx8e", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "outrage"}
]
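A minimal sketch of how a raw batch response in this shape might be parsed back into per-comment codes. It assumes only the JSON-array format shown above (field names taken from the response itself); the helper name `codes_by_id` is illustrative, not part of the tool:

```python
import json

# Raw batch response as returned by the model: a JSON array of
# per-comment coding records, each keyed by a comment id.
raw = """[
  {"id": "rdc_gd8fpcs", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_gd8htd8", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_gd8jjnz", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_gd8ks4p", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_gd8kx8e", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "outrage"}
]"""

def codes_by_id(raw_json: str) -> dict:
    """Index the batch response by comment id for per-comment lookup."""
    return {rec["id"]: rec for rec in json.loads(raw_json)}

codes = codes_by_id(raw)
# The record for the comment shown above:
print(codes["rdc_gd8ks4p"]["responsibility"])  # → distributed
```

Indexing by `id` is what lets a verification view like this one line up one record from the batch with the single comment being inspected.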