Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>> After that, you may agree with the AI or override its decision. So you are wasting your & the AI's time and effort.

I'm not sure it is always so cut and dried. What about algorithms that make life-and-death decisions? Medical decisions, or military robots. What about ethical issues? Some algorithms have absorbed human bias. Can we always trust their decisions in that case? Human decision making is exceptionally complex. Court cases, and the apparatus of committees, public consultation, expert reports, etc. behind legislation, being cases in point. It may be more correct to say: AI can be *most* useful with a human-override component and is less valuable as a stand-alone tool. I wouldn't trust a human with 100% unchecked power of decision making (especially the more consequential it is, like leaders and decision makers in government & the law) - why should I trust an AI that way?
Source: reddit · AI Responsibility · 1606046455.0 · ♥ 175
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_gd9ae7h", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_gd8bo12", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_gd7gb4h", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_gd7yeih", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_gd81phx", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]
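The raw response is a JSON array with one coding record per comment id; the coding shown above is the record whose id matches this comment. A minimal sketch of that lookup, assuming the array format shown (the helper name `coding_for` is illustrative, not part of the tool):

```python
import json

# A shortened copy of the raw LLM response: a JSON array of coding records.
raw = '''[
  {"id": "rdc_gd9ae7h", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_gd7gb4h", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

def coding_for(comment_id, raw_response):
    """Return the coding record matching comment_id, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

record = coding_for("rdc_gd7gb4h", raw)
print(record["responsibility"], record["emotion"])  # ai_itself fear
```

Records for ids not present in the batch simply return `None`, which is how a missing or dropped coding would surface.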