Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> For example, he said, AI could help identify all the resources a nearby hospital has — such as drug availability, blood supply and the availability of medical staff — to aid in decision-making. “That wouldn’t fit within the brain of a single human decision-maker,” Turek added. “Computer algorithms may find solutions that humans can’t.”
>
> That is NOT a matter of using AI. It is a matter of ACCESSING THE DATA. Very simple, almost excel level algorithm could help do that. As long as I had access to data of nearby hospital. Without such access no AI will be able to aid the decision. And for life and death decision.
>
> My rational brain would say - it is obvious, we need to use AI, as it will make OPTIMAL DECISION. Rational, optimal, pragmatic, calculated... And here is a problem. People are not rational, optimal, pragmatic and calculated. We want to believe we are RIGHT. That we made a RIGHT decision. Morally right. If a medical staff is marking wounded soldiers foreheads (rescue/do not rescue), we are assuming they are making RIGHT decision. Same with delegating resources to maternity ward vs elderly care. What ever we choose we want to believe it was a RIGHT THING TO DO.
>
> Ok, so how do we define right? What would be those fuzzy "success conditions" that our AI system will try to reach? Most life-years saved? Then one 5 year old is worth more than two 40 years old, and more than six 60 years olds. And pregnant woman with twins is worth 3 time any of us... What other metric?
>
> I have idea. We can feed our AI with data about all decision made by humans. That will do. We will get a superficial, probably racist, motivated by money AI... Perfect.
reddit AI Responsibility 1648753085.0 ♥ 1
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_i2vtqv7", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_i2vtte0", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_i2vzzqr", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_i30zpb4", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_gso3cp7", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
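The raw response is a JSON array of per-comment codes; the coding result shown above corresponds to the entry whose `id` matches this comment (`rdc_i2vtqv7`). A minimal sketch of how such an array can be parsed back into per-comment lookups is below; the parsing approach is an illustration, not the coding tool's actual implementation.

```python
import json

# Excerpt of a raw model response in the format shown above:
# a JSON array with one object of codes per comment id.
raw = """[
  {"id": "rdc_i2vtqv7", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_i2vtte0", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "outrage"}
]"""

# Index the array by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# The entry for this page's comment yields the four coded dimensions.
code = codes["rdc_i2vtqv7"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → none unclear unclear indifference
```

In a real pipeline the model output may not be valid JSON on every call, so a production parser would typically wrap `json.loads` in error handling and validate that the expected dimension keys are present.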