Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing that is usually overlooked is that while many things are technically possible, they are not feasible for non-technical reasons. Take, for instance, autonomous driving. While a fully self-driving car is not impossible at some point in the future, that does not mean it will happen, for a very simple reason: if a car drives itself, the owner bears no responsibility for what the car does, so the cost of insurance will need to shift to the car makers, who are most probably not willing to take that expense and responsibility on themselves. Stopping at the assisted-driving level is a better choice for the car makers' pockets. One fear might be the (illusion of the) loss of responsibility: if AI algorithms and models are black boxes, it might be possible to claim that errors are not "people's fault" but the "machine's fault". But that is a very weak point, and legislators might require a "person responsible", which can make some industries turn to less "smart" but more controllable systems. Modern legal systems require personal responsibility, and it cannot be assigned to inanimate objects as in the ancient Greek Buphonia (ref. https://en.wikipedia.org/wiki/Buphonia).
reddit AI Moral Status 1597041790.0 ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       unclear
Policy          liability
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_g0yhhj2", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_g0yyw2n", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_g0z1d36", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_g13z1i0", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_g0z3gma", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
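To illustrate how a batch response like the one above can be turned into per-comment coding rows, here is a minimal sketch. The field names are taken from the JSON shown; the helper `index_codings` is hypothetical and not part of the actual coding pipeline.

```python
import json

# Raw LLM response as shown above: a JSON array with one coding per comment id.
raw = """[
  {"id":"rdc_g0yhhj2","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"rdc_g0yyw2n","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"rdc_g0z1d36","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_g13z1i0","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"rdc_g0z3gma","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

# The four coding dimensions that appear in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(payload: str) -> dict:
    """Map each comment id to its coded dimension values.

    Missing or unparseable dimensions fall back to "unclear",
    matching the fallback value seen in the coding table.
    """
    return {
        row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
        for row in json.loads(payload)
    }

codings = index_codings(raw)
print(codings["rdc_g0yyw2n"]["responsibility"])  # prints "user"
```

This lookup makes it easy to join a coded comment back to its raw LLM output by id when auditing individual codings.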