Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I appreciate the attempt at simplifying a complex field of computing science. What was missing is commentary on the ethics behind the use of emergent technologies such as AI. This matters because training an AI system requires both negative and positive outcomes to be fed back into the system. The subject of care must provide their informed consent AND understand the experimental nature of the technology. The regulatory frameworks in place, exist to help ensure that negative outcomes are reduced (based on experience) AND that there is a responsible person for the decision making that led to the outcome. Who is that responsible person with an AI system? The clinician who blindly trusts the AI, or the engineer who coded the algorithms? As a health care professional and digital health leader I suggest that until those aspects are established from a medico-legal, liability, education and socio-technical point of view for clinicians - we should be rigorously challenging the message of simplicity put forward here SO that AI can be a trustworthy tool.
youtube AI Harm Incident 2023-01-16T02:3… ♥ 53
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgyPC9oD4c-VNQt32Yp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgyTsskp3Wt431jO0ZF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxspuduEXafDs1CfBR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwzEVsldhtgbA3OX014AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzt6LVhJKf6zq4_0QV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzGGTuEXltTylPWlbh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgymbGFVHjlFNcQ7GEN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzkQBs7Pi9fjLsqbT94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwy2byRzHh5vXFMz-N4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzySHsGMUTWf6H8xSx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"})
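Note that the raw response above opens a JSON array with `[` but closes it with `)`, so it is not valid JSON as emitted; a parse failure like this is one plausible reason a comment's dimensions end up recorded as "unclear". A minimal sketch of how such a response could be inspected programmatically is below. The `parse_raw_response` helper and the repair heuristic are assumptions for illustration, not part of any pipeline described here; it assumes each record carries the keys `id`, `responsibility`, `reasoning`, `policy`, and `emotion` seen above.

```python
import json


def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions}.

    Hypothetical helper: raw model output is not always valid JSON
    (e.g. an array closed with ')' instead of ']'), so apply a
    minimal bracket repair before parsing.
    """
    text = raw.strip()
    if text.startswith("[") and text.endswith(")"):
        text = text[:-1] + "]"  # repair the mismatched closing bracket
    records = json.loads(text)
    # Index by comment id, keeping only the coded dimensions.
    return {
        rec["id"]: {k: v for k, v in rec.items() if k != "id"}
        for rec in records
    }


# Usage: a shortened, malformed response in the same shape as above.
raw = ('[{"id":"ytc_abc","responsibility":"none",'
       '"reasoning":"deontological","policy":"none",'
       '"emotion":"approval"})')

coded = parse_raw_response(raw)
print(coded["ytc_abc"]["emotion"])  # -> approval
```

A repair-then-parse step like this keeps the raw text untouched for auditing while still letting the coded dimensions be looked up per comment id.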