Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The part that is left out here is that the AI was allowed to answer each question 4 times; each attempted answer was reviewed until it either made 4 attempts or got the correct answer as judged by an expert. The doctors were not offered this kind of feedback. Then, as far as I can understand the study, they just ignore how many attempts it took, so there isn't any data on, for example, how many attempts each question took on average to get right. They keep referring to the AI's reasoning ability, but I will remind everyone that ChatGPT cannot reason; it is not designed to. It can answer with statistically probable words based on prompt input, so the test is absolutely wild on that front. I don't know why people keep forgetting that. At least if a real doctor gets it wrong, they're not likely to recommend something actually harmful or imagined. You have no guarantee of that with an AI.
YouTube · AI Harm Incident · 2024-06-04T15:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyH-fUceFpPicKcFw54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgyMCa5la77Fu22vosN4AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgyOWPTzXrNnx7gPFH54AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgzKcnYR6qy59PpM9Ml4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzzzMQ74_ESO7fWXPR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxRvgfxuhlLHlG8sxl4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw_2JDVSVlHIrq7MIZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgzaKGG8F2BxNoojHD94AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgzRrO_lwhAisbGYNCV4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugy9TbF9q5PqKHpgGk94AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "outrage"}
]
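The coding result shown above corresponds to the first record in the raw response (id ytc_UgyH-fUceFpPicKcFw54AaABAg). A minimal sketch of how such a raw response could be parsed and one comment's codes looked up by id, assuming the model output is a well-formed JSON array held in a string (the abbreviated `raw` literal below is illustrative, not the full response):

```python
import json

# Abbreviated raw LLM response; in practice this would be the full
# JSON array returned by the model, as shown above.
raw = (
    '[{"id":"ytc_UgyH-fUceFpPicKcFw54AaABAg",'
    '"responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

# Parse the array and index records by comment id for O(1) lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Pull the coded dimensions for the comment inspected above.
coded = by_id["ytc_UgyH-fUceFpPicKcFw54AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {coded[dim]}")
```

Indexing by id rather than relying on array order guards against the model returning records in a different order than the comments were submitted.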