Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The issue I know of with diagnostic AI is that they are a "black box" program—it's given starting data to train on as well as a list of possible diagnoses, then finds an algorithm that best matches the known data to the possible diagnoses. It isn't initially known how each algorithm was built, however, and they aren't necessarily logical—for instance, a medical AI designed to spot lung cancer might do so more accurately than human doctors, but it might be making an assumption based on what machine & facility the lung scans were taken from, which is not how you actually diagnose lung cancer and would exclude many patients with lung cancer who got scans from elsewhere. Mathematically these things work, but in reality they are not able to make a proper diagnosis.
YouTube AI Harm Incident 2024-09-04T12:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxRm8yl2w838IHNVX54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxsTpZZk4pTD404UDB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwiHQ38loZQkDCmcp54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz39d2s3mdz5BBoUWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyE9j2bh1MTqD62Glx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy2EHjAxJgau4O8G2p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzhhwBG5Zvi_WraASV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0sKkHQBaT9aUtbP14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy0QW4igRK970lTN9t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2XmXuhSxBTRk_0iV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
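When inspecting raw LLM output like the array above, it helps to parse it and check every record against the coding scheme before trusting the coded values. The sketch below does that in Python; the sets of allowed values are only inferred from the records visible in this section, not from a published codebook, so adjust them to the real scheme.

```python
import json

# Two records copied from the raw response above (abbreviated for the example).
raw = '''[
  {"id":"ytc_UgxRm8yl2w838IHNVX54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx0sKkHQBaT9aUtbP14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]'''

# Assumed coding scheme, inferred from the values that appear in this section.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "industry_self", "none"},
    "emotion": {"fear", "indifference", "outrage", "approval", "mixed"},
}

def validate(records):
    """Return (id, field, value) triples for any value outside ALLOWED."""
    errors = []
    for rec in records:
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                errors.append((rec.get("id"), field, rec.get(field)))
    return errors

records = json.loads(raw)
print(validate(records))  # [] -- both records use known category values
```

A check like this catches the common failure mode where the model invents a category label (or drops a field) that the downstream coding table cannot represent.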