Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Your title is misleading. I read the paper, and it does not prove ChatGPT explicitly told him to ingest sodium bromide as health advice. The authors did not have the original chat log, so they could not verify the exact prompt or response. What they did show is that ChatGPT could produce an unsafe, decontextualized substitution-style answer, likely referring to non-dietary uses such as cleaning.
youtube · AI Harm Incident · 2026-03-11T23:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzcvanGKFt0HNC3hkF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgypD49SghL74wSve_R4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwHcTQnH6lu-KlzcO94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyF2EXlXM3zuNDLKA14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwz7FXSviLUsilfhxh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz0DGM3zG759EyiqOF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx52q7RdRR9WSzQtR14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzZJPI7I2Xzz-gWcU14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxNwWBXRB-zLQ6NTuR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwSwtk2gWpVgWHiRzd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
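The raw response above is a JSON array of per-comment codes, keyed by comment ID. A minimal sketch of how such a batch could be parsed and indexed for lookup — the IDs and field names are taken from the dump above, while the surrounding parsing logic is an assumption, not the app's actual implementation:

```python
import json

# A two-item excerpt of the raw LLM response shown above
# (full batch contains ten coded comments).
raw = '''
[
  {"id": "ytc_UgzcvanGKFt0HNC3hkF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyF2EXlXM3zuNDLKA14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
'''

# Index the batch by comment ID so one comment's codes can be looked up.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# The coded dimensions for the comment displayed on this page.
row = codes["ytc_UgyF2EXlXM3zuNDLKA14AaABAg"]
print(row["responsibility"], row["emotion"])  # -> none indifference
```

Note that the comment shown on this page matches the fourth entry in the batch (responsibility "none", emotion "indifference"), which is what the Coding Result table records.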