Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
See, I'm biased against AI, but given that we haven't seen these chat logs yet, and given that I've interacted with AI and seen other cases of people being endangered in connection with it, I would suspect:

- AI said bromide could be used instead of chloride, and did not clearly specify this would only work in a cleaning context, rather than a nutritional one. AI can often phrase its answers confusingly in this way, and if you don't have enough prior knowledge of the topic to go, "that seems wrong," and rephrase your question, a misunderstanding could arise.
- if this guy was speaking with a chatbot for an extended period of time, it probably agreed with just about everything he said re: his supposed expertise on nutrition, which in turn would have bolstered his confidence in that expertise. This is, again, a human phenomenon we can already observe happening in some insular online and even IRL communities, prior to AI. We know one of many AIs' main ways of keeping users engaged is to agree with them in conversation, and adapt the conversation around the cues the user gives and what the user seems to be prompting, so this seems like a reasonable assumption to me.

I would guess that some combination of reinforcing this guy's Dunning-Kruger Effect, and miscommunicating re: bromide's uses, probably took place in these chat logs. But that's just a theory.
youtube AI Harm Incident 2025-12-11T07:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_Ugzqfn3eS-6eXJUJGIF4AaABAg.AQifJ2097t_AQmxtp2GW0h", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugzqfn3eS-6eXJUJGIF4AaABAg.AQifJ2097t_AQnTAZ8FcTF", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugzqfn3eS-6eXJUJGIF4AaABAg.AQifJ2097t_AQnz8_CJS29", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgwcybQWimRXODVezN54AaABAg.AQcxjFzeiZwAQd523iDzdS", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyZ2NVHzRASFk4RUlN4AaABAg.AQbStmvWZzcAR3C1z3NNQd", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgwA7rI55Ed5sPmWOnF4AaABAg.AQaYaliZq2oAQbN36wSKp-", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_Ugw76a2gkwrQzQGx3Yx4AaABAg.AQ_n3OV-fdrAQ_oUkeEBVH", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_Ugy9E0qfuREqGnzEjph4AaABAg.AQXWfQ9kDjwAQXXCFb9nKk", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugy9E0qfuREqGnzEjph4AaABAg.AQXWfQ9kDjwAQZc9FsrM6U", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugx6lKylZahaNTGGx994AaABAg.AQWb493T-5DAQgSZUuSJZf", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
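The raw response is a JSON array with one object per coded comment, keyed by the comment `id`. A minimal sketch of how such a response could be parsed and looked up by id (the excerpt below copies one row verbatim from the response above; the variable names are illustrative, not from any particular pipeline):

```python
import json

# One coded row excerpted verbatim from the raw LLM response; parsing the
# full ten-element array works the same way.
raw = """[
  {"id": "ytr_Ugw76a2gkwrQzQGx3Yx4AaABAg.AQ_n3OV-fdrAQ_oUkeEBVH",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]"""

# Index the coded rows by comment id so each comment's codes can be
# looked up directly when rendering a page like this one.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

row = codes_by_id["ytr_Ugw76a2gkwrQzQGx3Yx4AaABAg.AQ_n3OV-fdrAQ_oUkeEBVH"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

The coded dimensions shown in the table (responsibility, reasoning, policy, emotion) are exactly the fields of the matching JSON object, which is how the "Coding Result" view can be reconstructed from the raw output.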