Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I strongly disagree with your conclusion that this was not an AI problem. AI does not understand your motivation when you ask. AI does not properly understand whether the question you ask could have two completely different contexts with two completely different answers. And worst of all, AI wants to tell you what you want to hear. So when you asked (before the hand-coded safeguards were put in place) "can I replace chloride with bromide", it wants to say yes. So it digs up examples from its training data where you can replace it, and omits conflicting information. I see the same effects in my area of expertise. If you ask a question that has no satisfying answer, ChatGPT will still answer "Sure you can do that, I'll show you how", and then it hallucinates a solution that looks right but does not work. This is especially bad if you are on your fourth or fifth follow-up question.
Source: YouTube · AI Harm Incident · 2025-11-25T06:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility: ai_itself
Reasoning:      consequentialist
Policy:         unclear
Emotion:        unclear
Coded at:       2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugzf7NmOLhm2tVkK6ed4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugw4ZlM6dc3aYarFbFh4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyaXW-dZhx1sWFwbtV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_Ugyp6IWnXQ8C4Cfvzxp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzRUHUAWGjGciAoLmh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzAfQzn9xFZAwWQu9B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzH_C6cj0c7lz9RwSh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxEmUVBCSX7Zxm_GGV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxkN4Q32zWNT0Bx7wJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyYrAq-42u-RZvREOJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
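As a sketch of how a raw response like the one above might be checked before use, the following Python snippet parses the JSON array and keeps only records whose codes fall within the codebook. The allowed values per dimension are inferred from the codes that actually appear in this record, not from any official coding scheme, and `validate_codes` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the codes visible in this record
# (assumption: the real codebook may contain additional categories).
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "disapproval", "fear", "mixed", "indifference", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and drop records with out-of-codebook values."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]

if __name__ == "__main__":
    raw = ('[{"id":"ytc_Ugzf7NmOLhm2tVkK6ed4AaABAg","responsibility":"ai_itself",'
           '"reasoning":"unclear","policy":"unclear","emotion":"unclear"}]')
    print(len(validate_codes(raw)))  # 1
```

Filtering rather than raising keeps a batch run alive when the model emits a stray label; a stricter pipeline could instead log or re-prompt on each rejected record.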