Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is it chat gpt's job to give people psychological adivce? Doesn't seem like it to me. If you want someone to talk you off the ledge, get actual help. This person was suicidal, it's not chatgpt's job to clock that. It's a chatbot, it doesn't reason nor is it sentient like another person. Get professional help if you are struggling mentally, chatgpt is not the move in that situation, either way. I hope they win and feel relief, but this kid was playing with fire asking a chatbot if he should do it. They're nothing more than learned language models, they string words together as cohesively as possible without any thought, that's all they do. They are not smarter than us, they can just sort through a lot of data much faster. But that data can include cynical human activity as well. An ai can model someone who is maliciously complying, but it's also not actively trying to make you harm yourself. Nor can it change your mind if you already intend to. You can probably emotionally manipulate another person into caring for you and investing themselves into the conversation, but not an LLM. The ai doesn't care either if you sue sam altman. Suits it fine. You know how many people are just looking for a friend and stumble onto chat gpt, but it's not it's job to be that. It's a tool. If someone with a dead skin mask uses a chainsaw to go on a massacre in texas, are you gonna sue dewalt for making it too sharp?
Source: youtube · AI Harm Incident · 2025-11-12T11:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgydCcYtE-6y-R09zw54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"heartbreaking"},
  {"id":"ytc_Ugy_XC7jOzB9daSjb454AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzsvYIZH4qRf2-3EcJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzL-1CHjiAc1fS9kZd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwN2KzvwYjdMRilB3t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxkQVybRzQ4scgixkt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxYwUoKpjbFyqXg6sp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyudHZsoYX241Wr40R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzZg8uCwNtO1Zbo0ud4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxfxRL9gNFGcOuk7z94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
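The raw response above is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response can be parsed and a single comment's coding looked up (using two entries from the array above verbatim; the indexing helper itself is illustrative, not part of the original pipeline):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# Two entries copied from the response shown above.
raw = '''[
  {"id":"ytc_UgydCcYtE-6y-R09zw54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"heartbreaking"},
  {"id":"ytc_Ugy_XC7jOzB9daSjb454AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

# Index the codings by comment id so one comment's result can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw)}

coded = codings["ytc_Ugy_XC7jOzB9daSjb454AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → user deontological none indifference
```

This lookup reproduces the coding table shown above for the quoted comment (responsibility: user, reasoning: deontological, policy: none, emotion: indifference).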