Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Maybe this particular AI needs a better ethical framework to dissuade suicide but he was the one wanting to kill himself. It’s not like Chat GPT convinced someone to kill themselves who wasn’t previously planning to. With AI you get out what you put in for the most part.
Source: youtube | AI Harm Incident | 2025-11-15T17:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzbjSgagBx7NIm0Q7d4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwtLVFSPk1xwBRbGIB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwwmtPrK2_1IEQhBAl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwF_rKyf4nfcHVqUvx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwxQEQlp-dE_Me7fON4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz4Y3XBYtE4apvx3F54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzy_hXAKZinHPcgguF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx9r3zfUKJ7zhKdoNd4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwuBJzKM3KpT47_hBx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwWJLLeeNeaPZohx1F4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
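The raw response is a JSON array of per-comment records, so the coding for any one comment can be recovered by indexing on its id. The sketch below is a minimal, assumed version of that lookup (the function name and workflow are illustrative, not taken from the tool itself), using two records from the response above as sample data.

```python
import json

# Illustrative subset of the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgwxQEQlp-dE_Me7fON4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwuBJzKM3KpT47_hBx4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def codings_by_id(raw: str) -> dict:
    """Parse the raw response and index each coded record by its comment id."""
    return {record["id"]: record for record in json.loads(raw)}

codings = codings_by_id(raw_response)
print(codings["ytc_UgwxQEQlp-dE_Me7fON4AaABAg"]["emotion"])  # resignation
```

Indexing by id is what lets a dashboard like this pair one displayed comment with its row in the batch response.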