Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When ChatGPT detects harmful conversations deep into a rabbit hole it just needs to refuse to continue the conversation. They don’t need to figure out how to get better at talking their way out. Just say, “This conversation is out of my ***league***, *bro*, like for **real**. Plz call this number to the suicide hotline to speak with a professional…”
Source: youtube · AI Harm Incident · 2025-11-09T12:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugxb-RA2uyqpjUHlj7l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzmDeTmiMSv5NVTo114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQArCdn02WKeWooUN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzSqG7fWR4t0GXDEVh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyKJRvc0X5VDJ9PMip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyN711Oh7jQ7_FpiT14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxqloSldreAREZhZQB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgxwclvXgZvpjZiHg3F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMh4w1NFab958E4vd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwVmnDfiiQG7oaJPk14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
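A raw response like the one above can be parsed and validated before its values are written to the coding table. A minimal sketch in Python; the field names come from the JSON above, but the allowed-value sets per dimension are assumptions inferred from the codings visible in this export, and the real codebook may differ:

```python
import json

# Allowed values per dimension -- ASSUMED sets, inferred from the codings
# visible in this export; the actual codebook may define more values.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate each coded comment."""
    items = json.loads(raw)
    for item in items:
        # Comment ids in this export all carry the "ytc_" prefix.
        if not item.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {item.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{item['id']}: bad {dim} value {item.get(dim)!r}")
    return items

# Example: the coding that matches the table above.
raw = ('[{"id":"ytc_UgxMh4w1NFab958E4vd4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings[0]["emotion"])  # approval
```

Validating against a closed set of labels catches the most common failure mode of LLM coders: a response that is syntactically valid JSON but uses a label outside the codebook.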