Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As soon as AI perceives suicidal intentions, it should give the suicide hotline and then permanently disable itself from that user so that they will get help from a live person.
Source: youtube · AI Harm Incident · 2025-08-27T07:4… · ♥ 701
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzw--WFQw5FAbd2S-h4AaABAg", "responsibility": "user",      "reasoning": "virtue",          "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugx_WTSCjlFZP3ljxdt4AaABAg", "responsibility": "user",      "reasoning": "deontological",   "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwNbo7WT-ZIU6Jgp1t4AaABAg", "responsibility": "user",      "reasoning": "virtue",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwBTjFtcZQTfCTzzjl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist","policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugw9h4FK8QTSZeLbbp14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist","policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugyw6c7zOlG0J-1Ltj54AaABAg", "responsibility": "user",      "reasoning": "virtue",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugw3Y5NyotgtOa88O9J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist","policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_Ugx1jXCC5R_2B8psCrZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",   "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgxKdxWg2d1fpQgchdF4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",         "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzgvAqcj5V13QgKf054AaABAg", "responsibility": "developer", "reasoning": "virtue",          "policy": "liability", "emotion": "outrage"}
]
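The raw response above is a JSON array with one object per comment, each carrying the four coded dimensions. A minimal sketch of how a single comment's codes could be looked up by id (the `code_for` helper and the two-record sample are illustrative, not part of the actual pipeline):

```python
import json

# Sample raw LLM response: a JSON array of per-comment codes, using two
# records copied from the batch shown above.
raw = '''[
  {"id": "ytc_Ugw3Y5NyotgtOa88O9J4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx1jXCC5R_2B8psCrZ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(records, comment_id):
    """Return the four coded dimensions for one comment id, or None."""
    for rec in records:
        if rec.get("id") == comment_id:
            return {d: rec.get(d) for d in DIMENSIONS}
    return None

records = json.loads(raw)
result = code_for(records, "ytc_Ugw3Y5NyotgtOa88O9J4AaABAg")
# result matches the Coding Result table for this comment:
# developer / consequentialist / regulate / approval
```

Matching the id against the batch output is what lets the viewer show one comment's "Coding Result" next to the full raw response it came from.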