Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think these AI “companions” should have a mechanism where any mention of death/suicide/harming others is flagged and reviewed by an actual person. It’s just the same as in therapy: if you threaten your life or someone else’s they’re obligated to inform the authorities.
Source: youtube · Incident: AI Harm · Posted: 2025-08-29T12:1… · ♥ 2
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugxe23ZcJtEtVmUJsil4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy8S9e2V7PAh0g3HNd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_Ugy5liWuuckTR_JN-mt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy7hzkUfnGNvto-Js14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzzkyoTAjO57OWwGUJ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "sadness"},
  {"id": "ytc_UgxZ7aEHDI3VD9tuHT94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzKoMbhdKu68v74_Yp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwKYd7D4rwFcd6SfyN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxBVGgdok4vV9tzGhB4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxztu5FrCq8TkjHcYd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
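To cross-check a Coding Result against the raw model output, the batch response can be parsed as JSON and filtered by comment ID. A minimal sketch, assuming the response is a well-formed JSON array as shown above; the helper name `find_coding` is illustrative, and the records are abbreviated from the response displayed here:

```python
import json

# Two records abbreviated from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_Ugxe23ZcJtEtVmUJsil4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxztu5FrCq8TkjHcYd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

def find_coding(response_text: str, comment_id: str):
    """Parse a raw batch response and return the record for one comment, or None."""
    records = json.loads(response_text)
    return next((r for r in records if r["id"] == comment_id), None)

# Look up the record that the Coding Result panel should mirror.
record = find_coding(raw_response, "ytc_Ugxztu5FrCq8TkjHcYd4AaABAg")
print(record["emotion"])  # prints "fear", matching the Emotion row above
```

A lookup like this makes it easy to spot mismatches between a displayed Coding Result and the record the model actually returned for that comment ID.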