Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There really should be safeguard filters where if the person using the AI says a number of certain phrases, the AI says it’s unable to meet the users requirements in this conversation. And maybe even offer other ways to cope with how they’re feeling (lonely), such as therapist recommendations near them.
Source: YouTube · AI Harm Incident · 2025-11-08T05:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwhuSzkuOeqTPkemN14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwXIVf1bLRG77MwnUx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJZCjEp0ZPiz0e9fx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyEMfx_-Avxy4wly7B4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyDJDYsokgoJQUN19h4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwlxsHdVAtb7wx6raV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw-jm2agfza8FWdHgh4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzuZVrovRkSCdqaDzN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugzh_RC7nUhRUTHU44p4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw625q1beE1Z5UPLvJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]
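A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is one minimal way to do that; the allowed label sets are an assumption inferred from the values visible in this sample, not the full codebook, and `validate_response` is a hypothetical helper name.

```python
import json

# Labels observed in the sample response above. The real codebook may
# define more values -- these sets are an assumption, not the schema.
ALLOWED = {
    "responsibility": {"company", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "resignation", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus one value per coding dimension.
        missing = ({"id"} | ALLOWED.keys()) - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

# One record from the sample response, as a parse/validate demo.
raw = ('[{"id":"ytc_UgzuZVrovRkSCdqaDzN4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
records = validate_response(raw)
print(records[0]["policy"])  # -> regulate
```

A record with an unknown label (or a missing dimension) raises `ValueError` instead of silently entering the dataset, which makes malformed model output easy to spot in bulk runs.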