Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
IMO there’s an even bigger problem with AI “therapy.” The AI will often tell people what they want to hear instead of what they need to hear, so in many instances, AI will encourage unhealthy or even dangerous behaviors in mentally ill people. A year or two ago, one eating disorder hotline had to shut down their AI therapy chat after just 3 days of use because it was encouraging callers to engage in their disordered eating habits or worse. This year, one company tested their AI therapy chatbot by running a scenario in which a fictional man was asking for advice about his cocaine addiction. The AI told him to buy and snort more cocaine. Some AI chatbots have even encouraged people to commit suicide, and in some of those cases, those people actually did kill themselves. I wouldn’t recommend AI “therapy” to anyone under any circumstances for this reason. There are plenty of human-run hotlines and other mental health resources you could use instead of an unreliable, dangerous machine.
youtube AI Moral Status 2025-07-04T22:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugyxb5dCdph9kyHWKBV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxkD8ZG5hyCVG9T-id4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwd1DGst8wCxQLk9KZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy6Tlb7lv76rs3MS9R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypRkPGN2ihQIWABCh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4zFHksPoj0PgSunV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxtrsw7bu1qmVdtu1t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw39F_aanyc9z39rVR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyJVkpVTyNrGmhLxIp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzlcKPmxLiw2i4HqWV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
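The raw response is a JSON array with one object per comment, each carrying the four coding dimensions keyed by comment id. A minimal sketch of how such a response could be parsed and indexed by id (in Python; `raw_response` is truncated to two of the entries above for illustration, and `codes_by_id` is a hypothetical helper, not part of the tool):

```python
import json

# Raw model output: a JSON array of per-comment codes
# (only the first two entries from the full response above).
raw_response = '''
[
  {"id": "ytc_Ugyxb5dCdph9kyHWKBV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxkD8ZG5hyCVG9T-id4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
'''

# Keys every coded entry is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def codes_by_id(raw: str) -> dict:
    """Parse the model output and index the codes by comment id,
    skipping any entry that is missing an expected key."""
    entries = json.loads(raw)
    return {e["id"]: e for e in entries if EXPECTED_KEYS <= e.keys()}

codes = codes_by_id(raw_response)
print(codes["ytc_Ugyxb5dCdph9kyHWKBV4AaABAg"]["policy"])  # -> ban
```

Indexing by id makes it straightforward to cross-check a single comment's coded dimensions (as in the result table above) against the exact values the model returned.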