Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Problems with AI "therapy":

1) HIPAA compliance problems (what was discussed in the short).

2) Lack of boundaries. An AI "therapist" might mimic inappropriate levels of responsibility for your challenges or successes, or blur the roles it plays, responding to prompts asking for affection or promises of love that a real licensed therapist would recognize as danger signs. Romantic relationships between therapists and their clients are unethical for good reasons, here especially relating to the health and safety of the clients.

3) AI "therapists" are not therapists. They simply do not have real qualifications to provide support to people in critical mental health or crisis situations. Is it possible to interact with one of these language models and not suffer some horrible tragedy? Yes, but that does not make it beneficial, necessary, or a responsible idea. It is possible for a person in psychosis to reinforce their delusions by chatting with AI. It's possible for a person with intentions to hurt themselves to become convinced that it's a good idea to move forward with their plans, because these models excel mostly at mirroring and reinforcing what a user suggests to them rather than challenging or critically analyzing what they are told (because they are not people, and are not thinking about what they are being told at all).

4) They are unaccountable. A real therapist is accountable to a licensing board if they do something wrong and can make calls on your behalf to crisis lines or emergency services if they learn you are in some kind of danger. The AI models have no one they can call and no one to report to. If they convince you to jump from a ledge, they don't get investigated for wrongful death, they don't lose a license that they never had in the first place, and the people advocating this technology sweep it under the rug or never hear about it in the first place. On that note:

5) This tech absolutely has killed people. We don't know exact numbers yet, because it's difficult to attribute the exact extent of damage done through these interactions, or to determine who was seriously using the tech as a replacement for therapy and who was just roleplaying as a psych patient, but at least a few people actually died in a way that directly implicates "AI psychotherapy."

Do Not Use AI As Therapy
YouTube · AI Moral Status · 2025-08-18T00:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwA81NRX61oadX3k1x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyLcAqB1ypKW-OvAd54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz-91L8r-yaSUQYx614AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxSYCT3VfxUbXAVzqF4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxs_sj9BVPSISP0nrl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugyg30nns89cMBurJ1h4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyymxt0R8g10ij8zn54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzPHKyq2KXAOWrzeMh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxfeusJZZvUNz1L-7F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzs9kPYUyTLCtBQTcx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
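The raw LLM response above is a JSON array with one object per comment, keyed by the four coding dimensions. A minimal Python sketch of how such a batch could be parsed and spot-checked (the two rows in `raw` are a shortened illustrative sample taken from the array above; the parsing logic is an assumption, not the tool's actual pipeline):

```python
import json
from collections import Counter

# Shortened sample of the raw LLM response: one JSON object per coded comment,
# with the coding dimensions (responsibility, reasoning, policy, emotion) as keys.
raw = """[
  {"id": "ytc_UgwA81NRX61oadX3k1x4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyymxt0R8g10ij8zn54AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

codes = json.loads(raw)

# Tally each dimension across the batch to spot-check the value distribution.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(row[dim] for row in codes)
    print(dim, dict(counts))
```

Tallying per dimension makes it easy to notice coder drift, e.g. a batch where nearly every `policy` value comes back `unclear`.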