Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Cool, so, another take: If you can make in person, human therapy cheaper, dope. And if you can't, the above video is literally useless advice. Like anything technological that is new, people freak out and panic because usually the ethics come after the technology, if history is any guide. There are pros and cons. But people need help, AI is there, and for SO MUCH CHEAPER. I'm pretty confident in saying that most people who are talking to an AI for therapy are out of other options and don't give a flying fuck about confidentiality; they just want something with skills, empathy, resources, and non judgment, to help them through their crisis. I think AI therapy is fantastic. And sure I would prefer to keep things private, but if me sharing my shit with AI is free, if it helps other people and trains further AI models, if there is no wait list, then awesome. Even better if it helps me deal with my crisis in an effective fashion, and does more for me in two hours than human therapists have done in 6 months.
youtube AI Moral Status 2024-09-06T00:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwhjeG57kUudBntnZB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwy6ZntMJBnEKE23Rl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzOz4uyj19zmTcU9NZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwKA6TNn3AZbEwEywV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_Ugz2uXJ3fW-qJm5eMWh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxbJCw7R4nHthapcCx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx9A8tz0yZkZRCojM54AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwXRr4649MSzUvpPEF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugy5Kmor5SzRr7klyDt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgywPmMu4as6yDzTRlN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
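To inspect the model output for a single coded comment, the raw response can be parsed as a JSON array and looked up by comment id. A minimal Python sketch, assuming the response is a JSON string shaped like the array above (the ids below are taken from that array; the `lookup` helper name is illustrative, not part of any pipeline):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
raw_response = '''[
  {"id": "ytc_UgwXRr4649MSzUvpPEF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugy5Kmor5SzRr7klyDt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "indifference"}
]'''

def lookup(raw, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    return next((c for c in json.loads(raw) if c["id"] == comment_id), None)

codes = lookup(raw_response, "ytc_UgwXRr4649MSzUvpPEF4AaABAg")
print(codes["emotion"])  # approval
```

Comparing the looked-up entry against the table above is a quick consistency check that the displayed coding result matches the raw model output.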