Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Speaking *as* someone who works as a therapist

1.) I have 0 concern over AI putting me out of a job. My waiting list is literally months out right now. Like...I mark online that I'm accepting new clients, and I have to say to scheduling "I only want 2 new clients a week till I reach X numbers" cuz if I don't it'll be 10+ new clients in a week.

2.) I have a lot of concerns about AI improperly using therapies where improper application can be harmful for clients. Cognitive reframing in narrative therapies can be *really* harmful when improperly applied. At the same time though, there are some modalities (like DBT, or Seeking Safety) where a major part of it is *literally* run through a manual, and I can 100% envision higher end AI implementing the more rote modalities, and frankly "no therapy" is probably worse than "low level distress tolerance and mindfulness training through AI designed highly modified DBT". And when there's already studies being done on evidence-based apps to measure their efficacy, I could 100% see "apps that are already proven to work + AI to make those apps feel more personal than a pre-recorded video" having a positive effect.

Hell, with lifeline phones being a thing (Obamaphones), and with the knowledge that all the free lifeline phones are 100% of the time the cheapest, crappiest phones imaginable, but are also the phones that we can absolutely guarantee will get into the hands of anyone who has someone to help them with the paperwork, I'd be comfortable arguing that an AI evidence-based therapy app that only uses preconstrained modalities (the ones that we already know won't mess someone up if applied via app) but also is designed to work on the super-cheap tracfones that are in the hands of the people who could *most* use someone to talk them through a crisis at 3am would be an extremely good thing.
reddit · AI Bias · 1682950178.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_jif8yck","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jifjpl7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_jifle8n","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jielmd1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jie0udf","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
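The coding result above is consistent with a per-dimension majority vote over the five raw LLM responses (each winning value appears in 3 of 5 runs). The exact aggregation rule used by the tool is not stated here, so the following is a minimal sketch assuming simple plurality voting per dimension:

```python
import json
from collections import Counter

# The five raw LLM responses shown above, verbatim.
raw = json.loads("""[
  {"id":"rdc_jif8yck","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jifjpl7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_jifle8n","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jielmd1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jie0udf","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]""")

DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]

def majority_code(responses, dimensions=DIMENSIONS):
    """Pick the most frequent value per dimension across all runs.

    Note: Counter.most_common breaks ties arbitrarily; a real pipeline
    would need an explicit tie-breaking rule.
    """
    return {
        dim: Counter(r[dim] for r in responses).most_common(1)[0][0]
        for dim in dimensions
    }

coded = majority_code(raw)
print(coded)
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'fear'}
```

This reproduces the table exactly, which supports (but does not prove) that the tool aggregates by majority vote.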