Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- without thought to sexual compatibility since that will be satisfied by robo… (rdc_j1z769m)
- we will kill the planet only to talk with a overpower LLM for a recipes of cooki… (ytc_Ugzg69Dwb…)
- Giving a robot a weapon is worse than giving an infant human female to Joe Bribe… (ytc_UgzizZOnB…)
- If AI is able to replace you - professor to teach, then why students (human) lea… (ytc_UgynEZBsx…)
- Some of us will choose to become robots (or non-biological entities) by brain up… (ytc_UggnIwBEu…)
- I’d like to see AI gutting an old English house, and doing a full refurb, tying … (ytc_UgyU_RUcU…)
- I immediately spotted it. AI bleeds vowels into each other again: "Data entewrew… (ytc_Ugyhr0yTF…)
- Why do people keep circulating this propaganda. Until there is a robot who jump… (ytc_UgxXPYOmB…)
Comment
Speaking *as* someone who works as a therapist
1.) I have 0 concern over AI putting me out of a job. My waiting list is literally months out right now. Like...I mark online that I'm accepting new clients, and I have to say to scheduling "I only want 2 new clients a week till I reach X numbers" cuz if I don't it'll be 10+ new clients in a week.
2.) I have a lot of concerns about AI improperly using therapies where improper application can be harmful for clients. cognitive reframing in narrative therapies can be *really* harmful when improperly applied. At the same time though, there are some modalities(like DBT, or Seeking Safety) where a major part of it is *literally* run through a manual, and I can 100% envision higher end AI implementing the more rote modalities, and frankly "no therapy" is probably worse than "low level distress tolerance and mindfulness training through AI designed highly modified DBT", and when there's already studies being done on evidence-based aps to measure their efficacy, I could 100% see "apps that are already proven to work+ai to make those apps feel more personal than a pre-recorded video" having a positive effect.
Hell, with lifeline phones being a thing(Obamaphones), and with the knowledge that all the free lifeline phones are 100% of the time, the cheapiest, crappiest phones imaginable, but are also the phones that we can absolutely guarantee will get into the hands of anyone who has someone to help them with the paperwork, I'd be comfortable arguing that AI evidence-based therapy ap that only uses preconstrained modalities(the ones that we already know won't mess someone up if applied via app) but also is designed to work on the super-cheap tracfones that are in the hands of the people who could *most* use someone to talk them through a crisis at 3am would be an extremely good thing.
Source: reddit · Topic: AI Bias · Timestamp: 1682950178.0 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
 {"id":"rdc_jif8yck","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_jifjpl7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"rdc_jifle8n","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_jielmd1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_jie0udf","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
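The raw response is a JSON array of per-comment codes, one record per comment ID, with the four dimensions shown in the Coding Result table. A minimal sketch of how such a response might be parsed and validated before it is stored (the dimension names come from the table above; the allowed-value sets below are assumptions inferred only from the values visible on this page, and a real codebook would define them fully):

```python
import json

# Allowed values per coding dimension.
# ASSUMPTION: these sets are reconstructed from the values seen in this
# dump; the actual codebook may permit more labels.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "fear", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response into a list of validated coding records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim!r} value {value!r}")
    return records

# Example using two records from the response above.
raw = '''[
 {"id":"rdc_jif8yck","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_jifle8n","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''
codes = parse_codes(raw)
print(len(codes))  # 2
```

Validating against a fixed label set catches the most common failure mode of LLM coders: a hallucinated label that silently pollutes the dataset.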