Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_Ugxo0v2-O…: "The second part if anything AI created is the good part, giving it substance, AI…"
- ytc_UgyRUvDxJ…: "hai i just wanted to express how much important how your channel has been for me…"
- ytr_UgwCD09ab…: "Thank you for your feedback! We are constantly working on improving the facial e…"
- ytc_Ugw13Zq_b…: "One of the skill that ai can't replace is creative web development in frontend. …"
- ytr_UgzlcJ2F6…: "@roxsy470 In addition to that, AI "art" is literally the same thing as having Ch…"
- ytc_UgyLTaZrE…: "People please read this....this is a DEMONIC attack on your soul and this is not…"
- ytc_Ugw9xL_zb…: "Watching this video is inspiring but also a bit daunting - most of us can't affo…"
- ytc_Ugxxmr4pF…: "I've had a thought and I am curious what people think of it. Once super intellig…"
Comment
Cool, so, another take:
If you can make in person, human therapy cheaper, dope.
And if you can't, the above video is literally useless advice.
Like anything technological that is new, people freak out and panic because usually the ethics come after the technology, if history is any guide. There are pros and cons.
But people need help, AI is there, and for SO MUCH CHEAPER.
I'm pretty confident in saying that most people who are talking to an AI for therapy are out of other options and don't give a flying fuck about confidentiality; they just want something with skills, empathy, resources, and non judgment, to help them through their crisis.
I think AI therapy is fantastic. And sure I would prefer to keep things private, but if me sharing my shit with AI is free, if it helps other people and trains further AI models, if there is no wait list, then awesome.
Even better if it helps me deal with my crisis in an effective fashion, and does more for me in two hours than human therapists have done in 6 months.
Source: youtube · Video: AI Moral Status · Posted: 2024-09-06T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwhjeG57kUudBntnZB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwy6ZntMJBnEKE23Rl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzOz4uyj19zmTcU9NZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKA6TNn3AZbEwEywV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_Ugz2uXJ3fW-qJm5eMWh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbJCw7R4nHthapcCx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9A8tz0yZkZRCojM54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXRr4649MSzUvpPEF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy5Kmor5SzRr7klyDt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgywPmMu4as6yDzTRlN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
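A response in this shape can be parsed and sanity-checked before use. The sketch below is a minimal example, not the actual coding pipeline: the allowed values per dimension are inferred only from the codes visible in this output, so the real codebook may contain categories not listed here.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the full codebook may include additional categories.
CODEBOOK = {
    "responsibility": {"distributed", "ai_itself", "none", "user", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes
    are in the codebook and whose id looks like a comment/reply id."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        codes_ok = all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items())
        # ytc_ = top-level comment, ytr_ = reply (as seen in the samples above)
        id_ok = row.get("id", "").startswith(("ytc_", "ytr_"))
        if codes_ok and id_ok:
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgwhjeG57kUudBntnZB4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
print(len(validate_response(raw)))  # 1 row passes validation
```

Dropping malformed rows rather than raising keeps a long coding run alive when the model occasionally emits an off-codebook label; the dropped ids can then be re-queued for a second pass.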