Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Mostly agree with this, BUT, I don't judge those who do it. Therapy is nearly unaffordable, mainly because mental health issues and other conditions aren't seen as actually debilitating. So, instead of blaming people for using AI for that (not the case of this video, but it happens a lot), why can't we push our government to make therapy more accessible? This is especially important if you're a therapist yourself. Don't be mad about being replaced if you didn't move a finger to fight for those who need but can't afford your services.
youtube
AI Moral Status
2025-11-03T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugwcp3HffmtvlqIECsF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwYJjunfWuWn2k2-wN4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxaVh0OClfRTsk3Pux4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxap2JJhOqHPZVQrTB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzY-outV6pazl5Ynpd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSEBuE2B-e9rSxylZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzCgcJdB_KKtRTke3h4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyLKx6tIymmC4ZdGfV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyml6uUQ0PNERRkGzd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```
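The raw response is a JSON array with one coding object per comment, each carrying the four schema dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of looking up a single coding by comment ID, assuming Python and a hypothetical `raw_response` variable holding the model output:

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codings,
# abbreviated here to a single entry from the payload above.
raw_response = """
[
  {"id": "ytc_UgwYJjunfWuWn2k2-wN4AaABAg",
   "responsibility": "government", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the codings by comment ID so any coded comment can be
# inspected with a constant-time dictionary lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwYJjunfWuWn2k2-wN4AaABAg"]
print(coding["policy"])   # → regulate
```

The dictionary index mirrors the "look up by comment ID" workflow: parse once, then fetch any comment's coded dimensions by its ID.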