Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It’s fundamentally a bad AI, however you look at it. I’ll admit my bias, I’m a therapist. That said, I deal in human connection. An interface that perpetually agrees with you, that doesn’t challenge maladaptive behaviour and that can’t discern the nuance of body language or the human voice means that it’s operating with a fraction of the information being communicated and has none of the ethical considerations to make. The minute someone skirts the edges of suicide, these LLMs throw out a standard response about seeking medical attention and taking no responsibility for them. They may even say “I’m no replacement for a human therapist.” No shit. Liability reduction comes before compassion or accountability.
Platform: youtube · Video: AI Moral Status · Posted: 2026-03-17T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
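The coded dimensions above take values from a fixed codebook. As a minimal sketch, the check below validates one coded row against the value sets that actually appear in the responses shown in this document; the real codebook may define more values, and the `ALLOWED` dictionary and `invalid_fields` helper are hypothetical names introduced here for illustration.

```python
# Hypothetical validation sketch. ALLOWED is inferred from the values visible
# in this document's raw responses, not from the project's actual codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "resignation", "disapproval", "approval",
                "indifference", "mixed", "fear", "unclear"},
}

def invalid_fields(row: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the codebook."""
    return [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]

# The coding result shown above for this comment:
coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "liability", "emotion": "unclear"}
print(invalid_fields(coded))  # []
```

A row that fails this check (for example, a misspelled label from the model) would surface before it contaminates downstream counts.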
Raw LLM Response
```json
[
  {"id":"ytc_UgxXNacS3uTZK90x23t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwVAt983N6sqm_wShB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwWForz11C0k6P9mm14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"disapproval"},
  {"id":"ytc_Ugz00HhWsHNVndumV-x4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnGd1nLfQWVawnsnx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwqC_ruodG9aHQIlwd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz9R2meI3UPTMkqzxR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw96naGAUmZyuwAgo14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxuNnf2BfjErA0fz4d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyZNdti22P-dhcVY3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
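Since the model returns one JSON array per batch, recovering a single comment's coding is a parse-and-filter. The sketch below assumes the raw response is a well-formed JSON string as shown above; the `lookup` helper is a hypothetical name, and the single-row `raw` string is an excerpt of the batch for brevity.

```python
import json

# Hypothetical sketch: find the coding row for one comment ID in a raw
# LLM response batch. `raw` reproduces one row from the response above.
raw = '''[
  {"id":"ytc_UgwWForz11C0k6P9mm14AaABAg",
   "responsibility":"developer","reasoning":"deontological",
   "policy":"liability","emotion":"disapproval"}
]'''

def lookup(raw_json: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    return next((row for row in json.loads(raw_json) if row["id"] == comment_id), None)

row = lookup(raw, "ytc_UgwWForz11C0k6P9mm14AaABAg")
print(row["policy"])  # liability
```

Returning `None` for a missing ID (rather than raising) makes it easy to flag comments the model silently dropped from a batch.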