Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgzHwmpV3…`: "AI is eating AI content. The web is not for humans anymore, but for AI agents 😝i…"
- `ytc_UgiFxDfuc…`: "Someone that suffers from psychosis or schizophrenia could well fail the Turing …"
- `ytc_Ugy-nDDhB…`: "I think one thing people need to understand about the AI is the fact that the AI…"
- `ytc_UgxFnOWzt…`: "All AI has shown is that humans rejoice in rehashing and recreating everything t…"
- `ytc_UgybrdNSL…`: "at this point I don't even care about ai art but the ppl defending/supporting it…"
- `ytc_UgxDHtAYD…`: "In a world of A.I., humans become less productive and less profitable. People wi…"
- `ytc_UgwWY0b58…`: "not defending AI art, just explaining the thought behind it (similar to explaini…"
- `ytc_UgxGUGmWW…`: "So 1% gonna be even richer because of AI and 99% gonna be without work. Who wou…"
Comment
I think the scariest part is that people are using it for therapy period. AI does NOT know what is healthy human communication. Nor does it know what healthy coping mechanism are. Or how dysfunctional families work. Or how to help people with mental disorders. Or how to help a suicidal person. Or a drug addict. You just can't rely on an AI (an LLM, really) to help people grow into emotionally intelligent adults rather than just reinforcing their anxieties. Like my brother has anger issues. I don't doubt that an AI would just go "yeah your family does suck and deserves you yelling at them, good point." Or an incel that's stewing in self-hatred might have their misogyny reinforced by an AI and even feel encouraged to commit violence against women. There's no regulation. There's a reason you need a license to practice therapy.
youtube · AI Moral Status · 2025-10-12T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxqkw1DGqmpo6NLA6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxtPNmLcojuGxqhI7d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgypE8CvuWUU1wwCmih4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyxKwlGvzVeq9Vl4_14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugye7IPOTNrfZQqNqQV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugzvq30kpK-f7UnMSop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyfd5mK7EttiQXeotx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugz5r7a0gXOGksN_dOR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwYX059dyZNaYQ3Cq54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1llcSXbz9T6ubBeN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
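The raw response is a flat JSON array of one object per comment, each carrying the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed and looked up by comment ID — the `index_codings` helper is an illustration, not part of the tool; the IDs and values are copied from the response above, truncated to three rows:

```python
import json
from collections import Counter

# A few rows of the raw LLM response above, verbatim.
raw = """[
 {"id":"ytc_UgypE8CvuWUU1wwCmih4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgyxKwlGvzVeq9Vl4_14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugz5r7a0gXOGksN_dOR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

# The four coding dimensions, as listed in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    skipping any row that is missing an expected dimension."""
    rows = json.loads(raw_json)
    return {
        row["id"]: {d: row[d] for d in DIMENSIONS}
        for row in rows
        if all(d in row for d in DIMENSIONS)
    }

codings = index_codings(raw)

# Look up one comment's coding by its ID.
print(codings["ytc_UgyxKwlGvzVeq9Vl4_14AaABAg"]["policy"])  # -> ban

# Tally a single dimension across the batch.
print(Counter(c["emotion"] for c in codings.values()))
```

Keying by `id` mirrors the page's own "look up by comment ID" affordance; the `all(...)` guard simply drops malformed rows rather than raising, which is usually what you want when the array comes straight from a model.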