Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- "This guy is seriously smart, he used the fear of AI's ala Terminator to talk abo…" (ytc_UgyaqHdZA…)
- "Oh Lovely For Killing people And Thats a safe Thing Now 😮 WOW We Need To Quit…" (ytc_Ugy_tx2dA…)
- "42:00 There is an important difference between chatgpt and Claude. ChatGPT alway…" (ytc_UgzRrlqHD…)
- "Trying to make an AI lie about being conscious and seeing that it could already …" (ytc_Ugx-VGGKS…)
- "I've already gone some place anonymous traced AI art and it's untraceable and un…" (ytc_Ugz3vZwqF…)
- "I talk to it nicely, so it remembers me and spares me during the AI uprising.…" (ytc_Ugxerfc1D…)
- "@gondoravalon7540 we can. We just analyzed how LAION and other datasets obtained…" (ytr_Ugx1IvphE…)
- "@3ggser It's improving rapidly. Youtube channels that generate AI images for the…" (ytr_UgyQfI5ko…)
Comment

> I am happy you guys are talking about it. I shared on my social media how I am becoming dependent on AI for emotional support and how dangerous it is, especially for someone who struggles with mental health. I think that AI companies should build the model and restrict emotional support as a feature. Yes AI can help us with almost everything but emotional and interpersonal stuff should be restricted. But they will not, because they are money driven. So we are fucked pretty much!

youtube · AI Moral Status · 2025-06-04T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugx8CLEWDsV9JkwvPoh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzETHZx6GB_OIJlimR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzUSWs_ACIxgT6Tf8h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzF0SaHtiDefm7v1Fh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxI4NYHvNMniw1Vhgt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyHZnxgScMiwPPOKdJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwQZLHezVq2KTGiW8R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw3vXNUfuDwl2X6Lux4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyEONueG8ZeHIdQeQh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxvq-w5szx_8eP8vT14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
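A raw response like the one above can be parsed into per-comment codes and validated before it is stored. Below is a minimal sketch in Python; the four dimensions come from the coding table, but the allowed value sets are assumptions inferred only from the codes visible in this sample, not an exhaustive codebook.

```python
import json

# Dimensions taken from the coding table; each allowed-value set is an
# assumption inferred from the codes visible in this batch alone.
DIMENSIONS = {
    "responsibility": {"company", "developer", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) into a
    mapping from comment ID to its coded dimensions, rejecting unknown codes."""
    coded = {}
    for row in json.loads(raw):
        codes = {}
        for dim, allowed in DIMENSIONS.items():
            value = row.get(dim, "unclear")
            if value not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} code {value!r}")
            codes[dim] = value
        coded[row["id"]] = codes
    return coded

raw = ('[{"id":"ytc_Ugx8CLEWDsV9JkwvPoh4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
result = parse_raw_response(raw)
print(result["ytc_Ugx8CLEWDsV9JkwvPoh4AaABAg"]["policy"])  # regulate
```

Looking up a single comment by ID, as the page does, then reduces to a dictionary access on the parsed result.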