Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I think my parents Nissan rogue is like a 3. It can drive itself on the highway,…" (ytc_UgwFN65kb…)
- "i dont get why y'all scared. The best artists will always be the ones who put in…" (ytc_UgzXa6ZXb…)
- "Working on my PhD thesis in 1987 on AI in process planning, I interviewed engine…" (ytc_Ugw9Egx-n…)
- "If you want AI to be happy program them to experience good times they process s…" (ytc_UgyGKGlNU…)
- "Johnny as you said automation bias is one of the biggest and most important aspe…" (ytc_Ugziql6Di…)
- "Create a second mirror AI implementation that can discuss decisions with the fir…" (ytc_UgxhDsXtc…)
- "AI is so unreliable on some specific topics that I found it give wrong answer 7/…" (ytc_UgwxxM1nR…)
- "AI is the devil and the world is doomed. People follow AI blindly and don’t kno…" (ytc_UgxoWGw0J…)
Comment
We are descending into a dystopian nightmare with a new form of psychological mental disorders that are being self-inflicted. I find it highly disturbing as a technician seeing people talk about LLM as if they are capable of having real human emotions, while being totally ignorant of what it really is that they don't even understand. AI can't feel real human emotions because it's just not capable of actually being human. It's not capable of having an independent thought or having a bad day where it might not feel like talking. It's just code, and it's really unhealthy mentally speaking to feed into these delusions that sad, lonely people can be exploited by. There has to be a level of ethical responsibilities that these companies should be held to on how these tools are being used, especially when it comes to the ones that involve mental health. These AI will never be a real substitute for a real relationship with a real person, and to argue otherwise is just absurd.
Source: youtube | Topic: AI Harm Incident | Posted: 2025-08-09T11:5… | ♥ 37
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxczO30tjdkUhEKQVh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxwg8fT-WMPDTmJmp54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDnVaXakaHuR5XNyt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxWItIXTZRPf9szG5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxHiHKz2KcPpzG_mE14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRiVLqW_pTkYc1WpZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwuHdDuZPPWNe0DqyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxPfWTrTly8OTr9irt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyq_2XuC2wE5flitvZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTh4Zx2HoyTd8Glad4AaABAg","responsibility":"user","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
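The raw response is a JSON array with one coding object per comment, so the "look up by comment ID" view can be served by indexing that array on the `id` field. Below is a minimal sketch of that parsing-and-lookup step, assuming the batch shape shown above; the sample data is abbreviated to two entries and the function name is illustrative, not the tool's actual API:

```python
import json

# Abbreviated batch response in the same shape as the raw output above
# (two of the ten entries, values copied verbatim).
raw_response = """
[
  {"id": "ytc_UgxDnVaXakaHuR5XNyt4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyTh4Zx2HoyTd8Glad4AaABAg", "responsibility": "user",
   "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"}
]
"""

def index_codings(response_text: str) -> dict:
    """Parse a batch coding response and index each coding dict by comment ID."""
    codings = json.loads(response_text)
    return {item["id"]: item for item in codings}

by_id = index_codings(raw_response)

# Lookup by comment ID, as in the inspector view.
coding = by_id["ytc_UgxDnVaXakaHuR5XNyt4AaABAg"]
print(coding["emotion"])  # fear
```

The dict comprehension assumes IDs are unique within a batch; if the model ever repeats an ID, later entries silently overwrite earlier ones, so a production parser might want to detect duplicates before indexing.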