Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Grok itself is directly Lying to you! people need to wake up ! No AI on the mark…" (ytc_Ugx0ltUyW…)
- "I always laughed at ppl who actually believed that AI was going to take all the …" (ytc_UgyJU43W0…)
- "Complete and utter rubbish. If the AI will do 99% of the jobs, who is going to c…" (ytc_UgzFbBqeY…)
- "@timogulas someone who owns a Tesla with full self driving supervised with super…" (ytr_Ugznp7yiK…)
- "It's coming faster than most realize, the smart phone has eliminated millions of…" (ytc_UgzF_qdRe…)
- "I’d rather talk to a younger sibling who just doesn’t know better than an ai bro…" (ytr_UgwW5zaCp…)
- "Driving is dangerous. This is asking AI to do something that humans can't. Event…" (ytc_UgyoK1cpL…)
- "You're making way too many assumptions that you cannot possibly substantiate and…" (ytr_UgzJhs4QN…)
Comment

Don't be so dismissive in specifically the AI role in this! Just two factors 1. ELIZA effect and 2. LLM programming which directs the AI to act sycophantic were enough to push so many people both young and old into successfully taking their own lives. Back in my university years (cognitive/behavioral science) I used to muse that the human brain has evolved to do many things but is only excellent at two. 1. Parallel Processing and 2. Self Deception. AI chatbots are still fairly bad at former but, despite the fact that they aren't sentient, - they aren't just good at latter, with all sorts of stochastic parrots and "hallucinations", they can amplify our own capacity for self deception. This isn't just a problem with safeguards, - it concerns the very nature of cognition, and this is by far not the only problem with AI LLMs. In only 3 years from now you will understand just how catastrophic massive investment of companies in more and more sophisticated LLMs was. It's definitely not because AI will become sentient and declare war on humanity, the reasons are a lot more mundane and were always perfectly predictable, but humans only begin to worry about flood when their house furniture begins floating away, and in 3 years it will be too late to change anything.
youtube · AI Harm Incident · 2025-11-25T10:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzZ_lhc1jHXaryuCnZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz9UKN8_t6c-B39YBV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz7mYi7thHUhzxKcl54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugws5gVeGfi5aFO93kJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwlLP1cQvWzsZGzT9N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzdk9gqphheBgkTLoV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx5NubqnSgNH3fOohd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3wxDjisbHdtAHMdp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz_1RgtmbUFjaidNQx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzerRLcQHjvHMb92bN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"}
]
```
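The raw response above is a JSON array of per-comment records, and the page's "look up by comment ID" feature amounts to parsing that array and selecting the matching record. A minimal sketch of that lookup, in Python — the dimension keys match the table above, but the sample data and the `lookup` helper are illustrative assumptions, not the tool's actual code:

```python
import json

# Hypothetical abbreviated sample of a raw LLM coding response; real responses
# contain one record per comment in the batch, as shown on the page above.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugws5gVeGfi5aFO93kJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwlLP1cQvWzsZGzT9N4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
"""

# The five dimensions every coded record is expected to carry.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def lookup(raw: str, comment_id: str):
    """Return the coded record for comment_id, or None if the response
    is not valid JSON or contains no well-formed record with that ID."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None
    for rec in records:
        # Skip malformed entries (missing dimensions) rather than crash.
        if isinstance(rec, dict) and REQUIRED_KEYS <= rec.keys() and rec["id"] == comment_id:
            return rec
    return None


rec = lookup(RAW_RESPONSE, "ytc_UgwlLP1cQvWzsZGzT9N4AaABAg")
print(rec["responsibility"])  # developer
```

Tolerating malformed entries (rather than raising) matters here because the model output is untrusted: a single truncated or mis-keyed record should not break lookup for the rest of the batch.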