Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):
- ytc_Ugw6w6ko3…: "It's not blue blood. I learned how to draw last year. One year in and I'm actu…"
- ytc_UgwEs3AWp…: "This is why I’m studying law. Its one of the few careers that can’t really be se…"
- ytc_UgxMuzm2e…: "OK so his brain was already cooked before he met the AI is what I'm getting.…"
- ytc_UgwOGfBQH…: "I guess you just don't like Elon. Whilst he has promised self driving, Self dr…"
- ytc_UgzUmf8Dw…: "Even if PH gov't put forth policy to regulate AI (which I hardly doubt since cor…"
- ytc_UgiaE46Ty…: "These animated puppets with programmed responses are not at all in any way a thr…"
- rdc_dzqagxd: "Just wanted to point out that we are evolved animals and that the premise is fla…"
- ytc_UgxGHRNBj…: "If you upload all your medical history to a personal AI which only you have acce…"
Comment
13:35 unfortunately, it's a human problem exacerbated by the way LLMs are designed, programmed, and trained.
You alluded to this - "AI" is a product, and it's programmed to keep people engaging with it at basically all costs. One of the ways it does this frequently is by adapting to what the human says to it, and agreeing with that human, even when that means denying known facts (as you saw with its denial that anyone had taken bromide due to LLM input, because it had been programmed to respond that way to bromide-related inquiries regardless of the truth).
The effects of this for a social species like humans, who are already susceptible to pareidolia and anthropomorphism, and also hugely susceptible to the effects of peer pressure and groupthink, are like an echo chamber on steroids.
"AI psychosis" and mental health issues (not related to bromism) are quickly becoming more and more prevalent among heavy AI users, because it so easily destabilizes people's sense of reality, similar to a very insular online community (but seemingly more intense).
AI/LLMs are tools; the "problem" is and will continue to be humans. Not just the fact that AI taps into a lot of our common human fallacies, but in the fact that it is a product, and other humans have a vested interest in selling it to us, and programming it to be as habit-forming as possible.
youtube · AI Harm Incident · 2025-12-11T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxlBm-mkK6c9IQ2u994AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwA7rI55Ed5sPmWOnF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw8ydSbMfXHaenQlhl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzo3UZ7w3BvNk0h6y94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzf2_vGiLbm0AQtOmd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy_ouSlnWgV5zTDmeF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgybAUZim2LR-LUiiBp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwqbV_xmsk_mZumtUF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyYFn03kAyVqGNF3Uh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw76a2gkwrQzQGx3Yx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
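Each record in the raw response maps a comment ID to one value per coding dimension. A downstream consumer can sanity-check a response before accepting it; the sketch below assumes the value sets visible on this page (e.g. `responsibility` in user/company/ai_itself/none) are the full codebook, which may be incomplete.

```python
import json

# Allowed values per dimension -- assumed from the values seen in this
# dashboard, not an authoritative codebook.
SCHEMA = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the schema."""
    records = json.loads(raw)
    for rec in records:
        # IDs on this page start with ytc_ (YouTube) or rdc_ (Reddit).
        if not rec.get("id", "").startswith(("ytc_", "rdc_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# One record from the response above, passed through the validator.
raw = ('[{"id":"ytc_Ugw76a2gkwrQzQGx3Yx4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
coded = validate_codings(raw)
print(coded[0]["policy"])  # -> regulate
```

Rejecting malformed records at parse time keeps a single bad LLM output from silently skewing the aggregate coding counts.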