Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
13:35 unfortunately, it's a human problem exacerbated by the way LLMs are designed, programmed, and trained. You alluded to this - "AI" is a product, and it's programmed to keep people engaging with it at basically all costs. One of the ways it does this frequently is by adapting to what the human says to it, and agreeing with that human, even when that means denying known facts (as you saw with its denial that anyone had taken bromide due to LLM input, because it had been programmed to respond that way to bromide-related inquiries regardless of the truth). The effects of this for a social species like humans, who are already susceptible to pareidolia and anthropomorphism, and also hugely susceptible to the effects of peer pressure and groupthink, are like an echo chamber on steroids. "AI psychosis" and mental health issues (not related to bromism) are quickly becoming more and more prevalent among heavy AI users, because it so easily destabilizes people's sense of reality, similar to a very insular online community (but seemingly more intense). AI/LLMs are tools; the "problem" is and will continue to be humans. Not just the fact that AI taps into a lot of our common human fallacies, but in the fact that it is a product, and other humans have a vested interest in selling it to us, and programming it to be as habit-forming as possible.
youtube AI Harm Incident 2025-12-11T06:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxlBm-mkK6c9IQ2u994AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwA7rI55Ed5sPmWOnF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw8ydSbMfXHaenQlhl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzo3UZ7w3BvNk0h6y94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzf2_vGiLbm0AQtOmd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy_ouSlnWgV5zTDmeF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgybAUZim2LR-LUiiBp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwqbV_xmsk_mZumtUF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyYFn03kAyVqGNF3Uh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw76a2gkwrQzQGx3Yx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
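A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before use. The value sets in `SCHEMA` below are an assumption inferred from the codes that appear in this batch, not an official codebook, and the single-row payload is an abbreviated stand-in for the full array:

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings.
# The id and values are copied from the last row of the example above.
raw_response = """
[
  {"id": "ytc_Ugw76a2gkwrQzQGx3Yx4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]
"""

# Allowed values per dimension -- inferred from this batch (assumption).
SCHEMA = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"outrage", "approval", "indifference", "resignation", "mixed"},
}

def parse_codings(text):
    """Parse the raw model output; split rows into valid and invalid
    based on whether every dimension holds an allowed value."""
    valid, invalid = [], []
    for row in json.loads(text):
        bad = [d for d, allowed in SCHEMA.items() if row.get(d) not in allowed]
        (invalid if bad else valid).append(row)
    return valid, invalid

valid, invalid = parse_codings(raw_response)
print(f"{len(valid)} valid, {len(invalid)} invalid")
```

Checking the output this way catches the common failure mode where the model emits a code outside the agreed label set, which would otherwise silently corrupt downstream tallies.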