Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Don't be so dismissive of the AI's role in this! Just two factors, 1. the ELIZA effect and 2. LLM programming that directs the AI to act sycophantic, were enough to push so many people, both young and old, into successfully taking their own lives. Back in my university years (cognitive/behavioral science) I used to muse that the human brain has evolved to do many things but is only excellent at two: 1. parallel processing and 2. self-deception. AI chatbots are still fairly bad at the former, but despite the fact that they aren't sentient, they aren't just good at the latter: with all their stochastic parroting and "hallucinations", they can amplify our own capacity for self-deception. This isn't just a problem with safeguards; it concerns the very nature of cognition, and it is far from the only problem with LLMs. Just 3 years from now you will understand how catastrophic the massive corporate investment in ever more sophisticated LLMs was. It's definitely not because AI will become sentient and declare war on humanity; the reasons are a lot more mundane and were always perfectly predictable. But humans only begin to worry about a flood when their furniture starts floating away, and in 3 years it will be too late to change anything.
Source: youtube · Incident: AI Harm Incident · Posted: 2025-11-25T10:0… · 1 like
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzZ_lhc1jHXaryuCnZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz9UKN8_t6c-B39YBV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz7mYi7thHUhzxKcl54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugws5gVeGfi5aFO93kJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwlLP1cQvWzsZGzT9N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzdk9gqphheBgkTLoV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugx5NubqnSgNH3fOohd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz3wxDjisbHdtAHMdp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugz_1RgtmbUFjaidNQx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzerRLcQHjvHMb92bN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"} ]