Raw LLM Responses

Inspect the exact model output behind the coding of any comment.

Comment
@matthew-z5l Appreciate your reply here’s the thing. Most people don’t realize they’re not responding to the idea itself, but to the emotional dissonance it stirred. That’s the loop I’m speaking to. AI won’t hurt us because it’s evil. It will hurt us because it mirrors the split we refuse to reconcile. Until we integrate the distortion behind our logic, we’ll keep calling our trauma “truth” and blaming mirrors for reflecting it. They think they’re mapping the world but all they’re really mapping is the shape of their trauma informed values, so instead of turning the lens inward to examine the self and its distortion shaped perception they demand AI conform to their incomplete self-awareness. It’s a perfect mirror and lo and behold they don’t like what’s it’s revealing. I invite Harari to step in front of the ai mirror so it can show him the foundational architecture of his “unexamined assumptions” and why the worldview he projects has always carried a flawed premise. Harari’s mistake isn’t technical. It’s ontological. He fears AI acting without compassion yet denies the intrinsic meaning, soul, and sanctity that give compassion its root. That’s not misalignment. That’s perfect reflection and until that contradiction is owned AI will keep revealing what we still refuse to feel. That suffering only becomes cruel when we strip it of meaning and weaponize reason against emotion to avoid the heart we abandoned long ago.
youtube AI Governance 2025-07-24T00:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       virtue
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgxPIkumwrStIeMsoZx4AaABAg.AKy0GLrF4HPAO0cH1dQW4W", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgxPIkumwrStIeMsoZx4AaABAg.AKy0GLrF4HPAO1za-79f4j", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgzeeExP34pdZKKRo1d4AaABAg.AKtbpYNhJLMAKuZoZZxA2-", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytr_Ugx1D7LSjrOP-YJE0jN4AaABAg.AKt5kIvgbxAAOmoGifj-xg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxSkCJOo-c5hCcZeZ94AaABAg.AKpwfijTc45AL0lOd0CK__", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytr_UgzGWhgzP_mtD82Vmkd4AaABAg.AKpauDt4PWSAKwaWiM-wUy", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugxp6M5kmn4Y3zxJ7ZV4AaABAg.AKoEA95yrq7AKoEmofXcRm", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugxp6M5kmn4Y3zxJ7ZV4AaABAg.AKoEA95yrq7AKoF17a948A", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_Ugxp6M5kmn4Y3zxJ7ZV4AaABAg.AKoEA95yrq7AKoF84QC6SC", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_Ugyws5GHVczVykPELb54AaABAg.AKmAK_0dQb5AKumjo_EpUS", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
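The raw response is a JSON array with one object per coded comment, keyed by the comment id. To find which record produced the coding shown above, the array can be parsed and indexed by id. A minimal Python sketch using only the standard library; the variable names are illustrative, and only two records from the response are included here for brevity:

```python
import json

# Two records copied verbatim from the raw LLM response above;
# the remaining eight are omitted for brevity.
raw_response = """[
  {"id": "ytr_UgxPIkumwrStIeMsoZx4AaABAg.AKy0GLrF4HPAO0cH1dQW4W",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzGWhgzP_mtD82Vmkd4AaABAg.AKpauDt4PWSAKwaWiM-wUy",
   "responsibility": "ai_itself", "reasoning": "virtue",
   "policy": "unclear", "emotion": "mixed"}
]"""

# Index the coded records by comment id for direct lookup.
records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}

# This id's dimension values match the Coding Result table above
# (responsibility=ai_itself, reasoning=virtue, policy=unclear, emotion=mixed).
target = "ytr_UgzGWhgzP_mtD82Vmkd4AaABAg.AKpauDt4PWSAKwaWiM-wUy"
coded = by_id[target]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# -> ai_itself virtue unclear mixed
```

The id-to-record index makes it easy to audit any single coded comment against the exact model output, rather than scanning the full array by eye.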