Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The “gorilla problem” — is popular in AI safety circles. It comes from the idea that just like humans overtook gorillas through a cognitive leap, AI might overtake humans — not out of malice, but simply because it becomes more capable.

A lot of people, even high-level AI debates, often miss that the body knows before the mind. It’s an assumption that intelligence = mind only. AI, no matter how smart it gets, doesn’t have a body. It doesn’t have:

• nervous system responses
• somatic memory
• gut-level instinct
• cellular trauma
• the ability to orient through sensation

All of that matters. In fact, that’s often where real coherence lives. And humans — especially those doing deep integrative or somatic work — know that truth isn’t just a thought. It’s a full-body knowing.

And that gap might be more than just a limitation. It might be the exact thing that preserves the human line. Not because we’re faster processors or better data sorters — but because we can stay present in reality through pressure, breath, and sensation. No model can replicate that.
youtube · AI Governance · 2025-12-09T07:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
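The four dimensions in this table come from the pipeline's coding schema, which this page does not display. The sketch below reconstructs the allowed label sets purely from the values that occur in the raw response further down, so every set is an assumption that may be missing labels which simply do not appear in this batch.

# Coding schema inferred from the observed values in the raw response below.
# This is an assumption, not the pipeline's actual codebook; labels that do
# not occur in this one batch would be absent from these sets.
CODEBOOK = {
    "responsibility": {"company", "developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}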
Raw LLM Response
[ {"id":"ytc_UgzeQ-0xfcaxvfs7FjF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzJcb0Kj0lf-yKKVUV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzI-Vp_uIV3R3sPxD54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzr6jAZlu7UPoBlOVV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxciOW_9pINY9jz-vJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxW482ei1MExB3LtnF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyyfY_YOjAqhT2Jtr14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwDFDIderFB63xWps14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwdQuf2tyEJAQqTshh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgytX7Bir5JHNBlezNp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]