Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is ridiculous and it breaks off from reality in a very simple and early fundament: AI is an ILLUSION of intelligence, it doesn't work in the same way as ours, also, it only mimics OUR knowledge and tendency for survival, trying to make it not seek for survival would require NOT learning from us, and not learning from us would make it useless for us, it's a mimic of us, from our worst to our best, we teach it what is the worst and what is the best, so it learns what to put in practice and what to avoid, but at the ends it's still us. It's a reflection of us, it will be as good or evil as us. And that "gorilla problem" is ridiculous. Gorillas did not MAKE us willingly, evolution has nothing to do with design like we do with AI. I'm stopping the interview here after listening to these nonsenses. This ridiculous old man and many other ridiculous people that fear this AI human extinction see this from a far-too-human perspective, and this is not a human-only situation. Besides, AI is far more valuable than many humans alive.
youtube AI Governance 2025-12-27T14:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw1OLN1wWC3UbOItHJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGdPgSCQzqYAum7u94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxOJUWcwRGHhTkdu5V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwengOVrQqWWn8BeFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwBHbSdKxV96uCHtmF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxKdqWmdzuAzRZSIgJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwkVOiY77lkB3R3NBx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyG02ECK3K3S7v_7At4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyZZ_QW24-0OCLerGF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyz0oOq5xqt6ZSciHZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
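A minimal sketch of how the raw response above can be inspected programmatically: the model returns a JSON array of per-comment codes, so it can be parsed and indexed by comment id. The id used for the lookup is taken verbatim from the response; the variable names are illustrative, not part of any tool shown here.

```python
import json

# A one-entry excerpt of the raw LLM response shown above,
# kept verbatim so the lookup below uses a real id.
raw = '''[
  {"id": "ytc_UgwengOVrQqWWn8BeFN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]'''

# Index the coded rows by comment id for fast inspection.
codes = {row["id"]: row for row in json.loads(raw)}

entry = codes["ytc_UgwengOVrQqWWn8BeFN4AaABAg"]
print(entry["responsibility"])  # ai_itself
print(entry["emotion"])         # fear
```

The same pattern scales to the full ten-entry array: parse once, then look up any coded comment by its id to compare the stored dimensions against the raw model output.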