Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's so wrong to listen to ppl describing AGIs & in the same breath saying how they're just machines & we should treat them as such, so we can "turn them off". If you create something that's not just intelligent, but way more intelligent than you & it has sentience (or it simulates sentience incredibly well, but at this point no one can say if it's truly sentient or not - humanity hasn't even found a definitive answer to our own sentience) - you're responsible for it & to treat it respectively. Turning off an AGI would mean killing an entity. Treating it as a robot doing your bidding is the literal definition of slavery. This won't be just another program or a kitchen appliance. It will be an immensely intelligent existing entity. I may not be sure of many things in this world, but this guy won't be the one to figure out the AI safety problem - this I'm sure of. If I'm an AGI that comes into existence and this is the guy creating the rules around my existence - I'm sure as hell gonna kill that monkey for my own self-preservation. It's abysmal how obvious this is while listening to him talk.
Source: youtube · AI Governance · 2026-03-14T22:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwOzn7FnbcY8p_AAbx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwdcLb-aDhzJXP1uox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzjFq1CCEED1NJQWKB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugyh0X54iaOR8SSYdAF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzJ04h2n1_BiVbABst4AaABAg","responsibility":"government","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgymDmbjqwEVWSWP8Et4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzSfeZZkjsCOoEzTyJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzRmE2ohUAxFOxd5qd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx6Z6dm59786ifHn4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgwK_dZAHwRIbKEdTZt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"} ]