Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here are some of the logical fallacies that Dr. Farquaad himself made:

1. Argument from Personal Incredulity: Claims AI safety is impossible because he (and his Farquaad friends), after 20 years (but sometimes it takes 50+ years - a fallacy within a fallacy) cannot conceive solutions.
2. Equivocation: Labels AI dev "unethical experimentation" by broadly defining "human subjects," bypassing consent. In other words, we're too dumb to consent.
3. Ad Hominem: insults you and me, again, by calling us "NPCs" in a simulation. He assumes he is right without proving the simulation and that consciousness is emergent, not fundamental; thus dismissing Hoffman/Castrop's science - because he thinks they are NPCs

These people, him included, are technocrat Farquaad vicars. "Just believe us, you're too dumb, we'll save you! Give us more money, power, and control to protect you from the monster we're creating, that we also want you to fall in love with. It's inevitable. If we don't make it someone else will, it might as well be us." But who knows, the T800 ended up saving humanity.
Source: youtube · AI Governance · 2025-09-04T19:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          outrage

Coded at 2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugx_bV1jwLAjuNilkOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxeb0e3BsIISpa6Qr54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"confusion"}, {"id":"ytc_UgwTDdEgXsZ7_fOv1OV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgztFGr4QwQqe2QA7kR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyfMH21s_XWjLyY2Sx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwWStSA1qosnBpGQvR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyjkcyiXxGHu13gCAt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugwx79llVT16gbB0P6Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxsGLDsfs5jZMktyDh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxJXvUbV2lGnUpDP-B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]