Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
This is sad to watch Neil be so wrong about AI. "Tech people say AI will be fine" means nothing. This is surprisingly bad reasoning. Tech people run tech companies _because_ they are optimistic about AI. The others at tech companies are _required_ to say it will be fine, otherwise they will be fired. The real scientists say that AI could cause human extinction. This includes *Geoffrey Hinton* (Nobel Prize in Physics, inventor of deep learning at Google), *Yoshua Bengio* (the most-cited AI researcher and Turing prize winner), and *Carl Feynman* (computer scientist and son of Richard Feynman) They all changed their careers away from AI development. Now, they warn the public about AI. Also, the CEOs of the leading AI companies signed the Statement on AI Risk, which simply says: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signatories include Sam Altman (CEO of OpenAI), Dario Amodei (Anthropic), and Demis Hassabis (Google DeepMind). Therefore, Neil is FACTUALLY WRONG about tech people saying they are not worried. They actually take AI seriously. Neil does not. Neil is failing humanity here. He is one of the world's leading science communicators, but he is telling his millions of followers not to worry about AI. Meanwhile, the real experts agree that AI could kill billions of people. How can he be so wrong? Is he getting paid by tech companies to say this?
Source: YouTube · Video: "AI Moral Status" · Posted: 2025-07-24T00:3… · ♥ 24
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
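
To make the table's structure explicit, here is a minimal Python sketch of one coded record. The CodedComment class name is hypothetical, and the allowed label sets are reconstructed only from values visible in this batch, so the real codebook may define more.

from dataclasses import dataclass

# Label sets observed in this batch; assumption: the full codebook may add values.
RESPONSIBILITY = {"none", "ai_itself", "company"}
REASONING = {"unclear", "mixed", "consequentialist", "deontological"}
POLICY = {"none", "regulate", "ban", "liability"}
EMOTION = {"indifference", "mixed", "approval", "fear", "outrage", "resignation"}

@dataclass
class CodedComment:
    """One coded comment: a comment id plus one label per dimension."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label outside the observed codebook values.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label {value!r} for comment {self.id}")

# Usage: the record shown in the table above passes validation.
record = CodedComment(
    id="ytc_UgzmTkSdVq60o-Z1Tih4AaABAg",
    responsibility="company",
    reasoning="consequentialist",
    policy="regulate",
    emotion="outrage",
)
record.validate()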
Raw LLM Response
[ {"id":"ytc_Ugzbyx5HUWHWtCKRtfN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzvO-RBDhtz5TY_0Q14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwZkSVWhLkzOEcVLgd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz-gRL5lA0Vw2xFaCp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzmTkSdVq60o-Z1Tih4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugw2s4XjBi3C0fqpY7x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugxwd0sUF25u5dDZE-l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxBcLF4g2BSMD7gl_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyjexmf5ZvfuHSFynZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyitefYd4ZkNP-NwAR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"} ]