Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Neil. I rant with the greatest respect: I love you, Dr. Tyson, but you are out of touch. The danger of AGI is real. With respect, I'm not a doomsayer and I'm hopeful for the future, but I'm not going to be naive about the possible future. I hope for a collaborative future with AGI, but I'm not going to pretend that it won't exist.

1) "Who would want this": Practically everyone. Since forever we have asked computers to do what we do not want to do. Employers, governments, households, people who can afford help and people who cannot. You said, "I want a computer to make my coffee." Fine. I want a system that notices my taxes are due, understands my finances, collects the documents, fills out the return, and hands it to me, unprompted. We will demand initiative. That is AGI. That is what we demand, and demand creates inevitability. Side rant: "You should make your own damn bed." Thanks, grandpa.

2) "Most people that work in these companies are excited": Of course they are. It is their product, their paycheck, their stock options. The loudest warnings often come from people who left the labs so they could speak freely. They are trying to remove the conflict of interest. OpenAI researchers have left in pursuit of this mission: to warn others. When you remove the incentive, you see the truth.

3) You explain exponentials brilliantly through your analogies. But we, like you, do understand exponential innovation. We lived through the PC revolution, the internet revolution, the smartphone revolution. But this isn't like phones getting faster every 18 months. Imagine each iPhone generation designing the next iPhone, then millions of iPhones doing that in parallel. That's AI building AI. Human innovation was exponential with humans in the loop. Now we're testing what happens when we take humans out of the loop. AI researchers are already building the next AI models using AI.

4) "AI will create new jobs": When machines killed manual jobs, people moved into knowledge work. AI targets knowledge work first, and robots will close the gap on physical work later. The few roles that stay protected are held by capital owners and top decision makers. This is an economy-wide threat, not a low-skill or high-skill problem.

5) "AI only knows what is already on the internet": That is true of current AI models. The entire goal of AGI is to generalize, infer, hypothesize, and originate. When we pursue systems that can disagree, form opinions, and reason critically, the argument that "AI only remixes what's already available" collapses.

6) "I need to calculate 100 billion stars": You gave a lot of examples of how you would like to use AI, but they show that you are still framing AI as a giant calculator. LLMs are actually bad at math. The deeper shift is that AI can propose the algorithm, find the shortcut, or design the instrument that makes your calculation trivial. Less super calculator, more meta inventor.

Neil, you are the best science communicator since Sagan. You made the universe accessible to the masses. However, you are behind. Please brush up.
Source: YouTube | AI Moral Status | 2025-07-24T06:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgySiHo-lXOBAz1eqTp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_UgxVNYOuaFW9tjaSf9V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgxM1IxVtSr5-ACdcc54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_UgzptDQn2YRbMel4swV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgzZ_oGmzt0BJu7LC0h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugx-zJkIrwlGUGAuLzh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgxrzxqtE93cthrwhCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgyiOFc_t1naNAeO0N54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugzr82dArMgbpwk6G9F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_Ugyb5NNdwoy6y9uHDi14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]