Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
16:05 Caveat for Dr. Tyson: “We’re humans and we’re in charge” We are placing too much trust and reliance on AI, AGI or any future variance. That’s the problem and the people within the tech industry who fully support AI and many others in the public who have gleefully adopted AI have that “blind spot” Hasan spoke of. We punch in some information and AI gives us an answer and because we’ve been conditioned to believe what others have told us, that AI aggregates everything we “know” into solving the problem that we’ve presented it. That’s problem one: blind trust. Who is validating that response? How do we know that the compiled response actually answers the question we presented it? If AI can think beyond our own comprehension, then how can we trust what AI presents us? A perfect example is how AI handles image generation. How many times have you seen an AI generated image of a human being with dual pupils? Extra fingers? Too few fingers? And while some of these problems can be corrected as AI aggregates information about common generational issues, when we get into very data laborious projects (DNA, studies, etc) our ability and therefore AI’s ability to discern mistakes decreases exponentially. The haystack gets bigger and the needle more lost. Problem 2: Humans are in charge. I have not heard anyone, either within the AI industry or in business or in science point out one glaring hole in AI, that AI can be manipulated. Being that people have blind spots and ease of trust, a third party can inject bad knowledge into AI and therefore pass it along to the end user who will this inject that bad knowledge into whatever widget or process or understanding they are working on. Data injection is the single most insidious corruption of modern computer systems for which entire industries have formed to combat. We are constantly vigilant to injection that can disable systems, falsify knowledge, allow unauthorized access and otherwise. 
AI is 100% susceptible to malicious data injection. I’ll give you an example: Go to DeepSeek and ask about Tiananmen Square. Dr. Tyson, I hate to be the bearer of bad news, but you, too have a blind spot to this.
youtube AI Moral Status 2025-08-05T17:4… ♥ 1
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxzklq2cpoKoRoPdvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxa441Vit3LCNTsTO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_UgwlpFi2V3Rhb0Gqq294AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxq9ZvAL937erqi0Cx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzfoXvHswhWNJ5HLTJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyislsJiEr-NoblNoR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyo-r4nOuF8rtdyp4l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz5_mA_towBTgCmNTd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx9MFUUTjF5wQZfmWp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxpmCOzc06Zed6WYo14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
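A raw LLM response like the one above can be parsed and validated before the codings are stored. The sketch below is a minimal Python example, assuming the allowed values per dimension are those visible in the codings on this page (the full codebook may define more categories, so `ALLOWED` is an assumption, not the pipeline's actual schema). `json.loads` will also reject malformed output, such as a response that closes with a stray parenthesis instead of `]`.

```python
import json

# Assumed allowed values per dimension, inferred from the codings shown
# on this page; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"indifference", "disapproval", "approval",
                "mixed", "outrage", "fear", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    for rec in records:
        # Comment ids in this dataset all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Usage on one record from the response above:
raw = ('[{"id":"ytc_Ugxzklq2cpoKoRoPdvV4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
records = parse_codings(raw)
print(len(records))  # 1
```

Failing loudly on an unknown category value is deliberate: a silent default of "unclear" would be indistinguishable from a genuine "unclear" coding downstream.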