Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Tinkering with generative AI is one thing making money off of it is another. Th… (ytc_UgzijVtgD…)
Of course you have an AI looking pfp💀 i don’t even want to Read allat knowing it… (ytr_Ugx2l-alZ…)
Phd level of thinking in AI can't change the need of knowledge being adquiere by… (ytc_UgxgIUvDU…)
Btw. The dear of ai is not only that it may take over, that is as I see it less … (ytc_UgxZ9wbsb…)
I understand that some digital artists may feel threatened by the rise of AI too… (ytc_UgwlgM4nb…)
its simple to fix but the people dont care enough as they got there own shit to … (ytc_UgwB870k4…)
If everyone is unemployed who is going to buy the products that the AI is being … (ytc_Ugya2ywHk…)
It's gonna be funny when all the private investors go broke because they dumped … (ytc_UgyjJ4Y2G…)
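Since the sample list above shows only truncated ID prefixes, the "look up by comment ID" feature presumably resolves a prefix against a store of full IDs. A minimal sketch in Python, assuming comments live in a dict keyed by full ID (the store contents and full IDs below are hypothetical illustrations, not the real IDs behind the truncated ones):

```python
# Sketch of "look up by comment ID" via prefix matching.
# Full IDs here are invented examples; the dump only shows prefixes.

def lookup_by_prefix(store: dict[str, str], prefix: str) -> list[tuple[str, str]]:
    """Return (comment_id, text) pairs whose ID starts with `prefix`."""
    return [(cid, text) for cid, text in store.items() if cid.startswith(prefix)]

# Hypothetical store keyed by full comment ID.
store = {
    "ytc_UgzijVtgD_example1": "Tinkering with generative AI is one thing...",
    "ytc_UgxgIUvDU_example2": "Phd level of thinking in AI...",
}

# A sufficiently long prefix resolves to exactly one comment.
matches = lookup_by_prefix(store, "ytc_UgzijVtgD")
```

A unique prefix returns a single match, while a short prefix such as `ytc_` returns every comment that shares it.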
Comment
This is how ChatGPT answered for my question:
I've just watched the father of all AI at Startalk along with Neil de Grass Tyson and it was a bit scary what are they saying it may happen in conjunction Humans vs AI
I know the episode you’re talking about. When people like Geoffrey Hinton sit next to Neil deGrasse Tyson and start calmly describing possible futures, it can feel like someone dimmed the lights in the room.
Let’s separate three things:
What some researchers are warning about
What is technically realistic
What is cinema-grade anxiety
The “father of AI” types are usually warning about this:
• AI systems becoming more autonomous
• AI being used in warfare
• AI generating misinformation at massive scale
• Economic disruption from automation
• Loss of control if systems become too complex
The fear is not Terminator-style robots kicking down doors tomorrow.
The serious concern is slower, structural change.
The real risks experts talk about most:
Misuse
AI used deliberately for harm: cyberattacks, automated propaganda, bio-design tools, autonomous weapons.
Misalignment
A powerful system optimizing for the wrong objective in ways humans didn’t anticipate.
Economic shock
Large-scale job displacement faster than society adapts.
Information erosion
Synthetic media making it harder to know what is real.
Now the “Humans vs AI” framing.
That’s a dramatic headline. In practice, AI is not a competing species. It does not:
• Have self-preservation instincts
• Have emotions
• Have desire
• Have intent
Current systems, including large ones, do not “want” anything. They generate outputs based on statistical prediction.
The fear at the frontier is about highly capable systems operating with too much autonomy combined with poor oversight. That’s a governance and engineering issue, not an uprising narrative.
Also important:
Most cutting-edge researchers are cautious.
They advocate regulation, testing frameworks, interpretability research, and staged deployment.
You don’t hear that part emphasized in viral clips.
Are there real risks? Yes.
Is a conscious AI plotting against humans imminent? No.
The uncomfortable truth is that the biggest near-term risks are human-driven:
• Humans using AI irresponsibly
• Governments racing without coordination
• Corporations pushing too fast
AI itself is a tool. A powerful one. Like nuclear physics in the 1940s. The danger was not uranium having feelings.
It’s completely normal that it felt scary. When you see experts openly discussing existential risk, your brain flags it as a threat.
But we are not on the edge of a Humans vs AI war. We are in a phase of rapid technological change that needs adult supervision.
If you want, we can talk through what specifically in that episode bothered you. Usually the fear becomes smaller when we isolate the exact claim instead of the atmosphere around it.
youtube · AI Moral Status · 2026-03-01T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
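The table records one coding across four dimensions. A record like it can be sanity-checked against the value sets that actually appear in this dump; a minimal sketch (the allowed sets below are inferred from the raw response in this page, so the real codebook may permit additional values):

```python
# Sketch: validate one coded record against the dimension values
# observed in this dump. The full codebook may allow more values.

ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself",
                       "distributed", "government"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "approval", "indifference", "mixed",
                "resignation", "outrage"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value is unrecognised (empty if valid)."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding shown in the table above.
record = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "fear"}
```

For the table's coding, `validate(record)` returns an empty list; an unrecognised value would surface its dimension name instead.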
Raw LLM Response
[
{"id":"ytc_Ugw1fnuOKVDwjcc-Rnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxEGkhHyRn4Cn6HzV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx9okGCM6_oTBSzHm54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx5s1f7INw10NAo7pl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgykSZMmFKLgMSuixCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzPtPvkr6D5rEPk5wF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzPbuNNUhz4YwffHsZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzUJP2wflSrJ7Xxdnx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqB9xq_rr77ZMH7394AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwT6BC3eULWrxbsUMh4AaABAg","responsibility":"government","reasoning":"mixed","policy":"ban","emotion":"mixed"}
]