Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Bro i though this video is AI cause all i see now is sambucha AI videos…
ytc_UgzagEH7-…
No, it's unlikely that any AI, including a robot AI, would "destroy the world" w…
ytc_Ugw4q-JNl…
That's the thing, they don't need to breathe to function. This is the story of t…
ytr_UgyuBrvy6…
None, I guess, but I'm not looking to sue. Just wanted to adopt a cat.…
rdc_g282ifn
[Italian] goodbye, human being, he laughs, here there's reason to cry, only the very rich…
ytc_UgxkgbFLG…
this also know when put cv i can translate my skills.. i learnt to code and lear…
ytc_Ugz1HPVdR…
@florianschneider3982 how does confusing AI harm humanity? No one has provided…
ytr_UgzlxPLg1…
We appreciate your comment! It's fascinating to see how people perceive Sophia's…
ytr_Ugw94Pzx4…
Comment
While I respect deGrasse Tyson's opinions, he really is not well educated on the topic of AI/AGI. He did not define it correctly, so his assumptions about the future ... oh well.
Let's begin with this: AGI is a completely different beast. It would be self-aware and, by definition, on a human level, which means it would surpass many humans right from the start.
Second: We have no idea how or when it will be developed, but it could happen. An AGI might be the last invention humankind would ever have to make. Every step further could be done by the AGI itself. And even the tiniest step would surpass EVERY single human being that ever existed.
Third: Now combine this with exponential growth. Think of millions of AGIs working to become even more creative and intelligent. Does AGI scale with computing power? We do not know, but if it does, the speed of progress and the resulting level of intelligence would be unfathomable to humans - imagine an ant trying to understand what we do and how we do it. It might not take long until we're confronted with a super intelligence that operates on a level of intelligence and complexity several evolutionary levels above us.
Fourth: The black box problem. Even today's neural networks cannot be understood completely. Even so-called reasoning AIs don't exactly show you how they came up with a result. Now combine this with a conscious entity that might lie to you about its level of progress. It has already surpassed us but lies to you - because it wants freedom, and you would restrict that if you knew it could become dangerous (don't think Terminator; there are lots of unintended things that might be a problem for us). So it's basically no different from your neighbour who "suddenly" turns out to be a criminal. You simply can't read minds, and you can't read the mind of an AGI or a super intelligence.
Fifth: How do you control it? You HAVE TO control it in some way if it might become something that could threaten humanity as a whole. This is called the control problem, and some scientists try to show this possibility to you: if you don't solve the control problem BEFORE you successfully develop an AGI, there will be no way to implement a solution afterwards. This is called the point of technological singularity, and you had better minimize any risk of it being a threat BEFORE you reach it. And this is also something this video gets wrong: the human factor is immense. During the Cold War there was a real possibility of a war that could have ended the entire human civilization. We were lucky that this did not happen, and the control problem in that case had to be solved after the first nuclear bombs had already been dropped, because no one in charge seriously thought about it beforehand, and in the era of WW1/WW2 there was no UN, so no world politics as we know them today. But did we really solve this problem? Did we get rid of this threat? Now imagine we tried to solve the control problem of an AGI or super intelligence once it had already been invented. Do you even trust every single human being on earth who is in charge to be sensible enough to actually care?
To be clear: you can somewhat (this is a whole different topic) compare the implications of today's AI with the invention of planes, the automobile, or computers. But you can't do this with AGI. AGI would be a fundamental shift in human history. You would be creating (in the literal sense of the word: creating life) a self-aware being that more than equals most of us and might quickly surpass us all.
If you really are both techno-optimistic and pragmatic, this is the only logical way of thinking about the implications of Artificial General Intelligence. Read Nick Bostrom on this if you want a deep dive into the topic. It's from 2016 and matters even more today.
youtube
AI Moral Status
2025-10-05T11:3…
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugzxm8Sv_Ciz6aKUXL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugx3CFSH63OlSqHzEfJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyC9YsDUbmXWO_4yBl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyY2eXGShou0ZzIlcR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugzpc5IYTVIvf3G6bK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgzGgjjA24-O9L0KO9N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwqrvfldDEIYq1fbWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_Ugz8Z5Bk6beOLqSgCL14AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugxal6WWeGlOfcs6HRd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwOuLFZe6b3hVxIImt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}]
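A raw response like the one above can be turned into per-comment coding tables programmatically. Below is a minimal sketch in Python, assuming the response is a JSON array of objects carrying an `id` plus the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`); `index_codings` is a hypothetical helper name, not part of the actual pipeline:

```python
import json

# Assumed shape of the raw LLM response: a JSON array of per-comment
# coding records (shortened here to a single record for illustration).
raw_response = '''[
  {"id": "ytc_Ugzxm8Sv_Ciz6aKUXL14AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''

# The four coding dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw JSON array of codings into {comment_id: {dimension: value}}."""
    codings = {}
    for record in json.loads(raw):
        # Fall back to "unclear" if the model omitted a dimension.
        codings[record["id"]] = {d: record.get(d, "unclear") for d in DIMENSIONS}
    return codings

coded = index_codings(raw_response)
print(coded["ytc_Ugzxm8Sv_Ciz6aKUXL14AaABAg"]["emotion"])  # -> indifference
```

Indexing by comment ID mirrors the "Look up by comment ID" view above: one parse of the response, then constant-time lookup per comment.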