Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
Hank, I respect you very much. I've watched your videos since I was a teen and your ability to communicate is a gift to the scientific community. However, I have some critical feedback to deliver.
Just like you say around 16:15, it would be healthy for you to recognize that you are out of your depth and this is beyond your knowledge.
The key shift I would encourage you to make is to stop viewing AI in anthropomorphized terms, and instead view it in terms of computation. AI doesn't have agency, intent, or desire. It doesn't think in words and it doesn't "know" anything in the same way that a database doesn't "know" anything. All of these qualities suggest a sense of self that empirically does not exist. Instead, AI should be thought of as a new form of computation. Specifically, it is a computational graph that is encoded in a high dimensional field. From this lens, intent becomes optimization, thought becomes computation, memory becomes a multilayer perceptron, and hallucination becomes parameters fitting to the wrong curve.
Viewed as a scientific discovery, AI is clearly one of the most exciting and important fields being developed. The math matches a degree of elegance similar to all great discoveries, suggesting we are learning a fundamental truth about our universe. In fact, researchers are making connections between neural nets and quantum fields, philosophers are drawing parallels to how our own minds function and understanding what makes us uniquely human, and mechanistic interpretability is pushing forward the mathematics that helps us to understand the high dimensional fields which govern our world, and seemingly, our intelligence.
There is no doubt that there are risks on par with the Manhattan Project. But just as nuclear power gave us both the bomb and the dream of unlimited energy, AI will both destroy and enlighten us. To properly navigate these risks, I implore you to speak to researchers and entrepreneurs like Andrej Karpathy, Demis Hassabis, Ilya Sutskever, or Michael Truell who are actively building this technology. Andrej in particular is insightful, and in his opinion, AGI is still a decade away (see his interview with Dwarkesh).
Soares/Yudkowsky are pundits looking from the outside in. Their opening claim that "there is a 100% chance of AI destruction" is anti-rational and politically motivated. An intellectually honest statement would have been: "there is a non-zero chance of extinction from AI, here are the key risks we see, and here is how we might mitigate them." Instead they produced a lazy argument that doesn't come close to supporting their claim and instills fear. Their only concrete recommendation is to limit the size of GPU clusters to 8... This is absurd; think of how many applications would be affected by that and how draconian such a regulation would be. Yudkowsky is smart enough to know he is being intellectually dishonest, and it disgusts me.
So, watch 3blue1brown's series on the math, read Michael Nielsen's book, take Andrej's course, and look at Anthropic's research on mechanistic interpretability. Then, you will have the base of knowledge to communicate with the nuance and clarity that you consistently provide.
youtube
AI Moral Status
2025-11-08T20:3…
♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxDmo18c2vvdm1yQ7h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"respect"},
{"id":"ytc_UgwGAlQGZLoSE-kNHEN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx0pkUTj6ztRmqe7uZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzwMdCLcVnMTJGqkut4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwKSjjPDLtSP49LfhR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5TSj3WYtiZAakzZp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-pyFjAE_0WygVeJx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxMQUrhvcX5Pv4ODC14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwDtF7IlUnNsyMGMSJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwLyuIC0e67JM9LqrJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
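The "look up by comment ID" step above can be sketched in a few lines: parse the model's JSON array and index it by `id` to recover one comment's coding row. This is a minimal sketch, not the dashboard's actual implementation; only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, and `raw_response` is truncated to two entries for brevity.

```python
import json

# Raw LLM response, abridged to two of the entries shown above.
raw_response = """
[
{"id":"ytc_UgxDmo18c2vvdm1yQ7h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"respect"},
{"id":"ytc_UgzwMdCLcVnMTJGqkut4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

def lookup(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding row for one comment ID."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

coding = lookup(raw_response, "ytc_UgzwMdCLcVnMTJGqkut4AaABAg")
print(coding["responsibility"])  # company
```

A real pipeline would also need to handle the failure modes this sketch ignores: the model returning malformed JSON, duplicate IDs, or rows missing a dimension.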