Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect

- "AI will not replace human artists in terms of skill and final product. But the p…" (rdc_j0by377)
- "While watching this, the ad that came on was for an AI-driven website creation p…" (ytc_Ugwbe35UB…)
- "No, a robot can be a plumber or electrician and what if you have zero desire to …" (ytc_Ugz0KgMIT…)
- "Personally I think machines will be considered conscious on the day an AI tells …" (ytc_UgzSN3QL9…)
- "This is extremely similar to the Gun Control issue... The problem with a ban on…" (ytc_Ugw5YVOYS…)
- "I don't mind people using A.I. to make images as long as it's not actually used …" (ytc_Ugzwam_jW…)
- "I don't know man, I am using AI daily in my job, if you use it as a tool, give i…" (ytc_UgxF8JBtS…)
- "There is the danger, 'AI's can be superficially convincing' but wait very soon I…" (ytc_UgxIEouFz…)
Comment
Thank you, Hank, this is probably one of the most important videos on your channel. It's very important that more people are aware of how bad the AI situation is and I think Nerdfighteria in particular should know about it.
About your last point though: it's not necessary for you to believe LLMs can become superintelligent, not at all. The argument still holds regardless, although with slightly longer timelines. Even if intelligence is a difference in kind, not in scale, so was the language ability, so was the recursive reasoning needed to play chess. But both of those were unlocked by a few breakthroughs. That means that while current LLMs won't scale to superintelligence, we're still only a few fundamental breakthroughs away from it, so not 5 years but 20-50 years. Still, "everyone dies in 20-50 years" is really-really concerning??? Kinda even more concerning than even climate change? At least all we're facing from that one is extreme weather events and global famines, not literally making the whole planet uninhabitable forever. So we should treat superintelligence risks *at least* as seriously as climate change, probably even more.
(As for people who think humanity should die... what about all the animals? What about sentient octopuses or whatever might evolve after humans die off? If we build superintelligence, *all* life on Earth dies, with no chance of recovery even after billions of years. No one would want *that*, right?..)
Source: youtube · Video: AI Moral Status · Posted: 2025-10-31T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxZkbV0QqNLoGA-V2N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disappointment"},
{"id":"ytc_Ugyx5RFwQiXv7onQZM54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyFcyCwZ75XwUmXTrZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdrqRkAnt_BWjGJLZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcgYBQ_aPizDSnsCd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxqdvIz7BbCk66YYjx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2JKSUGJ_K4UBnOBB4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxhIig5dlw2Tv8W6lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzft-X9MYjX84hYv2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzO2l1KM3GDZCC_A-t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
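A raw batch response like the one above can be parsed and indexed by comment ID with a few lines of Python. This is a minimal sketch, not the tool's actual implementation: the allowed values per dimension are inferred only from the samples shown on this page, so the `ALLOWED` sets are an assumption standing in for the full code book.

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# ASSUMPTION: the real code book may define additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"approval", "disappointment", "fear", "indifference",
                "outrage", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index coded rows by comment ID.

    Raises ValueError if any row carries a value outside the code book,
    so malformed model output fails loudly instead of polluting the data.
    """
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        # Store the four dimensions keyed by ID, dropping the redundant "id" field.
        coded[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return coded
```

Looking up a single coded comment then reduces to a dictionary access, e.g. `parse_batch(raw)["ytc_Ugz2JKSUGJ_K4UBnOBB4AaABAg"]["emotion"]` for the batch shown above.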