Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "As a human, genuine, creatively and technically skilled artist currently in art …" (ytc_UgxvtTMeO…)
- "You say that like we're not going to rush to automate all of those things the mo…" (ytr_Ugw5zT_gA…)
- "Great example of people not understanding how to use AI 🤦🏻♂️ it’s answering lik…" (ytc_UgxOKRl2F…)
- "i remember this one anti-AI animation where someone spends hours drawing a cute …" (ytc_UgzyisuCA…)
- "7:49 honesty this is insane. As you said the artstyle is meant to be comforting …" (ytc_Ugxl0kt6B…)
- "Honestly I’d find it flattering. I don’t understand why AI chats get a bad rep, …" (ytc_Ugz91RRi1…)
- "All generative AI out right now has in one way or another been trained by stolen…" (ytr_UgwfejHoJ…)
- "Not me getting angry on behalf of the AI...got me over here like 📢 "Get off that…" (ytc_UgzZoaAMh…)
Comment
So good, thanks for creating this. It made me change some of my views or at least doubt my correctness, I was telling everyone who asks that hallucinations are similar to random shapes that emerge in the darkness when we close our eyes or random dreams and if our brains couldn't solve this after all this evolution then LLMs won't either. Hinton here reframed the process from ghosts in the network to a result of imperfect memory. I'm honestly not sure now which approach I'm leaning more towards.
On the other hand, I've been explaining why an LLM could write an abstract math paper correctly but tell you to walk to the carwash to wash your car, because it's superhuman at improvisation but doesn't really do any actual reasoning (think: what's the difference in thinking process between an amazing poet vs an innovative thinker?). I think I'll keep my opinion on this. Chain of thought feels like a hacky attempt to employ the LLMs superior improvisation to pretend reasoning, surely we can do better. Also the subjective experience Hinton described is imo simply a result of it being superhuman in a few specific narrow fields (language/words/images), and that superiority fools us into believing it exhibits subjective contextual experience beyond the data sequences it generates but it's an illusion otherwise it would be fully consistent at it and not break at random.
But as Chuck said, all of these perceived limitations need a "yet" at the end.
Source: youtube | Video: AI Moral Status | Posted: 2026-03-02T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugzvhs84hc_y6xWJfhN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGUdPGxmxduEzxRb14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyQhYMB7NU0MrYcRJN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyX-CwQRkiLLiY5UMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQ_0-FJGbduoCu10N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6DvSQUqfMa0OHzlV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyi9IzYehAz8EJDi0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxwQPz03NWDOaIcvYZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkzMNz84LgfGWdfk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVAqzXzRP9nZuDDrt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
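The lookup-by-comment-ID view above can be sketched in a few lines: the raw model response is a JSON array of per-comment records, so indexing it by `id` gives direct access to any coded comment. This is a minimal sketch, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, while `index_by_comment_id` and the full comment IDs in the sample data are hypothetical (the real IDs are truncated in this view).

```python
import json

# Hypothetical raw LLM output in the same shape as the response above:
# a JSON array of per-comment coding records. The IDs here are stand-ins,
# since the real comment IDs are truncated in this view.
raw_response = """
[
  {"id": "ytc_AAA", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_BBB", "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_BBB"]["emotion"])  # -> fear
```

A dict keyed by comment ID makes the "look up by comment ID" operation O(1) per query, at the cost of one pass over the array when the response is loaded.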