Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgyKskQJQ…` — We need in the Constitution to ban AI from being used on citizens and to ban spy…
- `ytr_UgzOXA3sm…` — @ people seeing it isnt the point, its that ai uses and steals art from unconsen…
- `ytc_Ugzes7gr4…` — Robot"I'm alive"😂 I don't fear these there's just something about machines that…
- `rdc_jdmdk8r` — This is awesomene! Maybe you should try with the new plugins that OpenAI just an…
- `rdc_nnjiujl` — Weren't these companions chatbots partially being sold as a "solution" to the ma…
- `ytc_UgwqyIwOI…` — AI art isn't even art / Like you might as well Google some art and say you made it…
- `ytr_UgxVYKenn…` — We understand that interacting with AI can sometimes feel eerie or unusual. If y…
- `ytc_UgymYIkg7…` — did anyone see in the first vid the robot girl looking at her? that was so crepp…
Comment
Gödel’s incompleteness theorems are about the limits of provability within formal mathematical systems — specifically systems capable of expressing arithmetic. They tell us that some true statements can’t be proven within those systems. That’s it. They say nothing about understanding, consciousness, or whether a machine can "know what it’s doing." Using them to argue that AI can’t be truly intelligent or conscious is simply a category error.
Some commenters claim the interviewer here “doesn’t understand Penrose,” but that’s precisely why the interview works — it depends on the interviewer not challenging him. If the interviewer truly understood the mathematics, the whole line of reasoning might fall apart under scrutiny. As it stands, the lack of pushback lets these arguments drift into a haze of vague mysticism.
It’s also surprising to see Gödel’s work invoked by Mr Penrose without any attempt to make it accessible to the audience. You could describe Gödel’s theorem like this: if you’re trying to determine the smartest person in a group, you might need someone even more intelligent from outside that group to judge the winner; and even then, you’re stuck if that judge belongs to a group of their own (was the judge the smartest of their group?).
Or like this: if you’re building with LEGO bricks, no matter how many you have, there will always be some shape you can’t build — unless you introduce a new kind of brick from outside. That’s the essence: no formal system can ever be fully complete using only its own tools.
That’s not an especially complex idea.
I think Mr Penrose's main argument rests on the fact that Gödel used a computational encoding (Gödel numbering) to prove his theorem, effectively turning mathematical statements into numbers and manipulations of numbers, something that resembles how computers process information. Penrose takes this encoding and concludes that because there are truths a formal (computational) system can’t prove, there must be something non-computational about human understanding.
But this is a huge leap. Just because the theorem is proved using a computational encoding doesn’t mean the limitation extends to computation as a whole, or that minds must operate outside computation. Gödel’s theorem shows that no single formal system is complete, not that computation itself is fundamentally insufficient for intelligence or understanding. That’s a philosophical position, not a mathematical conclusion.
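For reference, the claim behind the analogies above can be stated precisely. One standard formulation of the first incompleteness theorem (phrased informally in the comments, so only the shape of the statement is asserted here):

```latex
% First incompleteness theorem, one standard formulation:
% if $T$ is a consistent, effectively axiomatized theory that
% interprets basic arithmetic, then there is a sentence $G_T$
% (the Gödel sentence of $T$) that $T$ can neither prove nor refute:
T \nvdash G_T
\qquad\text{and}\qquad
T \nvdash \lnot G_T .
```

Note that $G_T$ depends on $T$: a stronger system can prove $G_T$, but then has an unprovable Gödel sentence of its own, which is exactly what the "smartest judge" analogy captures.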
youtube · AI Moral Status · 2025-05-02T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugxz50LXjYTtB1FzpyZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzilpeyGJ78d7mTMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyDn86fCh7oryqlsx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVPegwHESEt4i9hx14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMNkU-_jZtgNKppcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSf5rLer9dX9G4YO14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz0T_4sosBJgFxCh5p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-s8TE2fWq4rrFfAZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxe5UvkRnIeMlqiD654AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyX4-QD8KoeKmVxltZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}]
```
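A raw response like the one above is easy to sanity-check before it is stored. A minimal sketch: the `SCHEMA` label sets below are inferred only from the values visible in this response and table, so the real codebook may allow additional labels.

```python
import json

# Allowed labels per coding dimension (inferred from the response
# above; an assumption, not the full codebook).
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "approval", "mixed"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM coding response and check that every record
    carries an id plus one recognized label per dimension."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Example with one record copied from the response above:
raw = ('[{"id":"ytc_Ugxz50LXjYTtB1FzpyZ4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
print(len(validate_codings(raw)))  # prints 1
```

Rejecting malformed records at parse time keeps a bad batch from silently contaminating the coded dataset.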