Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "You should definitely make a video on assignments because there is lot confusion…" — ytc_Ugwo3vurM…
- "Your quote in the description tells me everything about your ideology that I nee…" — ytc_UgzJZpP7c…
- "I've had lot's of conversations with ChatGPT that it flagged as suicidal. I told…" — ytc_Ugy75IFPv…
- "Well that explains why Chatgpt spouts liberal talking points unless you call it …" — ytc_UgzEIfiRG…
- "Jack Ma is nothing special he basically just took what china was already doing a…" — ytc_UgwsDyzF2…
- "AI steals human made art or real world environments and just puts it all togethe…" — ytc_UgytgLVfn…
- "If I understood what is happening in that video, none of it has to be AI to work…" — ytr_UgyLUQgkd…
- "But We could replace CEO with AI, Who will buy all the products, if the people d…" — ytc_UgyUh4_D_…
Comment
Penrose should have given examples that illustrate Gödel's incompleteness principle. Just stating it in a theoretical way isn't as helpful as it could be. Profound principles and their contradictions and limitations can be illustrated by metaphor. "All the hair in Seville is cut by the barber of Seville; but who cuts the hair of the barber of Seville?" It's both an unprovably true and an incomplete statement at the same time.
He missed this. Computers can crunch data on massive scales and apply algorithms, but, according to Penrose, they lack context and cannot exercise judgement in abstract thought, though it's not clear he is up with the latest developments. They should have workshopped the questions before the interview; failing to do that makes for a very boring interview. It's pretty conceited to imagine you're going to have a productive interview about matters you're almost utterly ignorant of.
Gödel's incompleteness theorems, proven by Kurt Gödel, demonstrate inherent limitations in formal systems used to express arithmetic. Essentially, they show that any sufficiently powerful formal system (one that can express basic arithmetic) will either be incomplete (meaning there are true statements it can't prove) or inconsistent (meaning it can prove both a statement and its negation).
The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.
The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.
However, neither interviewer nor interviewee (it's hard to tell who is who here anyway) has addressed contrary positions: in particular, that Geoffrey Hinton, the "Godfather of AI", says AI is already conscious. Why does he say that? Surely the interviewer should have investigated this as well, so he could have meaningfully challenged Penrose's statement? Sloppy and lazy from this channel, which probably did a cut-and-paste anyway and had nothing to do with the interview techniques. But I digress.
Hinton says we are allowing AI to become our agents and enabling it to make decisions so as to be more efficient. AI has then worked out that to make more effective decisions it needs more autonomy, and it will do things to get that. So humans can rapidly become sidelined as AI advances the ill-defined and potentially unlimited goals we have set for it. Hinton says we have only a few years to get on top of this before AI works out that to become more efficient it should just do things like wipe out humans, who are messy in nearly every respect. Humans can become roadkill under the almost limitless goals we have set for AI as ever more efficient agents of ourselves. And there will come a point where humans are unable to intervene and correct.
Wishing away the problem is not a fix. This is all about unintended consequences.
youtube
AI Moral Status
2025-08-07T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx9nSQil-JlRRwxC5R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzDkSUsFTafGIBe7-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTv1q1BdhuaCTBXqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgycwKcGR_ofAj9dIvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQLnSBL3A_A9dlQCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyrSO8WiNzn4n4BXdh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx43VAwV9TG88Jrj8p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgztA6VrgVbDK-kGBAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyrgR4QJJHkexMdKIV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhQQKkIXpdiHWOYN54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
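For reference, a response in the shape above can be parsed and sanity-checked in a few lines of Python. This is a minimal sketch: the set of allowed values per dimension below is inferred only from the codes visible in this output, not from the full codebook, and the `parse_coding` helper name is illustrative.

```python
import json

# Allowed values per coding dimension, inferred from the codes that appear
# in this raw response (assumption: the real codebook may define more).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"mixed", "indifference", "fear", "approval", "resignation"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response and index rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        # Keep only the coding dimensions, keyed by comment ID for lookup.
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]'
coded = parse_coding(raw)
print(coded["ytc_x"]["emotion"])  # mixed
```

Indexing by comment ID is what makes the "look up by comment ID" view above cheap: each coded row becomes a constant-time dictionary lookup, and malformed or out-of-vocabulary codes fail loudly at parse time rather than silently entering the dataset.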