Raw LLM Responses
Inspect the exact model output behind any coded comment. Look a comment up by its ID, or pick one of the random samples below:
- "remember being told or hear it being said that theres someone out there for ever…" (ytc_UgwnWI0tJ…)
- "As a programming student, this is luddite pearl clutching. Art didn't die when t…" (ytc_UgyvnRxqc…)
- "He's the reason humans will probably go extinct in the future... All because he …" (ytc_UgjiuibLh…)
- "2 things: 1: AI is doing a great job 2. Men don't look for the details as like …" (ytc_UgyX8--oD…)
- "I believe facial recognition software could be a great tool, but ONLY if it is u…" (ytc_Ugy5P6ad8…)
- "If the ai didn't want to destroy all humans before, I think it might after this …" (ytc_Ugyf7IpL6…)
- "I want AI that will do my Laundry so that I can make art, not make art so I can …" (ytc_UgymupHig…)
- "The same shit happened with computer's....... Millions apon Millions lost jobs …" (ytc_UgwsiTdPr…)
Comment
> He's right in the conclusion, but his logic is terrible. Besides AIs don't "compute" the way he understands it. In fact they don't compute, it's why they are AIs and not programs.
>
> He should revisit his notions if he is going to go around giving his opinion publicly.
>
> The paradox of Godel is not a real paradox either, it's just a fallacy that it seems not too many people are able to recognize.
>
> If you derive a statement from a set of rules and you assume that deductions are true, then the statement must be true, no matter what it says. In other words those statements are really meaningless, so you can't use them to self-contradict. It's a philosophical concept that people have talked for centuries, it's a bit surprising people do not learn. Funnily, it's also why AIs are also meaningless and why Penrose is right. AIs do have simple rules to chose what the next token is which are randomized. But they aren't "computations" anymore than an anthill or a maze are computations. There is no logic behind, or reasoning, while there is in Godel's setup and that's usually what people mean by computation. There is zero truth in the next token at all, it's simply relational. But if the AI says: "this statement cannot be refuted by AI", it does not mean anything, it's just silly. Same for Godel but for different reasons.
Source: youtube · Video: AI Moral Status · 2025-06-25T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
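Each dimension is categorical, so a coded record can be sanity-checked before it is stored. The sketch below is illustrative only, not the pipeline's actual validator; it assumes the value sets observed in this section, and the full codebook may define more:

```python
# Category values observed in this section; the full codebook
# may allow additional values (an assumption, not a spec).
SCHEMA = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "fear", "outrage", "resignation"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    # IDs in this dataset appear to share the ytc_ prefix (YouTube comments).
    if not record.get("id", "").startswith("ytc_"):
        problems.append(f"unexpected id: {record.get('id')!r}")
    for dim, allowed in SCHEMA.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}={record.get(dim)!r} not in {sorted(allowed)}")
    return problems
```

Running `validate_coding` on the record in the table above returns an empty list.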
Raw LLM Response
```json
[
{"id":"ytc_UgwvFuy-zcNrexPQEWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxENx4v8DR-hKlmAV14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz653HW58eAdV9B-F54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyek2bghu7OyBrvL_t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwvc20FlnZum3lN7zR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwu3QsSobK05qPQoA54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxKPB-IeF6MKWEqPoN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxLalFwqHauef3zW814AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzRn8WJMHhk7M347414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzrv6G1sb9PaVoXFAN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
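Because the raw response is a plain JSON array, the look-up-by-comment-ID view can be reproduced offline. A minimal sketch, assuming the response above has been saved to a hypothetical raw_response.json file:

```python
import json

def index_codings(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for direct lookup."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# raw_response.json is a hypothetical dump of the response shown above.
with open("raw_response.json", encoding="utf-8") as f:
    by_id = index_codings(f.read())

coding = by_id["ytc_UgxKPB-IeF6MKWEqPoN4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # -> developer outrage
```

Looking up ytc_UgxKPB-IeF6MKWEqPoN4AaABAg recovers the same values shown in the Coding Result table above.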