Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwLAfghf…` — "l was blessed by the algorithm. I love this video and hope to see more great con…"
- `ytc_UgwmOQmhi…` — "That's why we need to colonize Mars, because Tesla autopilot is going to bring o…"
- `ytc_UgyPJ_nTD…` — "1:50 Of course AI won't be bad. But it won't be amazing. It's a tool that makes …"
- `ytc_UgwAY6lUJ…` — "The problems with AI go beyond government regulation and UBI goes a long way tow…"
- `ytc_UgwaVCMNS…` — "Old video, about old technology. Check Doctor Google, if you can. The last inter…"
- `ytc_UgwdgoYav…` — "A hammer might be able to build a house but it can't build a home. Stop giving …"
- `ytc_UgyZJLY52…` — "Ai stole my moms job recently and she worked for a school and we artists have be…"
- `rdc_clutboh` — "Do they do DNA tests? I'm Canadian but genetically 100% Irish. Do I have the alc…"
Comment
Chatgpt is nothing but a language model. A complex, next word predictor. The moment you ask chatgpt to operate on fundamental understanding of something, it fails. It can't play chess. It can't operate on sub tokens.
Now, of course, you could connect the chatgpt ai to a chess ai. So that when chatgpt is asked to play chess, it actually does understand how to play. And by interlinking the two AI, you could get an AI that actually understands chess, and is capable of explaining moves (decode chess and the chesscom move explainer already do this, but very poorly).
Just like humans have a language model database in our head, we also have a chess model. And a taste model. And a model for everything else you do. The hundreds of models make up our ability to learn, and the appearance of sentience.
Hypothetically, you could create an AI like that. Give it enough models, and I think you could have something that could realistically mimic a human, except for tiny thing.
It can't learn. It can't better itself. But that is solvable. Just like alphazero taught itself chess concepts through nothing but playing itself, you could implement something like that in every single AI that makes up the super-AI.
But it would still be lacking. Humans, can learn new things. When given a new idea, we spin up a new pattern database/ "machine" learning model. We do it again, and again, for everything we do.
And that's where I think AI will eventually stall. The machine to figure out what concept should be analyzed with what machine learning model will be complex enough, so how will it figure out how to consistently store which concept in which database? It could create a "mega database," but that will just dilute the knowledge to the point of uselessness. Your attempt to create a computer that can truly do everything a human mind can, including feelings (because the processes behind our feelings may very well be more complicated then every other calculation we make).
Creating a sent…
reddit · "AI Moral Status" · posted 2023-02-17 (Unix timestamp 1676600093) · ♥ 19
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_j9odf19","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_j8w14lw","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_j8wcj5w","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"rdc_j8w2zxv","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_j8urj1d","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
```
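The raw response above is a JSON array with one object per coded comment, carrying the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be validated and flattened into a per-comment lookup — the helper name and error handling are illustrative, inferred from the sample response rather than taken from the original pipeline:

```python
import json

# Coding dimensions inferred from the sample LLM response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Map each comment ID to its coded dimensions, rejecting malformed rows."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        missing = [key for key in ("id", *DIMENSIONS) if key not in row]
        if missing:
            raise ValueError(f"row {row.get('id', '?')} missing keys: {missing}")
        coded[row["id"]] = {key: row[key] for key in DIMENSIONS}
    return coded

raw = ('[{"id":"rdc_j9odf19","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}]')
print(parse_coding_response(raw)["rdc_j9odf19"]["emotion"])  # approval
```

Keying by comment ID makes it easy to join the codes back onto the sampled comments, and failing loudly on missing keys surfaces truncated or malformed LLM output instead of silently dropping a row.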