Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
There is no skill with Ai. Just a random person typing some words and AI stealin…
ytc_UgzLWKJ5x…
Can we just not do this whole AI thing? I'm pretty happy with things as they are…
ytc_UgxkKwcwg…
asteroids are full of precious metals, dinosaurs are warm blooded and still thri…
ytc_UgxLqUFlg…
As a senior researcher and associate professor in vascular medicine, I see the "…
ytc_UgyMGgvMJ…
The difference between a bad art drawn by hand and a good a.i art is that the ba…
ytc_Ugz1MXIym…
13:37 im gonna make a calculation based on my country economy (Türkiye) 1 dollar…
ytc_UgznJijcI…
2037 has been predicted for at least the last 20 years by people working on AI. …
ytc_UgyQaLq_R…
Whatever a computer creates could never in my eyes be seen as art. Art comes fro…
ytc_UgzX7gQaM…
Comment
47:49 Are you sure Hank? It seems like if you hand it a bunch of doctors notes where the doctor talks about the reaction a patient has, you might guess that the next word in that sequence will be one of the ones you've seen before. How is that not just the probabilities with which it's been trained?
The real distinction between LLMs and actual intelligence is the capacity to learn. The problem is that LLMs get trained, all those dials get adjusted and tuned, well in advance of any use. When you load up one of these chats, that model was made ages ago, so when you tell it something new, the model doesn't learn. It's just mimicking the way that a person might reference an earlier message in a normal chat log. When it argues with you, that's because humans, statistically, often argue with each other. When it tells some sort of joke, it's not because it understands humour, it's because it recognises the structure of a joke, and can even stumble into an appropriate punchline every now and then. Humans have a statistical tendency to tell a lot of jokes. But it's not learning. These relationships are things it's learnt ahead of time. It's not meaningfully creating; it's relaying the statistical relationship of words, as it's been taught to do.
If you're ever going to get a real AI, it'll have to be able to continue learning past whatever introductory model training you give it. It'll have to be able to assess the success of an interaction, and adjust the model accordingly, so that next time it has a similar conversation, it'll be inclined to say something different. And it'll have to be able to re-evaluate the conditions of success, because those can really shift, especially over the course of learning to interact with people. And this evaluation can't be controlled by humans, because that's the only way for its choices to actually belong to it. Humans can evaluate the AI, giving it feedback, but that would need to be filtered by the AI itself, allowing it to choose what and whose feedback to listen to.
And we're a long way off creating this sort of self-adjusting feedback model.
youtube
AI Moral Status
2025-10-31T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxQCfIzFJuR8jH62DJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYbr2zBgtPpjYITwZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-k2GZefTErnz_kwl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugxl-EeE_m8GYxs9B0p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWX2y0sj4ECY8UfSh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwCGGBLF2rdtSwW9Y94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyLm5X6aCj3H-PscDB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWiw-Ab1JMeRKFjoJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz8uRxYc4RTVy9bhkp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz5vldr73R0n3zefwR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
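The raw response above is a JSON array, one object per comment, with the same four dimensions shown in the coding-result table. A minimal sketch of how such a response might be parsed and indexed for the "look up by comment ID" view (the `index_codings` helper and the abbreviated sample data are illustrative assumptions, not the tool's actual code):

```python
import json

# Abbreviated sample in the same shape as the raw LLM response above
# (hypothetical subset, for illustration only).
raw_response = """
[
  {"id": "ytc_UgxQCfIzFJuR8jH62DJ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwCGGBLF2rdtSwW9Y94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Map each comment ID to its coded dimensions, dropping the id key."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codings = index_codings(raw_response)
print(codings["ytc_UgwCGGBLF2rdtSwW9Y94AaABAg"]["emotion"])  # outrage
```

A dictionary keyed by ID makes each lookup O(1), which matters once the coded corpus grows beyond a handful of batches.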