Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "@andreajohnson6030 yeah this is weird. I have to assume that Hank isn't familiar …" (ytr_Ugzr2YCzK…)
- "AI will lead to the downfall of society, nations could use it to start wars and …" (ytc_Ugw-vzput…)
- "@nirorit papers have been published where ai was able to create its own training…" (ytr_Ugz7kkmiy…)
- "While true, the only thing AI did with regard to deepfakes was make it much easi…" (rdc_nzmwo1g)
- "from this and previous videos I can tell the AI is learning and is getting bette…" (ytc_UgzdOtz3U…)
- "I think that most of the people that think AGI will happen by 2030 or any reason…" (ytc_UgyZ2mvFg…)
- "Ideally AI would be used to generate appropriate responses from NPCs instead of …" (rdc_lgst6tc)
- "This is a totalitarian regime. Communism mixed with data and AI is super dangerous…" (ytc_UgyjEY6V9…)
Comment
I think most of what you're saying here is roughly true...
I'm definitely not an AI doomer but I'm also not one to think that AI is going to change the world to the same degree that people financially invested in it are hoping.
I use AI every day for work, and it is helpful. But even in something like software engineering, which is almost perfectly structured for AI to be useful, it still is not really that great.
Even the code from the best models is mediocre.
And limitations around context, and the inability of the AI to actually learn as it goes beyond just expanding its context, are real limitations on its ability to be useful.
I think that the fundamental technology is not the correct technology.
In other words, I think this is a good stepping stone, but whatever technology brings about more flexible, intelligent AI is probably not going to be transformers. At this point that feels quite clear to me.
The self-grounding stuff like DeepSeek, and the deep-think type models where we let the model think without the text being displayed, all of those things are going to make it better.
And already have.
But fundamentally, if I'm working with a model, as soon as I clear its context because I'm running out of tokens, it has to relearn everything it had already learned about what I'm doing.
It's also slow, and extremely expensive to run.
I think we've taken an imperfect solution, transformer models, and pushed them nearly to their limits.
Whatever brings us to better, freely working AGI-type models is absolutely not going to be transformers, in my opinion.
It's going to be a novel technology or a radical transformation of the existing technology.
And it's definitely not going to be in a couple years.
youtube
AI Jobs
2025-10-23T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgzRVWgPjze54P6qKKN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzLfXwvlp_VDoZVH2p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_UgwLqWxwjQU2CbWzCGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwAOy2Z-b_I6fAwboV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwHzGAdg240r9Q5kRp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgzNS3bVwYkZMaUwaId4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugw2VbAQ5Scbv6c_rtB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
 {"id":"ytc_Ugx7b0IouzYcM_VVv1x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugxb-j3pjJQntAZTqhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugwy4DPilW4jFR3boFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
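The "look up by comment ID" step can be sketched in a few lines of Python: parse the raw LLM response as a JSON array and scan it for a matching `id`. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response above; the `lookup` helper is illustrative, not part of the tool, and only two records are copied here for brevity.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''[
 {"id": "ytc_UgzRVWgPjze54P6qKKN4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
 {"id": "ytc_UgwAOy2Z-b_I6fAwboV4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

def lookup(records: list, comment_id: str):
    """Return the coded record for a comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw_response)
coded = lookup(records, "ytc_UgwAOy2Z-b_I6fAwboV4AaABAg")
print(coded["emotion"])  # resignation
```

A linear scan is fine at this scale; an inspector holding many batches would likely build a `{id: record}` dict once and index into it instead.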