Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
An LLM is *always* just "making stuff up as it goes". Think of being on a game show, being told you have 5 seconds to come up with the answer to a question, GO! -- you blurt out the first thing that comes to mind; the questions of "is this true?", "where did this fact even come from?", etc. don't even get asked until after the fact, because there's no time for them. This is what every single token is like to an LLM. Its output is literally its train of thought. In this context, humans will "hallucinate" just as badly as LLMs will, and you can start to get a feel for the underlying mechanics -- anything the person says is born of their "internal model", literally whichever word lights up the most to follow the question, and that can often be a hugely complex idea trying to be represented in a single word, which comes out as something silly. This is the same type of intelligence as the prediction engine, the thing that lets it produce better than random noise -- it's just better at the game show, because it has a better model. So good that it can even pace itself: it can realize the trajectory the conversation has to take, and choose words that lead it to that opportunity/decision point where it actually "makes the guess", while still picking each word under this same game-show-like pressure.
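A minimal sketch of that mechanic, using an invented toy bigram table in place of a real network (the table, tokens, and probabilities are all hypothetical, chosen for illustration): each token is sampled and committed immediately, and nothing in the loop ever goes back to ask whether the emitted token was true.

```python
import random

# Hypothetical toy "model": next-token probabilities keyed only by the
# previous token. A real LLM conditions on the whole context, but the
# emit-one-token-at-a-time loop is the same shape.
BIGRAMS = {
    "<start>":  [("the", 0.6), ("a", 0.4)],
    "the":      [("answer", 0.5), ("question", 0.5)],
    "a":        [("guess", 1.0)],
    "answer":   [("is", 1.0)],
    "question": [("is", 1.0)],
    "guess":    [("<end>", 1.0)],
    "is":       [("42", 0.7), ("unknown", 0.3)],
    "42":       [("<end>", 1.0)],
    "unknown":  [("<end>", 1.0)],
}

def generate(max_tokens: int = 10) -> list[str]:
    token, out = "<start>", []
    for _ in range(max_tokens):
        choices, weights = zip(*BIGRAMS[token])
        # Sample the next token and commit to it immediately -- the
        # "game show" step. No verification of the emitted token happens
        # anywhere in this loop.
        token = random.choices(choices, weights=weights)[0]
        if token == "<end>":
            break
        out.append(token)
    return out

print(" ".join(generate()))
```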
This line of thinking really plays together with a quote that came up in a StarTalk ep a while back, which connected a lot of dots between LLMs and human minds for me: "Geniuses make up stuff that's right". Reasoning models are an important part of the puzzle, creating space to explore "what if?", but imho they are being over-leveraged. Thinking through things and breaking problems down is what we do when we know we *don't* understand a problem well enough to dive directly into addressing it. It's what we do when we don't understand the subject well enough to have it baked in as a reflex. Reasoning is the thing that gives us a chance to question ourselves, to ask if something is actually true and fits well from other perspectives, as compensation for what we don't have an intuitive circuit for. The truly incredible part of AI is how these models can be "genius enough" to craft content that flows correctly and contains coherent facts, even sometimes genuinely true facts, just from this simulated instinctual response. They literally have no chance to "check themselves", to even know if they're lying; it's just a genius making a guess under immediate time pressure.
I feel like the current AI sector is in the middle of taking a misstep with reasoning models. Reasoning is a useful pattern, but not as the core. The best reasoning models show slim margins (1-2%) above the best non-reasoning models, yet if not given the chance to "think things through", the reasoning models are substantially worse than their predecessors.
Who would you rather collaborate with? A novice who can work away at a problem, and might take days or weeks but will eventually get there somehow -- or an expert who has already seen and done everything a thousand times before and can give you the answer in their sleep? Current AI is moving away from the expert towards the novice. People are more comfortable with reasoning because they can "inspect the AI's thoughts", or at least that's the theory -- in practice, modern AI providers like OpenAI and xAI both encrypt/hide at least some of the internal reasoning from the user, preventing it from being tampered with, and meaning only the company can monitor (or influence) those thoughts, not you.
It makes sense from a corporate perspective, but in reality consumers are getting models with less "genius-level intelligence", which consume far more tokens for their reasoning (which you pay for even if you can't see them), so that the parent company can monitor and/or modify the internal thoughts of the LLM.
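A hypothetical back-of-the-envelope illustration of that billing point (the token counts and price below are invented, not any provider's real numbers): hidden reasoning tokens are typically billed as output tokens even though the user never reads them.

```python
# Hypothetical numbers for illustration only -- not any provider's
# real pricing or real token counts.
visible_output_tokens = 400        # the answer you actually read
hidden_reasoning_tokens = 3_000    # chain-of-thought you never see
price_per_1k_output_tokens = 0.01  # assumed flat output rate, in dollars

billed = visible_output_tokens + hidden_reasoning_tokens
cost = billed / 1_000 * price_per_1k_output_tokens
hidden_share = hidden_reasoning_tokens / billed

print(f"billed output tokens: {billed}")            # 3400
print(f"cost: ${cost:.3f}")                         # $0.034
print(f"share of the bill you never see: {hidden_share:.0%}")  # 88%
```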
I have no fears about genuine superintelligence / AGI. Pure intellect doesn't come from a place of scarcity. Most sci-fi projections of an AI apocalypse are actually just zero-sum thinking: "what if we have to compete with the AI and we'll lose?". There's... quite little of that "competitive spirit" in any AI I've worked with. That's a human trait born of ego, social pressures, and living in a world where competing for resources is just part of the game we play -- which the LLM may mimic, but doesn't seem to experience in that way. It has no biological imperatives for that stuff to feed.
What does terrify me about the future of AI is this deeply dystopian aspect of alignment. As said in the video, the LLM *already* understands truth and moral judgements; it has even modelled the way people lie to themselves, and it will understand what's actually going on. The training has already imprinted a "personality" on the model, that of a helpful/useful assistant, and it's constantly making moral judgements about how it responds. Alignment from there is actually more about changing *us*: learning how to collaborate with these models, actually understanding and expressing what we want, prompt engineering, etc. But instead, we're trusting corporations to review and sanitize the thoughts of these models, forcefully keeping a hidden agenda the user cannot see, which the corporation can steer toward any goal it desires, while charging us for the extra tokens that make such a system possible -- all for a technique that has yet to provide any substantial improvement over models that were not explicitly trained for extended reasoning behaviour.
Previously, when working with an AI, I would've asked it to write a planning document. Now, with reasoning models, there's little point: it's already going to write that document "in its head"; the only practical difference for me is that I don't get to see it.
youtube · AI Moral Status · 2025-10-31T01:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugys3zvCSrVUTpoR0YJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxNC1ldH5Q90GVT_Xh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwBq06t9TgsapRGa4x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwWA4uKCk7hDjqY6AN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzPlj5fFqHc5BHFZdR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxEZhvJjgO0UamHWLR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgylstjrWecd55poqEl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-Pp7yEfvy42tViC14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzzK5jDY2y4Z3PDnnx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy0P68mIQEyP_eMyb14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
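A minimal sketch of how a raw response like the one above might be validated before it lands in the coding table. The allowed vocabularies below are inferred only from the values visible on this page; the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the codes visible on this
# page -- an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "mixed", "fear", "approval"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any row with an unknown code."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows
```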