Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
- "To have a self-driving vehicle, you need a rational and well-maintained road sys…" (ytc_UgzIkdQoK…)
- "The company appears to be a monopoly and as such needs to be highly regulated, i…" (rdc_h8l96zn)
- "How about any money saved from automation needs to first go into paying for a UB…" (ytc_UgxIWW2lx…)
- "Can they solve PhD math problems that have been solved but whose solution has no…" (ytr_UgwOQOO4X…)
- "Ok this was video was way too fun for an essay on AI consciousness, good job…" (ytc_UgwzIyxLi…)
- "Lol no shit because there are far fewer self driving vehicles on the road. What …" (ytr_Ugz8Z2oze…)
- "@Bamazon1990 that's BS. Just because a company does something doesn't automatica…" (ytr_UgzRGfo9s…)
- "The presentation is self contradictory. On one side it says 100% yearly turnov…" (ytc_UgxUZ0Eal…)
Comment
@Dan-dy8zp Small warning: I'm gonna hit you with a wall of text. Sorry about that.
Latent reasoning isn't really token-less, it's just CoT without fully running to output. The models are still feed-forward, and there is _no_ evidence provided by anyone making claims about LLM capability that CoT is in any way analogous to real reasoning or even real "chains of thought" - in fact the neurological evidence contradicts that conclusion pretty strongly. Language processing is one of the very last things your brain does with a thought you express (even if you may be using it without speaking for your 'inner voice,' you have the thoughts in your conscious awareness before you can convert them to language), and lesions on the area of your brain that handle language generation do not actually impair reasoning, they just make it difficult to express the results via language. Perhaps most tellingly here, humans have a _separate_ area for _comprehending_ language. It'd be unusual for that to evolve if the same biological hardware worked for both generation and comprehension. And it is _also_ not necessary for non-linguistic reasoning.
Ultimately, none of the models are tokenless because we genuinely don't know how to train one of them to do anything besides approximate the output of the sequential token probability function in the training data... because that Bayesian expectation value is the only way we have to automatically tell whether an output is "good" or not. Other "training methods" are only messing with the expectation values, ultimately - in operation, LLMs are still Markov processes and thus memoryless by mathematical definition.
Crucially, they are not _emulating_ reasoning, merely _approximating_ a function over a training window. It's called the universal approximation theorem. It doesn't extend to extrapolating outside the training window like we can, it doesn't extend to awareness, it doesn't extend to counterfactual reasoning, and it doesn't extend to mental models - it's just arbitrary function approximation given a large enough neural net. We've known they could do this since 1989. This isn't magic or alchemy, people like Soares are mystifying an understood process.
If LLMs were "reasoning" anything like we can, they wouldn't need CoT. They wouldn't need mountains of training data to tease the probability function into the right shape to get "10" as the most probable token after "5 + 5 = ". They'd be able to learn things like counting, addition, and multiplication from a few examples, and then be able to apply them to arbitrary inputs given enough time, like human children can. The reason they need big data is because they're not learning a rule, they're not being taught a concept they can apply - an LLM is just a dynamically weighted die with words on the sides, and big data is how you have to define the weights because low sample size isn't enough to get a robust probability function across all the available tokens. This is also why synthetic data fails to significantly improve scaling: it's just the same distribution as the training of the model producing it, repeated into new training data. It doesn't really add anything new.
Right now they're just throwing more hardware at the problem and hoping that will solve it all eventually, but institutional inertia and investing hype are very resistant to facts about diminishing returns (basically in every direction when it comes to existing models, including hardware). We're going to need a new architecture for actual AI, and that's not where all the money is going.
...Though, more to the point of the video, I don't really buy into the premise that a "superintelligence" is inherently dangerous. People have varying intelligence in varying fields that shifts over time, and it's very clear that differences in these things don't inevitably lead to the most intelligent ruling over everyone else or controlling society in some way. That idea is a naive extension of an already naive and self-aggrandizing mythology of meritocracy that's highly prevalent among the types of privileged people selling books about AI apocalypses or spending billions of dollars they don't have to boil water in the desert with the waste heat from AI slop generation. There's also no basis for the types of things these apocalypse scenarios often implicitly assume about a "superintelligent AI's" capabilities - usually somewhere between "something from the Xeelee Sequence" and "literally god," always with a general disregard for practical problems like... the laws of thermodynamics.
Platform: youtube
Video: AI Moral Status
Timestamp: 2025-10-31T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgxPAlFOZf5P9-yFXq14AaABAg.AOvb0EjLhEBAOvrd0LxVqS","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzvjGjIcomV9nHpuVp4AaABAg.AOvahie80oqAOvhTcD3LNG","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzGbjN8CVd00WSMReB4AaABAg.AOv_zD-WUInAOva25XYIQ0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy2qgHGv3OEnQWYM6x4AaABAg.AOv_qIdZzySAOwP-ztNrwP","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwKdcT5U_wNFMATTTR4AaABAg.AOv_9RxO8JkAOvf9CzDXi9","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytr_UgxxdDNt4r7N-76IYD94AaABAg.AOvZhyouJMzAOxdODzVrzJ","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxyifxnwY34q0k-lVh4AaABAg.AOvYrL2dXhZAOvZNoR1n4X","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugxd4oIylRKGfL8P14N4AaABAg.AOvXWYOgdu0AOvb40fhEtB","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx16JT_uPYdtqD6HCV4AaABAg.AOvXDY1Mjm0AOvXMORTirV","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytr_Ugyu6z4Pp0svDkQdioV4AaABAg.AOvWlkghdIeAOwCOUTiPnj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
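For reference, here is a minimal sketch of how a batched response like the one above could be parsed back into per-comment coding records. It assumes only the field names visible in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name and the fallback value are illustrative, not part of the original pipeline.

```python
import json

def parse_coding_response(raw: str) -> dict[str, dict]:
    """Parse a batched coding response (a JSON array of records) and
    index the coded dimensions by comment ID for lookup."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        coded[rec["id"]] = {
            # Fall back to "unclear" if the model omitted a dimension
            # (an assumption; the real pipeline may handle this differently).
            "responsibility": rec.get("responsibility", "unclear"),
            "reasoning": rec.get("reasoning", "unclear"),
            "policy": rec.get("policy", "unclear"),
            "emotion": rec.get("emotion", "unclear"),
        }
    return coded

# Usage: look up the coding for a single comment by its ID.
# coded = parse_coding_response(raw_llm_response)
# print(coded["ytr_UgxPAlFOZf5P9-yFXq14AaABAg.AOvb0EjLhEBAOvrd0LxVqS"])
```

Each entry in the returned mapping corresponds to one row of a "Coding Result" table like the one shown above.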