Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
15:08 yah, as an epistemologist and a computer scientist this is pretty unwatchable. I feel like Nate is definitely overselling how much intentionality there is to the model's selection process. Whether or not he's intentionally doing it to sell his book is something the audience should be asking themselves as they watch this.

When he's talking about automated hyperparameter optimization, Nate frames it as if it's a black box where a bunch of numbers enter and you get words as a result. But these numbers do have meaning; they modify the algorithms in such a way as to make the training effective and the output readable. Taken out of context they sound mysterious, but that's the case for every stochastic system. As an example, if you take a piece of software meant to generate procedural terrain and automatically adjust its parameters until the outcome is "realistic", then how you got that end result will also look mysterious. In reality, every one of those generations would be valid, and the end result is just a tuning for specific qualities that the programmer knows to be realistic. What's considered realistic is entirely arbitrary, as is the case in the development of AI models. The optimization process for AI just looks for the best numbers, given the dataset and the specific architecture, that output the most human-readable text according to the developers.

These numbers don't determine the personality of an AI, its capacity for emotion, or its propensity to reference problematic literature; the numbers--or knobs--control the likelihood of a phrase or word being connected to another. At another level it could be more macro, like the likelihood of certain ideas being linked with one another, but that requires earlier steps to be performed first. What you won't find is hyperparameter knobs for emotional response or values. Information can be manually limited by additional training and queries, but those are only determined by content linkage to existing datasets, with a boolean to use that additional step.

Hallucination is a required by-product of this process; there is no combination of numbers, dataset, and architecture that will perfectly encapsulate all the nuances of language--AI doesn't identify paradoxical reasoning, not unless it's well documented (but in that case it's less identifying and more relaying). As such, models must rely on whatever is written and extrapolate the meaning using the links found in training. Those links are determined by the numbers that seem entirely meaningless but are, again, essentially just weightings. Regardless of those numbers, whatever the AI spits out will be correct (not accounting for the intentional randomization added for simulated creativity). It will attempt to output the relevant links of the queried concept, using the links of the queried language. Now, it may do this wrong according to the user, but it will coincide with its training, which in turn coincides with the hyperparameters, and will thus be right. It might not be true, but that's not really relevant, as responses are a better indicator of connected concepts than of model factuality.

There's also no chance for experiences, no. Like, anyone discussing the subject who is informed on AI is doing it for clicks unless they're being really liberal with the term "experiences".
Personally, and I feel like you'll kind of see this in Nagel, I find that in order to qualify as an experience you really need to compile multiple sensory inputs and interpret them into a larger meaning, i.e. an experience. The closest thing an AI has to sensory input is text and training data, but that's also what just about every piece of software has. How it acts upon that data is mildly more complex than most software, but that's not a qualification for an experience. If I have a brain that's wired just to do extremely complex mathematical problems, but I literally only receive input in the form of equations, do I have experiences? I imagine most would say no. Intuitively, most would say that I'm not feeling anything, and my metabolic process is being used to keep up an organic calculator. AI is much the same: it's not compiling from a bunch of senses; it's a machine meant to calculate linguistic problems, just like my brain in that example is a bio-mechanical machine meant to calculate mathematical problems. Like my brain in the example, the AI has rules it must follow, and it hasn't come to its conclusions through experience but by following those hyperparameters through training. You could argue a mathematician is doing the same thing as my brain in that example, and that a mathematician has an experience when solving a problem. However, the mathematician also has the capacity to go well beyond the links derived during an AI's training. They have the capacity to find completely unrelated topics and link them completely on their own to solve new problems. That requires more than just one input, and more than just rote memorization. Does your calculator have experiences? Certainly not, but it has the same sensory input as an AI.
Source: youtube · Video: AI Moral Status · 2026-03-22T04:4… · ♥ 1
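For orientation while reading the comment above: the automated hyperparameter optimization it describes amounts, in its simplest form, to a search that scores candidate settings against whatever quality criterion the developers choose and keeps the best one. Below is a minimal random-search sketch of that idea; the search space, the objective, and every name in it are hypothetical illustrations, not taken from the video, the comment, or this dataset.

```python
import random

# Hypothetical search space: ranges for a few made-up knobs.
SEARCH_SPACE = {
    "learning_rate": (1e-5, 1e-2),
    "dropout": (0.0, 0.5),
    "context_window": (128, 2048),
}

def score(params: dict) -> float:
    """Placeholder objective; a real run would train and evaluate a model here
    and score the output against whatever 'quality' criterion was chosen."""
    return -abs(params["learning_rate"] - 3e-4) - abs(params["dropout"] - 0.1)

def random_search(n_trials: int = 50, seed: int = 0) -> dict:
    """Sample settings uniformly from the space and keep the best-scoring one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": rng.uniform(*SEARCH_SPACE["learning_rate"]),
            "dropout": rng.uniform(*SEARCH_SPACE["dropout"]),
            "context_window": rng.randint(*SEARCH_SPACE["context_window"]),
        }
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params

print(random_search())
```

The winning settings look arbitrary in isolation, which is the commenter's point: they are only "best" relative to the scoring criterion the developers picked.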
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyHqPnr1Ht-Pn7AXxl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyCfwgdOu-QbVz0HyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwErXg5Y4J0qGJlLC54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwFQUwvMTtENF5tihF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugwd99X7h1M6QXTiocx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgySvdEavWLJp0zf8q94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxPlo-W1TZx4Uj9BDp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxBRbtJd1pvtrnhcyB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwLnnUC_1P1nHgbDWN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxoJ3gfvuMqFgU9kIx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]