Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is the analogy I like to use. Imagine you have a healthy five-year-old child. Next to them, you have an adult who a) for whatever fictitious reason you want to imagine for the purposes of this analogy, can't imagine anything in their head, or "see" anything in their "mind's eye" even if they somehow did imagine it, has no internal monologue, and holds no model of the world outside themselves in their mind, and b) doesn't actually, functionally know anything at all about reality, *other* than having memorized what knowing things about reality *would look like* so well that in many instances they can persuade people they do.

Now, in many cases, the adult who knows nothing, thinks nothing, and can imagine nothing is still able to produce such fluent language about the world that they appear to know vastly more than the five-year-old child. And because of this, they're actually really handy to have around. For those specific cases where they're useful and able to function, we don't care that they don't know anything or can't introspect. They're giving us useful linguistic fluency about things we find beneficial, so... all good, right? And obviously the five-year-old child is less useful in this regard.

Now imagine you hand them both a piece of paper with a complicated set of criss-crossing lines on it (vectors, but they don't actually need to know what vectors are for this analogy to work) and ask them to trace in red one very specifically selected, labeled line on the paper. The five-year-old sees you point at the line they're supposed to trace, binds that referent in their mind, takes a red crayon, and traces the line. Done. An extremely simple task any intelligent agent should be capable of, right?
The adult who doesn't know anything, but is highly effective at pretending to, starts writing out and verbally explaining an elaborate series of mathematical methods for trying to identify the vector you already labeled in the picture and told them about, and then starts running code intended to operationalize those methods. They do this for minutes on end. Then they take a marker and draw a line on the paper while confidently saying, "I've identified the vector and drawn the line as instructed!" But they drew their line nowhere near the correct vector, and they don't realize it.

For the things that require an internal model of reality, one that is auditable for verification and updatable as reality changes due to ongoing tasks, LLMs are the adult in this analogy. And none of the LLMs we have today possess the five-year-old's ability to reliably, consistently, "just know" where the line is. Or, indeed, to "know" anything, really. LLMs will probably form some layer of future systems with more stateful and dynamic simulated metacognition, with lots of tools and other models wrapped around them, that will more closely resemble a combination of the five-year-old and the adult in this analogy. But we are not there yet, and even then, it's not entirely clear whether that can truly overcome this fundamental difference.
youtube AI Moral Status 2025-10-30T22:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgzUhVnD579w9AryyVJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzW5g9esTRdu17Kp914AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzQGQlqGjoGTNHal6d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugysf6A-oXWKHw4m1Lh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwGR9i5MpZHSHASEPd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugx-N0B7JS01wGfwz3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgxkQo9f55QhgUMT7hV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwxZUr602dA9DkHwwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyvwcJta1oj-z6TUQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_Ugweqfc1jkagDq1w7Cx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]
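The raw response above is a JSON array mapping each comment id to its four coded dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response can be parsed back into per-comment codes, using an excerpt of the actual array; the helper name `codes_for` is illustrative, not part of the coding tool:

```python
import json

# Excerpt of the raw LLM response shown above (first two entries).
raw = ('[{"id":"ytc_UgzUhVnD579w9AryyVJ4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
       '{"id":"ytc_UgzW5g9esTRdu17Kp914AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]')

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id (KeyError if absent)."""
    records = {r["id"]: r for r in json.loads(raw_json)}
    rec = records[comment_id]
    return {d: rec[d] for d in DIMENSIONS}

print(codes_for(raw, "ytc_UgzW5g9esTRdu17Kp914AaABAg"))
# → {'responsibility': 'none', 'reasoning': 'consequentialist',
#    'policy': 'unclear', 'emotion': 'indifference'}
```

Indexing by id first, rather than scanning the list per lookup, keeps repeated lookups cheap and surfaces any duplicate ids in the model output early.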