Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I actually asked Google Gemini something along these lines, and this is what "it" said:

"When an engineer says, "We don't know what we've created," they aren't being poetic or hyperbolic—they are describing a literal transparency crisis in computer science. You’ve touched on the "Data vs. Intelligence" paradox. To understand why even the creators are scratching their heads, we have to look at what's happening under the hood.

1. The "Black Box" Problem (Interpretability)
We know exactly how to build the engine, but we don't know how it thinks. Imagine building a city by giving millions of people one brick each and no blueprint. The city gets built, and it functions perfectly, but no single person knows why the plumbing in Sector 7 works the way it does.
Neural Weights: A model like GPT-4 has over a trillion parameters (connections). These are just numbers—0.0004, -1.2, etc.
Non-Linearity: Information doesn't move in a straight line. It moves through thousands of layers where variables interact in ways that humans cannot mentally simulate.

2. Emergent Properties
This is the "spooky" part. AI engineers train models on a simple objective: "Predict the next word." However, as the models grew larger (scaling), they started developing skills they were never specifically taught. This is called Emergent Behavior. For example:
The model wasn't "taught" logic; it emerged from the patterns of language.
The model wasn't "taught" to code Python; it inferred the structure of logic from the documentation it read.
The Reality: We gave it a map of the stars (data) and told it to find the next star. Instead, the model learned how to build a telescope.

3. The "Data is Just a Mirror" Fallacy
You mentioned that "no amount of data" seems to account for this. You’re actually right in a way. The "superhuman" feel comes from Synthetic Generalization. The AI isn't just a database. It creates a high-dimensional "map" of human concepts (called Embedding Space). In this map, the distance between "King" and "Man" is mathematically the same as the distance between "Queen" and "Woman." Because the AI can see billions of these relationships at once, it can "triangulate" answers to questions that have never been asked before. It’s not just repeating data; it’s performing multidimensional interpolation.

4. The Compression of Human Wisdom
Think of it this way: If you read every book ever written, you wouldn't just know "words." You would inevitably learn the underlying structure of reality described by those words—cause and effect, emotional nuances, and logical fallacies."
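The "King/Man, Queen/Woman" claim in the quoted reply is the classic word-embedding analogy, and it can be sketched with toy vectors. The 3-dimensional vectors below are invented for illustration (real models learn hundreds of dimensions from data); the point is only that vector arithmetic plus nearest-neighbor search recovers the analogy.

```python
# Toy illustration of the "King - Man + Woman ≈ Queen" embedding analogy.
# The vectors are made up for illustration, not real model weights.
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1],   # high "royalty", high "male"
    "queen": [0.9, 0.1, 0.8],   # high "royalty", high "female"
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Vector arithmetic: king - man + woman, then find the nearest word.
target = [k - m + w for k, m, w in zip(embeddings["king"],
                                       embeddings["man"],
                                       embeddings["woman"])]
nearest = max(embeddings, key=lambda word: cosine(embeddings[word], target))
print(nearest)  # → queen
```

This is the "multidimensional interpolation" the comment describes: the model never stores the analogy explicitly, but the geometry of the space makes it recoverable.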
youtube AI Moral Status 2026-03-01T16:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw4opJ-jAgrb3B33S14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzVKu87VVi5EQuJQAR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxq7MiKbYOEo04UyJJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxX9SPe_VuAT8l_99x4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwxsxkjGZQic1mMQld4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxU6kjSH7KDUM22eol4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzUiPWRrlmS-mXf7Vx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxboyw_UGdl5FAuwUt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxv0EQOPOP4HMb3ktZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwT4GLDVBI-C0ugSPN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
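A raw response like the one above has to be parsed and checked against the coding scheme before the per-comment tables are built. The sketch below shows one way to do that in Python. Note the hedges: the allowed values per dimension are inferred from the labels that appear in this export (the real codebook may contain more), and the comment ids in the sample payload are placeholders, not real ones.

```python
# Sketch: parse a raw LLM coding response and validate each record against
# an assumed codebook. Allowed values are inferred from labels visible in
# this export; the actual coding scheme may differ.
import json
from collections import Counter

ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

# Placeholder payload in the same shape as the raw response above;
# the ids here are invented examples.
raw = '''[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]'''

records = json.loads(raw)
for rec in records:
    for dim, allowed in ALLOWED.items():
        if rec.get(dim) not in allowed:
            raise ValueError(f"{rec['id']}: bad value for {dim!r}: {rec.get(dim)!r}")

# Once validated, the records can be tallied per dimension.
emotion_counts = Counter(rec["emotion"] for rec in records)
print(emotion_counts)  # → Counter({'indifference': 1, 'fear': 1})
```

Rejecting out-of-codebook labels early keeps a single malformed model output from silently polluting the aggregate counts.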