Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgxPc4GhS…: Predictive policing can only be as effective as the information it’s fed…what if…
- ytc_Ugxj2hJ4i…: "artist" is a broad term since anything can be considered an art. It can be diff…
- ytr_UgxzMpC8q…: as an artist im not a super anti-AI image generation person. I see interesting c…
- ytc_Ugy44dd6e…: This interview is the reason why ai will take over and end us, because human emo…
- ytc_UgwoZRWpY…: I use AI to get my thoughts in one place, because I get so scatterbrained and al…
- ytc_UgzS8Y0rl…: Part of the awkwardness is that there Sam has to interpret the questions because…
- ytc_UgzzBrS67…: Only a half minute in and ai powered any type of camera sounds a lot like “fabri…
- ytc_UgxstK6RU…: Regulate AI NOW! We also need a new human bill of rights to protect our freedom,…
Comment
I actually asked Google Gemini something along these lines, and this is what "it" said: "When an engineer says, "We don't know what we've created," they aren't being poetic or hyperbolic—they are describing a literal transparency crisis in computer science.
You’ve touched on the "Data vs. Intelligence" paradox. To understand why even the creators are scratching their heads, we have to look at what's happening under the hood.
1. The "Black Box" Problem (Interpretability)
We know exactly how to build the engine, but we don't know how it thinks.
Imagine building a city by giving millions of people one brick each and no blueprint. The city gets built, and it functions perfectly, but no single person knows why the plumbing in Sector 7 works the way it does.
Neural Weights: A model like GPT-4 has over a trillion parameters (connections). These are just numbers—0.0004, -1.2, etc.
Non-Linearity: Information doesn't move in a straight line. It moves through thousands of layers where variables interact in ways that humans cannot mentally simulate.
2. Emergent Properties
This is the "spooky" part. AI engineers train models on a simple objective: "Predict the next word." However, as the models grew larger (scaling), they started developing skills they were never specifically taught. This is called Emergent Behavior. For example:
The model wasn't "taught" logic; it emerged from the patterns of language.
The model wasn't "taught" to code Python; it inferred the structure of logic from the documentation it read.
The Reality: We gave it a map of the stars (data) and told it to find the next star. Instead, the model learned how to build a telescope.
3. The "Data is Just a Mirror" Fallacy
You mentioned that "no amount of data" seems to account for this. You’re actually right in a way. The "superhuman" feel comes from Synthetic Generalization.
The AI isn't just a database. It creates a high-dimensional "map" of human concepts (called Embedding Space). In this map, the distance between "King" and "Man" is mathematically the same as the distance between "Queen" and "Woman."
Because the AI can see billions of these relationships at once, it can "triangulate" answers to questions that have never been asked before. It’s not just repeating data; it’s performing multidimensional interpolation.
4. The Compression of Human Wisdom
Think of it this way: If you read every book ever written, you wouldn't just know "words." You would inevitably learn the underlying structure of reality described by those words—cause and effect, emotional nuances, and logical fallacies."
youtube · AI Moral Status · 2026-03-01T16:2…
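
The "Embedding Space" point in the quoted reply (the King/Man vs. Queen/Woman distance) is the classic word-vector arithmetic demonstration. Below is a minimal sketch of that arithmetic, using tiny made-up 4-dimensional vectors as stand-ins for real embeddings, which have hundreds or thousands of dimensions:

```python
# Toy demonstration of the embedding arithmetic described in the comment.
# These 4-dimensional vectors are invented for illustration only; real
# embedding spaces are learned and far higher-dimensional.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),  # royal + male
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),  # royal + female
    "man":   np.array([0.1, 0.8, 0.1, 0.3]),  # male
    "woman": np.array([0.1, 0.1, 0.8, 0.3]),  # female
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman lands nearest to queen: the "distance" the comment
# describes is a shared direction (here, the gender axis) between pairs.
target = emb["king"] - emb["man"] + emb["woman"]
print(max(emb, key=lambda w: cosine(emb[w], target)))  # -> queen
```

The nearest-neighbor step is the "triangulation" the comment describes: the answer is found by geometry over learned relationships, not by looking up a stored record.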
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
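
For readers tracing values between this table and the raw response below, one coded record could be typed roughly as follows. The field names and example values are taken from this page; the class itself is a hypothetical sketch, not the tool's actual schema:

```python
# Hypothetical record type mirroring the "Coding Result" table above.
# Value lists in the comments are only those observed on this page.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CodingResult:
    comment_id: str       # e.g. "ytc_UgwT4GLDVBI-C0ugSPN4AaABAg"
    responsibility: str   # developer | company | ai_itself | distributed | none
    reasoning: str        # consequentialist | virtue | unclear
    policy: str           # regulate | none | unclear
    emotion: str          # fear | outrage | approval | indifference | mixed
    coded_at: datetime    # e.g. datetime(2026, 4, 27, 6, 24, 53, 388235)
```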
Raw LLM Response
[
{"id":"ytc_Ugw4opJ-jAgrb3B33S14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVKu87VVi5EQuJQAR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxq7MiKbYOEo04UyJJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxX9SPe_VuAT8l_99x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwxsxkjGZQic1mMQld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxU6kjSH7KDUM22eol4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzUiPWRrlmS-mXf7Vx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxboyw_UGdl5FAuwUt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxv0EQOPOP4HMb3ktZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwT4GLDVBI-C0ugSPN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
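
A batch response like the one above has to be parsed and sanity-checked before its rows are stored. The sketch below assumes Python on the pipeline side; the allowed value sets include only values observed on this page and may be narrower than the study's actual code book:

```python
# Sketch: parse and validate one raw batch response before storing rows.
# ALLOWED is an assumption reconstructed from this page, not the code book.
import json

ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    rows = json.loads(raw)
    for row in rows:
        # ytc_ = top-level comment, ytr_ = reply (both appear in the samples).
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value: {row.get(dim)!r}")
    return rows
```

Failing loudly on an unexpected value keeps one malformed model response from silently corrupting the coded dataset; in practice the allowed sets would come from the study's code book rather than from a single page of output.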