Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
6:40 I find the theory that human language operates like a large language model (LLM) compelling—especially when it comes to how our brains learn and update meaning. In LLMs, words are just statistical associations: patterns trained on more patterns, all nested in context. And in humans? It’s not all that different—except for a few crucial additions. Human language isn’t objective either. It’s recursive, contextual, and self-referential *all the way down*.

Take the word “rizz.” It’s new slang, but we grasp it quickly by linking it to other words: charisma, confidence, seduction. But we don’t stop there. Those words bring in memory, emotion, tone, and even a felt sense of who says them, when, and how. That’s the difference—not just more associations, but sensory ones. Unlike LLMs, we don’t just map words to other words; we tie them to experience. I see the color red and associate it with the word “red” — not just because I learned the definition, but because I’ve seen it, felt it, even bled it. A blind person doesn’t form that same visual link—just as an LLM can’t. For them, “red” is a web of associations, but never a phenomenon.

And in a strange way, that makes language feel solipsistic. We all point to the same word, but what we point from differs wildly.

*So where does identity come in?* It’s what tunes our internal “model weights.” It’s the slow calibration of values and expectations based on every interaction—every sensory moment, every word we’ve heard, every role we’ve played. *Identity shapes which associations get reinforced and which don’t.* It’s the alignment layer—the filter through which we process language, emotion, memory, and behavior. If I identify as political party X, it’s not because I “chose” it out of logic alone. It’s the result of a long series of associations and reinforcements. My values—my alignment code—have been shaped through sensory experience, through the language I’ve heard and used, through the stories I’ve absorbed and internalized.

LLMs have pretraining and fine-tuning. We have childhood, adolescence, media, trauma, love, conversations. In the end, we might not be so different from LLMs. Just more embodied. More embedded. More felt. But still—a system of associations, shaped by recursive loops of input and reflection.

These are just my thoughts. Based on strange intuitions and new information. I just think it's interesting.
YouTube · AI Governance · 2025-06-16T17:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwZS1LORkYzRIcS4O14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyBP7fwqfdHe0Xvsnl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugyh1LR--urrRq-HSp14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzT4z7xhbCF8MX-96p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy1IiMOJrKZ3TjQfyp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugw-5DApa-fnuJgtwtB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxnuttQbXc48pCGqKl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxqbIwrEGLuk2nrQGd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwh-HF0JLA0SEMT9dR4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugy6yyd7NErXprxXjs54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"} ]