Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples below.
- "Are there any other kinds of "AI artists"? Them calling themselves artists is de…" (ID: ytc_UgzTANRnW…)
- "Well the thing with media not covering people are rising up - it doesn't mean it…" (ID: ytc_UgxFr6vtZ…)
- "Alternate title: LavenderTowne strikes back / Great video. The Hayao Miyazaki sec…" (ID: ytc_UgygYRXvM…)
- "Is it just me or did ChatGPT misundertsant the question about the five people wh…" (ID: ytc_Ugz7t2JIA…)
- "There should be an international label mark in every thing where you have used A…" (ID: ytc_UgwBquGZr…)
- "Education. Trade schools in particular. / No, I'm not talking about teaching bur…" (ID: rdc_denps9k)
- "I saw something awhile ago for AI in California. It was asked which is worse: so…" (ID: ytc_Ugyhv1pUf…)
- "If the United States starts to use facial recognition (similar to China), it's b…" (ID: rdc_fvzuh9m)
Comment
I share plenty of his concerns, but I’m a bit surprised how many really smart people in this space don’t quite grasp that current AI models cannot have “human-like” qualia. They do not have multiple recursive (internal & external) sensory inputs all being integrated into cohesive perceptual frameworks to act in a world they developed in, and remain embedded and embodied in.
They don’t have persistent inner thought happening after they deliver a response to a prompt. They are sort of “dormant” until the next external prompt. They don’t have DMN (default mode network) activity where their mind wanders, recalls memories, reflects, etc.
Any subjective or “proto-subjective” awareness they have or will have will be fundamentally different.
I’m not saying that makes it less dangerous... I don’t know; could make it even more dangerous if it doesn’t understand or value the inner experience of an embodied agent in the world.
“Alignment” by control seems like it might be fundamentally impossible. Perhaps a better goal is how to develop super AI that would choose to align.
The primary problem is defining what we even mean by “alignment”, as it will inherently mean different things to different groups of people, cultures, religions, etc.
Humans have a tough time “aligning” with each other (and sometimes even aligning our own internal states) as it is. AI has to be able to recognize that ambiguous space of human contradiction not as a bug or problem to solve, but as a feature to navigate relationally.
If we can achieve a more Relational Alignment we might stand a better chance.
Source: youtube · Topic: AI Governance · 2025-09-07T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id": "ytc_UgyyuJRpTRRfe6tU7bN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyNw0DZYUEQvcmMl6B4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwanUTAdZxjNZ_nudp4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxGmM5F3ehbWNy0gpp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzugsPcK8K3BSFSfU14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVFpDjhxCEzuMQkox4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwCV2EP3gNQf-jQtVJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwKVw0G_x7R0rxbP6B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwe9eoKSDFFzwkr8-F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwTBA9zbuKtkxoY5S94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
```
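The "look up by comment ID" flow above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of per-comment codes like the one shown, and the `lookup_code` helper and the shortened sample payload are hypothetical.

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codes,
# shaped like the batch shown above (shortened to two records).
raw_response = '''[
  {"id": "ytc_UgwanUTAdZxjNZ_nudp4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxGmM5F3ehbWNy0gpp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]'''

def lookup_code(raw, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

# Fetch the code for the displayed comment and read one dimension.
code = lookup_code(raw_response, "ytc_UgwanUTAdZxjNZ_nudp4AaABAg")
print(code["emotion"])  # → mixed
```

A linear scan is enough for a ten-item batch; a real viewer would likely build a `{id: record}` dict once per response so repeated lookups stay O(1).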