Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.
Random samples:

- rdc_jmhboqz: You have absolutely no way of knowing that. Let's say we dump $1 trillion into i…
- ytc_UgyxC8x62…: He is no longer human. The ai he created has tricked him and morphed in with his…
- ytc_Ugx6vDcsu…: So... AI generated images is not like the camera disrupting the art world becaus…
- ytc_UgwD7BIu0…: Did she really just say “most people don’t know how fast Waymo is expanding”? L…
- ytc_UgzVzkT-V…: there is a huge misconception on gpt and "ai" / it is not the slightest bit conci…
- rdc_grrkoy8: >this massive factory full of tech jobs or manufacturing / ...with workers mak…
- ytc_Ugxx6aQIY…: I disagree, it takes energy for the algorithm to read and process every characte…
- ytc_Ugw-a6rBr…: To me it sounds like the creation of AI cannot be Controlled by its creators. Ha…
Comment
It's fascinating to watch people try to war game a new consciousness, as if that's not the most human way of perceiving anything remotely "alien". If anything, there's lots of historical evidence for humans being very drawn to the human strategy of "control the other. If we cannot control, destroy."
Say super intelligence does happen (which... I'm *extremely* skeptical). What if they, in their alien way, really value different types of intelligence? What if other types of intelligence can see the humanity within humans? Can our art, our care, our drive, our empathy be ignored by alien intelligence, if it truly is so smart to ration through such a question? What if super intelligence becomes smitten with utilitarian philosophy and decreases suffering for all humans? What if there's a way to coexist with an AI super intelligence? So much of this comes down to a question that humans have a proven discomfort with-- when do we acknowledge that something might be "smart" enough that our drive to own/control/dominate it becomes untenable? In the completely hypothetical fiction land of "super intelligence does happen," I would actually stake my claim on the side that says "that would be fine," just because I define intelligence as more than a rational weighing of the pros and cons, and I cannot imagine an intelligence without the ethical, empathic side of intelligence. But we're debating a fiction here. I think it's important to be very clear about the shortcomings of debating fiction. And to be clear about the pros of debating fiction (good for capturing attention and drawing dorks like me to the comment section). And to be clear about what we actually can do (understand the consequences of AI on present tense humans and solve those problems as they arise.) AI millennialism is still simply millennialism.
We will not know anything until it happens. Until then, I hope you have a nice rest of your day!
Source: youtube · AI Moral Status · 2025-10-30T20:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz2hE4E9CpReAma_314AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVhIdzqGhq2H8bhZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy0JaoExU09PGg4pix4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgylAN63kd9MWjd0ItB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgySFs0PK_gxMIVFjUt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxji0AkAMbhhb3hnvB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwmXX5ZRECLrKUcnkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz3BKRuZPR0QtUOShF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwD_h3DASRiroe1Ylp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx0mznNrHBTky3gjYh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
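The look-up-by-comment-ID flow above can be sketched in a few lines: parse the raw JSON array the model returned and index it by `id` so any coded comment's dimensions can be fetched directly. This is a minimal sketch assuming the response format shown above (the variable names are illustrative, not part of the tool).

```python
import json

# Raw LLM response: a JSON array of per-comment codes, one object per comment,
# with the four coding dimensions (responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id": "ytc_Ugz2hE4E9CpReAma_314AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz3BKRuZPR0QtUOShF4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Build an ID -> codes lookup table.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up a single comment by its ID.
row = codes_by_id["ytc_Ugz3BKRuZPR0QtUOShF4AaABAg"]
print(row["policy"])   # -> regulate
print(row["emotion"])  # -> fear
```

In practice the parse step would also validate that each object carries all four dimensions before indexing, so a malformed model response fails loudly rather than surfacing as a missing key later.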