Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The idea that AI art is like photography even in its early stages of simply capt…
ytr_UgzKnGOKc…
Test ai and it always runs back its own biased data. An automated aggregate inno…
ytc_Ugx907P2H…
How smart are any of you that believe in artificial intelligence, doing anything…
ytc_UgyhFQFUY…
i am sorry that this happened to him and to you.
ai is fucking horrific bc of t…
rdc_nnjoa2h
@x000x0x That construction has been common as long as I can remember, and I'm n…
ytr_UgxE85G5P…
I askel the AI to make me a weight loss diet. It took it about 5 seconds to make…
ytc_UgxbmRGBN…
In the world of that man there's no marxism
Everything that he says he's afraid…
ytc_Ugxmfr-yt…
From a third person perspective this really just looks like a bunch of people sc…
ytr_Ugw8dM_lX…
Comment
I feel like you're putting far too much emphasis on the question of whether AI can be conscious. The important question is how to solve the alignment problem, and an AI intelligence doesn't need to have something that we consider to be "consciousness" in order to greatly outmatch our cognitive abilities. Whether its method of understanding and making decisions resembles our own is irrelevant when those abilities make it capable of destroying us. Like Edsger Dijkstra said, "The question of whether machines can think is about as relevant as the question of whether submarines can swim." The important thing is that it's extremely capable of moving through the water, not how we define its method of doing so.
You're talking about consciousness as if it's a real thing, but it might be no more possible to quantify and empirically verify the existence of consciousness than it is to do the same with the concept of "free will". It feels like a real thing to us, but it might be nothing more than a convenient mental model that we use to understand our own intelligences, no more or less correct than many other possible models, with no intrinsic state of existing or not. We only value consciousness because we perceive it to be part of how our own intelligence works. Other models may be just as valid.
youtube
AI Moral Status
2023-08-22T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
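A coded record like the one above can be sanity-checked against the category values that actually appear in this excerpt. A minimal sketch in Python — note that `OBSERVED_VALUES` covers only the values visible here, not the full codebook, and `check_record` is a hypothetical helper:

```python
# Value sets observed in this excerpt only; the full codebook
# presumably defines additional categories (assumption).
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "mixed", "outrage"},
}

def check_record(rec: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed sets."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if rec.get(dim) not in allowed]

# The coding result shown in the table above.
example = {"responsibility": "none", "reasoning": "consequentialist",
           "policy": "none", "emotion": "indifference"}
print(check_record(example))  # → []
```

A non-empty return value flags a record whose category label was not seen in this batch, which is a cheap way to catch model drift or typos in batched output.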
Raw LLM Response
[{"id":"ytc_Ugw6w8dyYLib88hVXnh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgylREPt4i2otOdM1uN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgweOBXZUhmhtZmPLRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGA0OItV7Z7xwcZpp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkEh0-DguBqt-vdOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8XQ3FBg4gm58lQSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxBSE-AxInAcIAsp94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzksPd2c7WQqG_BrFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxxGLIquoWHRCYerGd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyN0x0h6eJijj09epN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
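The raw response above is a JSON array of per-comment records, which supports the "look up by comment ID" view. A minimal parsing sketch in Python, assuming the schema shown (the helper name and the two-record sample are illustrative, not the full batch):

```python
import json

# Abbreviated sample of a raw batched coding response; each record is
# keyed by comment id (prefixes such as "ytc_" mark the source platform).
raw_response = """[
 {"id": "ytc_Ugw6w8dyYLib88hVXnh4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
 {"id": "ytc_UgzkEh0-DguBqt-vdOt4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index its records by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgzkEh0-DguBqt-vdOt4AaABAg"]["responsibility"])  # → developer
```

Indexing by id makes the per-comment lookup a constant-time dictionary access rather than a scan of the array.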