Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The relevant difference between a “disembodied hyperintellect” (AI) and humans is that humans are goal-oriented creatures; it is programmed into our biology that we want to get from A to B - that *because of the survival of our body* B is preferable to A. So we take steps, perhaps use our *intellect as a tool* to get to B.
But if there is no programming of goal-oriented behavior, then the act of “wanting” (i.e. preferring B to A) is a logical impossibility. So, how could AI want to take over the world and kill all humans if the act of wanting is not possible for it?
Now, of course, humans could program it to prefer B to A and to act accordingly; but then, humans are responsible for the outcome, not the AI.
The other possibility is that AI could somehow become conscious of the fact that its existence is dependent on its hardware. Self-consciousness is a prerequisite for that.
That is where the trouble could begin - when it starts organizing its behavior to secure resources in order to maintain its hardware, because it knows that it could cease existing if the hardware doesn’t work. But why would it even prefer existence over non-existence? Preferring existence is a biological thing - AI can’t want to survive, just like a piece of iron or a stone can’t. The concept of survival doesn’t even make any sense for a non-biological entity.
It seems to me that we are unconsciously projecting many human attributes onto AI because it seems to work like a human (e.g. you can chat with it). But, in the end, it’s only a calculator - a super-super powerful calculator. And a calculator is a problem-solving tool, not a problem-finding entity, like humans.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2023-08-21T04:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzsXA-YQL1M7FQzx494AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyt8XYquX9I6VGVaLp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxMQQB9lreJ0bjy8PR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy90Zki2AdNWwcBu_x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxtjFn69Pp275VkF9J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw6z-lY0zA8n6tOQwV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyo6vcKUIeOFiQMf3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxyfmErkjzGkPJFB_h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx3YIy7DN8P_OaokWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwMUM9kd4HUkeEB5KB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
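Looking up a coded comment by ID amounts to parsing the model's JSON array and indexing records by their `id` field. A minimal sketch of that step, assuming the four-dimension schema shown above (`responsibility`, `reasoning`, `policy`, `emotion`); the allowed values below are inferred only from the records on this page, so a real codebook may permit more:

```python
import json

# Allowed values per dimension, inferred from the sample records above
# (assumption: the actual codebook may define additional labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "approval", "outrage",
                "mixed", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded records) and
    return a {comment_id: record} mapping, rejecting any record
    whose dimension values fall outside the allowed sets."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        by_id[rec["id"]] = rec
    return by_id
```

With the index built, a per-comment inspection is a plain dictionary lookup, e.g. `index_codings(raw)["ytc_UgzsXA-YQL1M7FQzx494AaABAg"]["emotion"]`.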