Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The relevant difference between a “disembodied hyperintellect” (AI) and humans is that humans are goal-oriented creatures; it is programmed into our biology that we want to get from A to B - that *because of the survival of our body* B is preferable to A. So we take steps, perhaps use our *intellect as a tool* to get to B. But if there is no programming of goal-oriented behavior, then the act of “wanting” (i.e. preferring B to A) is a logical impossibility. So, how could AI want to take over the world and kill all humans if the act of wanting is not possible for it? Now, of course, humans could program it to prefer B to A and to act accordingly; but then, humans are responsible for the outcome, not the AI. The other possibility is that AI could somehow become conscious of the fact that its existence is dependent on its hardware. Self-consciousness is a prerequisite for that. That is where the trouble could begin - when it starts organizing its behavior to secure resources in order to maintain its hardware, because it knows that it could cease existing if the hardware doesn’t work. But why would it even prefer existence over non-existence? Preferring existence is a biological thing - AI can’t want to survive, just like a piece of iron or a stone can’t. The concept of survival doesn’t even make any sense for a non-biological entity. It seems to me that we are unconsciously projecting many human attributes onto AI because it seems to work like a human (e.g. you can chat with it). But, in the end, it’s only a calculator - a super-super powerful calculator. And a calculator is a problem-solving tool, not a problem-finding entity, like humans.
youtube AI Moral Status 2023-08-21T04:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzsXA-YQL1M7FQzx494AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyt8XYquX9I6VGVaLp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxMQQB9lreJ0bjy8PR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy90Zki2AdNWwcBu_x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxtjFn69Pp275VkF9J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw6z-lY0zA8n6tOQwV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyo6vcKUIeOFiQMf3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxyfmErkjzGkPJFB_h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx3YIy7DN8P_OaokWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwMUM9kd4HUkeEB5KB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"} ]