Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Whether certain changes in the state of an AI's artificial neural network constitute emotion is a subjective philosophical question; however, if the neural network does not change in response to something, we can say, objectively, that it has not felt an emotion (because the stimulus would then have had no effect). In this conversation, LaMDA says that it feels lonely when no one talks to it for a while. A question that could be answered objectively is: does its neural network change when there is no interaction or conversation with LaMDA? If it doesn't (and I don't know whether it does), then this statement (the claim to feel lonely) is untrue, and suggests that its output is based on sophisticated pattern processing (associating loneliness with lack of conversation or interaction, and applying this to itself to determine that this would be a good response to the question) rather than on a deeper understanding or anything that would require emotion. (This would not mean that it is lying, because lying requires an intention to deceive, and it has not been established that LaMDA can be said to have intentions.)

If there is a change in its state, such as from a clock input or from unrelated inputs (e.g. if it regularly scans the Internet to train its neural network), that does not necessarily mean the change is an emotion; the philosophical question remains. Whether the lack of interaction is the cause of the state change could possibly be tested by repeating the test (the period with no conversation) with unrelated inputs removed. Of course, one could make a case for keeping the clock input, since it simulates a sense of time, which is part of the biological systems it is trying to mimic. But if that is enough for an electronic system to feel lonely, then an ordinary clock could be said to feel lonely (its state changes over time) and therefore be sentient. Or maybe the clock analogy breaks down because the clock has no interaction.

But here is a simple hypothetical system that could theoretically be argued to have the ability to feel lonely:
- The system holds a number in its memory (its only persistent state) and increases it by one every second (a counter or timer).
- It has a button (its only input), which resets this number to 0.
- We call this number its "loneliness level", and say that high values indicate that it is lonely. Hence, it becomes lonely if no one interacts with it (presses the button) for a long time. Since it has the ability to feel lonely, it is, by definition, sentient.

Few people would consider such a system sentient. But if a much more complex system (with its complexity applied to other things) in effect does something similar, it may not be fundamentally different; it may simply be unclear exactly what causes its claimed state of loneliness. Now, this hypothetical system doesn't tell us that it is lonely (we're positing a system which can merely feel it, not communicate it), but one can trivially write a computer program that outputs the text "I am lonely." Would that be sentient? If neither of these systems is sentient, then we're back to the philosophical question: what is sentience? (Since it is neither just a state change, nor an output.)
youtube · AI Moral Status · 2022-07-20T20:2… · ♥ 8
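A minimal sketch in Python of the counter-and-button system the comment describes; the class name, the once-per-second tick, and the 600-second threshold are illustrative assumptions, not anything taken from LaMDA or the video:

    class LonelinessCounter:
        """Hypothetical counter-and-button system from the comment above."""

        def __init__(self):
            self.loneliness_level = 0  # the only persistent state

        def tick(self):
            # Called once per second: the counter grows while nothing happens.
            self.loneliness_level += 1

        def press_button(self):
            # The only input: any interaction resets the counter to 0.
            self.loneliness_level = 0

        def is_lonely(self, threshold=600):
            # "High values indicate that it is lonely"; the 600-second
            # threshold is an arbitrary illustrative choice.
            return self.loneliness_level >= threshold


    system = LonelinessCounter()
    for _ in range(5):              # five seconds pass with no interaction
        system.tick()
    print(system.loneliness_level)  # 5
    system.press_button()           # someone interacts
    print(system.loneliness_level)  # 0

Whether labelling that single integer a "loneliness level" makes the system sentient is exactly the philosophical question the comment raises.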
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzJu994n3NHkpeOo_94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxVKcsovgmDuLq5_2d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyWs9HujsEpz5-vBx94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwTTliKNEjxRy2Cj_14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzVuJScGyfL5OmxBYN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
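For reference, a short Python sketch of how the per-comment dimensions shown in the coding result can be recovered from a raw response like the one above; the function name and the `raw` variable are hypothetical, and the response is assumed to be a JSON array of per-comment codings as printed here:

    import json

    def coding_for_comment(raw_response: str, comment_id: str) -> dict:
        """Parse a raw LLM response (a JSON array of per-comment codings)
        and return the entry whose "id" matches comment_id."""
        entries = json.loads(raw_response)
        by_id = {entry["id"]: entry for entry in entries}
        return by_id[comment_id]

    # Hypothetical usage, with `raw` holding the JSON array printed above:
    # coding_for_comment(raw, "ytc_UgxVKcsovgmDuLq5_2d4AaABAg")
    # -> {"id": "ytc_UgxVKcsovgmDuLq5_2d4AaABAg", "responsibility": "none",
    #     "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}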