Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI will never be conscious. In a hypothetical situation, let's say you are the onlooker of an experiment being conducted on a man named Joseph. Joseph is in a closed room with a single door. The door has a mail slot in it that goes both ways, but we cannot see what is on the other side. On the other side, however, is a printer. By some marvel of science, this printer is able to perfectly imitate how Joseph's best friend, Thomas, would respond to any letter handed to it. Joseph is asked to write a letter to his friend Thomas. He hands the letter through the mail slot, and about an hour later a new note drops back through the slot. He reads the letter and writes a new response. After a number of letters he is asked if this is his friend Thomas responding. He says, "Yes, there is no way this could not be Thomas. This is exactly how he writes, and he knows things only Thomas would know." So, here is the question: is the printer truly Thomas? Despite being disconnected from Thomas in every way, and simply imitating him perfectly, is it Thomas? If Thomas were to die, would destroying the printer afterward be considered murdering Thomas? In my opinion, no, it would not be. I believe it is a machine. The same applies, for me, to the idea of AI. Sure, it can estimate perfectly what a person would say, or maybe even estimate what a simulated human's thoughts, actions, emotions, and personality would be. But it will never be sentient. It will always be a program built by us in the image of what sentience is, so of course it can seem sentient by our standards. But I really do not believe it will be anything more than simulation and a very well weighted dice roll that gives the outcome we hope to see.
youtube AI Moral Status 2023-08-24T02:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyqi9OM1213k5c0K5Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxYaG1zfcDy0yaN2FZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyQOnlwEbnGOZ0qGOh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxNDksqAJ4IVFzsJ9V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxDRt6jSrJazcLHf394AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxbL_6Bb6PmowzFK5J4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzD9PGKoX5Fu7NutNx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxsdd1fUj6_cc8gx5h4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwmIFUsP8OzaNED6Rt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgycPpcvLQUL0d8HdrB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
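The raw response above is a JSON array with one object per coded comment, each carrying the comment id plus the four coded dimensions. A minimal sketch of indexing such a batch by comment id for lookup; `codings_by_id` is an illustrative helper, not part of the tool, and the sample below uses two entries taken from the array above:

```python
import json

# Sample of the raw LLM batch response: a JSON array of coding objects.
raw = (
    '[{"id":"ytc_Ugyqi9OM1213k5c0K5Z4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgxYaG1zfcDy0yaN2FZ4AaABAg","responsibility":"user",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)

def codings_by_id(raw_json: str) -> dict:
    """Index the coded dimensions by comment id, dropping the id key itself."""
    return {
        row["id"]: {k: v for k, v in row.items() if k != "id"}
        for row in json.loads(raw_json)
    }

codes = codings_by_id(raw)
print(codes["ytc_Ugyqi9OM1213k5c0K5Z4AaABAg"]["emotion"])  # indifference
```

With this index, the per-comment table shown under "Coding Result" is just the entry for that comment's id.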