Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
For a while now, I've been thinking of a potential way to test for consciousness. Sight is one of the most fundamental senses. Here, "sight" simply means "the ability to map out your surroundings, and to react accordingly." Most animals have functional eyes. Bats and others use hearing to map things. Point is, say we do not connect an AI to a visual-esque input (like a camera or a screen). We then can teach it to speak some language, say, English. Once it is fluent enough, we ask it what it sees. If it says "All I see is black," or something similar, that could be a good sign that it is not conscious, because that phrase could easily be generated from human speech. If, on the other hand, it says "I see nothing," that would be a sign that it is, in fact, conscious. Remember, we did not connect it to visual input, therefore a conscious being would literally see absolute nothingness. Not black, not white, nor anything anyone could ever imagine. If it sees absolutely nothing when stripped of input, I would call that consciousness.
youtube AI Moral Status 2023-08-21T23:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugx9WDRl8u9ekv3nZ_N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw10aWZLBpTCAulJW14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxk0XuJanYnvzCWqQV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzOJvG-qPFFQU2n5dl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzEJ-O-rCQm6T2Qir54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgySjs147PSnrdbRJYV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzTa4rOglO-az3DO1l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzXOkXam5fcLp_6YPp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxLAcuiVfX0ekhp3id4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwrgCd_LM57bJSbGBt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}]
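The raw response above is a JSON array of per-comment codes, keyed by comment `id`, but the captured output terminates with `)` instead of `]`, which makes a strict parse fail. A minimal sketch of how such a response could be parsed and looked up is below; the repair heuristic (replacing a single stray trailing `)` with `]`) and the `parse_codes` helper are assumptions for illustration, not part of the coding pipeline described here.

```python
import json

# Truncated example of a raw coding response; the real output ends with a
# stray ')' in place of the closing ']' (assumed to be capture damage).
raw = ('[{"id":"ytc_Ugx9WDRl8u9ekv3nZ_N4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"})')

def parse_codes(text):
    """Parse a coding response into {comment_id: codes}, repairing a
    single trailing ')' if present (hypothetical repair heuristic)."""
    text = text.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    return {row["id"]: row for row in json.loads(text)}

codes = parse_codes(raw)
print(codes["ytc_Ugx9WDRl8u9ekv3nZ_N4AaABAg"]["responsibility"])  # developer
```

If the comment shown on this page has no matching `id` in the parsed array, every dimension in the result table would fall back to "unclear", which is consistent with the table above.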