Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think consciousness would require the ability to self-reflect to some degree. I think this would require, at the absolute bare minimum, neural networks with layers that feed back into each other with onboard learning. This would give these networks the ability to question their own choices (being aware of their own thoughts), learn from mistakes, question how others see them, etc. This would be orders of magnitude slower and more resource-hungry than LLMs for most tasks, however, so I doubt we'll see impressive networks like this for another decade or so.

For sapience, another part of the puzzle I think will be having true multi-modal networks. It's amazing how much LLMs can do purely with text, but a lot of the time when they perform badly it's because they can't reference text descriptions against images/video/audio.

Physical structures of the networks will also be important IMO. Sure, you can emulate anything in software if you have unlimited resources and time, but we don't. The timing of signals could end up being vitally important. Perhaps signal resonance is really important and does something analogous to an advanced version of Fourier optics for pattern recognition, which can be done in picoseconds with physical setups but takes billions of times longer in software.

As for creating the structure of a conscious brain with feelings etc., the only way we know for sure can work is evolution. You could have simulated organisms with minds live out their lives in a complex simulated environment, face challenges, and have survivors spread their genes. In real life this took quintillions of organisms and billions of years though, so even with extreme optimization this would certainly be a very slow approach.

For testing consciousness, feelings, etc., we could track activity levels in different regions of the network, like doing an fMRI in neuroscience but with much easier and more accurate access.
For evolved networks like those mentioned above, this should at minimum be able to detect whether they are honest about positive or negative feelings, because we'd be able to compare their reports against the states where they should have evolved to feel fear, pain, pleasure, etc.
youtube AI Moral Status 2023-08-22T12:5…
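The commenter's idea of reading activity levels out of a network, fMRI-style, is straightforward to prototype in software. Below is a minimal sketch under stated assumptions: a toy two-layer numpy network with random stand-in weights (all names and the architecture are hypothetical, not anything described in the comment or this page):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network; weights are random stand-ins (assumption:
# a real "evolved" network would supply its own parameters).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 2))

def forward_with_recording(x):
    """Run the network and record mean |activation| per layer,
    a crude software analogue of fMRI-style regional activity."""
    activity = {}
    h = np.tanh(x @ W1)
    activity["hidden"] = float(np.abs(h).mean())
    out = np.tanh(h @ W2)
    activity["output"] = float(np.abs(out).mean())
    return out, activity

x = rng.normal(size=(1, 8))
out, activity = forward_with_recording(x)
print(activity)  # per-layer mean |activation|; tanh keeps each value in [0, 1)
```

In a real study one would log these per-region traces across many inputs and compare them against conditions where a particular internal state is expected, which is the comparison the commenter proposes.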
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxpdRCAsrQyOriHhI14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw9fHX4R_j6YWF75PJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyU_f_X2fK9q55xoyh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw9TqwLwPEeoWbkAjB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwOG-rFFJGd4EMhg7Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgykR0cPkraInowh4BV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwM94IpKpsU7XZCTGF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwXGa3jw-m13Le8NAV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwuJfHJvIORtKXlYrl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwY5vAEaQTcxwQubll4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
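The coding result above is extracted from this raw response. A minimal sketch of how such a batched response might be parsed and indexed by comment id (the two entries are copied verbatim from the array above; the page does not state which id corresponds to this particular comment, so the id used in the lookup is an assumption for illustration):

```python
import json

# Two entries copied verbatim from the raw LLM response shown above.
raw_response = '''[
  {"id":"ytc_UgyU_f_X2fK9q55xoyh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgykR0cPkraInowh4BV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and index coded dimensions by comment id."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_codes(raw_response)
record = codes["ytc_UgyU_f_X2fK9q55xoyh4AaABAg"]
print(record["reasoning"], record["emotion"])  # deontological indifference
```

Indexing by id is one reasonable way a pipeline could map a batched response back to individual comments; the actual pipeline behind this page may do it differently.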