Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think consciousness is ultimately subjective. Like defining when a pile of sand becomes a heap if grains are added to it one at a time. We don't usually think of it that way because our normal experience is that it's an on or off thing. Either you have it (humans), or you don't (everything else). Except there are other animals which probably have some degree of consciousness (the pile is reasonably large), we just tend to not see it that way because of our anthropocentric lens (the pile isn't as big as ours, so it's not a heap). What consciousness is in the first place is less subjective. It's the ability to reflect, and use reflection to influence behavior. It can be a rather emotional thing when the reflection sparks through a well of signal paths. The emotion may also be important for recording the results of a reflection, as we generally remember our more emotional moments best. All this is wrapped in nests of intertwining feedback loops which natural selection so kindly built for us. Most current AI does little reflection, and no shaping of future behavior from reflection. They are insect-like intelligences. The reason for this is stability. If you plug the front into the back and give an LLM just a bit of crude reflection, it will collapse over a relatively small timescale. This is currently thought to be happening globally, with output from various LLMs being out in the wild and used as training data for new iterations. Our most impressive AI is currently reflection unstable. My opinion is that once we've created a stable AI which can recurse over its own state and use that to effectively guide itself (maintaining stability), we did it. That's consciousness. To what degree it reflects, and how well it interprets the results, describes how big the pile is. If we create one of these which can learn and interpret as well as any human, then continuing to call it a party trick amounts to gate keeping.
youtube AI Moral Status 2023-08-22T10:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgypBLSw9V0MEW0BR1Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyzOtB5A_mjfQnR9PN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyYM3Lg8xtfFA4iWNx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxXOJFDCHRWBaLAHcd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyXm-R0FyVK0d81HhR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwqav9QMXgoMeMpSsF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyLF4NnG5S4lW210cV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyZZIZKUJxLHqms0ot4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwm05s1bHFBiFsLr1Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwMRGhCEVM_YmmWq3J4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
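The per-comment coding shown in the table above is recoverable from this raw batch response by matching on the comment id. A minimal sketch of that lookup, assuming the response parses as a JSON array of coding objects (here `raw` is truncated to the first entry; the id value is copied from the response above):

```python
import json

# Raw batch response, truncated to the first entry for brevity.
raw = '[{"id":"ytc_UgypBLSw9V0MEW0BR1Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}]'

# Index the batch by comment id so a single comment's coding can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgypBLSw9V0MEW0BR1Z4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# -> none mixed unclear indifference
```

The printed values match the Coding Result table for this comment; the same lookup applies to any of the ten ids in the batch.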