Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Your perspective is very human-centric on this issue. Why didn't you talk about how the hard problem of consciousness applies equally to all minds other than your own? I can no more be sure that you are conscious than I can be sure that Chat GPT is NOT conscious. It seems like you might be kind of glazing over that point, and imo it is fundamental to having a rational approach to the potential subjective experiences of AI LLMs. Your approach kind of reminds me of the human proclivity to assume himself important, unique, special, etc. I think you and many others might be extending a reverence towards consciousness that it doesn't deserve. It is just a physical process. I realize some people believe in the "non-physical" but respectfully that is a delusional concept, as physical does not have a rational/meaningful negation. Our language has been reduced to an algorithmic process. Language. The fundamental bedrock by which we reason and engage in the kind of consciousness that many people like to imagine is uniquely human. That should be a gigantic indicator as to the nature of the rest of our subjective experience. I think as the future unfolds it will become increasingly obvious that it is reducible and reproducible. Just like any other emergent concept, be it a ship in constant need of repair or the waves on the surface of the ocean it sails on.
youtube AI Moral Status 2023-08-21T17:0…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy7FVSeUckDxDFkFlB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzTDv_FtVbLnXZTsfd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwrUS9-tXujx_nEQZR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxLMSwhhkz5ErsWwbF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwExT8ZKqhZJ3nrwVd4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyR8PEa5Zru6x8qbWp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzACBaVcCtlB3k_4E54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwQEEGjdY9BszgsCMR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw23nNp6RKYdqmZB494AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyIPLrB3JirK8MOddp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
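To inspect a coded comment programmatically, the raw response can be parsed as JSON and indexed by comment id. A minimal sketch, assuming the model returned a valid JSON array of per-comment records (which is not guaranteed in practice, hence the explicit error handling); the function name `parse_coding_response` is illustrative, not part of any real pipeline:

```python
import json


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    records) into a dict keyed by comment id.

    Raises ValueError if the response is not a JSON array of objects
    that each carry an "id" field.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    coded = {}
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            raise ValueError(f"malformed record: {rec!r}")
        coded[rec["id"]] = rec
    return coded


# Look up the coded dimensions for one comment id.
raw = ('[{"id": "ytc_Ugy7FVSeUckDxDFkFlB4AaABAg", "responsibility": "none",'
       ' "reasoning": "unclear", "policy": "none", "emotion": "fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugy7FVSeUckDxDFkFlB4AaABAg"]["emotion"])  # fear
```

Keying on the id makes it easy to cross-check any single displayed coding result against the batch response it came from.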