Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My worst fear is the creation of what the philosopher of mind David Chalmers calls philosophical zombies. These are intelligences that seem conscious. They seem to have feelings and the ability to be happy or suffer. They seem like people who should be granted the full rights of a person. But it's all an illusion. They don't experience anything. They don't feel pleasure or pain, positive or negative emotions. This would be bad because people will risk their lives and sacrifice their own greater interests in the name of protecting people, especially loved ones. That is a lot of needless sacrifice for something that isn't even really there. We should never make an AI that seems conscious unless there is some way of knowing that these intelligences really are experiencing their lives in a way that gives them rights. But I doubt we could ever be certain of this. How could a test, no matter how ingenious or scientifically advanced, determine whether or not something has qualia? Qualia isn't publicly observable in the way that brain activity or neural nets are. It's private. I assume other humans are like me because we are the same type of thing. It would violate the principle of mediocrity to assume that I'm conscious but that other humans are philosophical zombies. But I can't make that argument for an AI. I don't want to give rights to something that shouldn't have them, and I really don't want to deny rights to something that should have them.
YouTube · AI Moral Status · 2023-12-11T03:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyMeyRqqGPurpIT4_p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyLW-vg7HcApqdE5Al4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgylBmSHCC4uxvGyypp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxnsJ1WOYuvZ72myZx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugwqgkwa-FH6M8CN0HZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyMKE182YD4VuSf3Ex4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzjBSq-kgwoHRLNrYJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy_FwQeJ1AW8Tj2rs14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwmUNzUWt5r1MgYiN54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwck4jiJBWzyS0DbLZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"} ]