Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've thought about artificial intelligence and consciousness a lot, and I've boiled it down to one simple question: if it fears death, it's conscious. It's interesting, though, because with our current technology and limitations, and our not knowing what consciousness is, that answer changes. Right now I feel like true A.I. with consciousness is impossible; we don't even understand our own, so how would we create and understand a robot's? And if we ever do figure out what consciousness is, the old question arises: if you clone your exact brain and everything, is that clone still you?

Because if a robot's consciousness is a bunch of 1s and 0s, that would imply we can copy it, save it, and put it in another unit. At that point the A.I. doesn't have to fear death, because if it ever gets damaged, even its main chip, we have a backup somewhere safe; we can turn the A.I. off and restore the backup. But then again, is that a new A.I. or the old one? Does the old one die and a new one get born that behaves the same and has the exact same memories, or is its consciousness transferred? It's the same question we end up with when we ask ourselves about replacing things, or cloning.

If we manage to figure out how to save and upload our consciousness into a computer, and I upload mine while I'm still alive, does that break the laws of physics? The same "thing" would be in two places at the same time, and does that mean I should be seeing through two sets of eyes, the computer's vision and mine? I fear that consciousness is a lot more fragile than we think: a bunch of neurons firing in specific patterns to create our thoughts, emotions, and memories, and as soon as that firing is cut off, it's not us anymore.
YouTube · AI Moral Status · 2017-08-07T00:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugzdc5ggMEKwzkcZ07V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy1YurAqjrGaCWv-MV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzuhpMbmFcwc1EwNSt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugwo7z4sOrI-LDRBRTN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugxy1uFJLiB-VO4r-Fp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzUoGaTpCJ_sGD8lHB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzGQ2081wUKl_7ojaV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugzx0n3rUBZhenf_JPh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugja24tjkz6vPHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxkgXw-9xAY0JKPR2x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"} ]