Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Pretty interesting video I must say, but as someone who is learning Machine Learning and practicing it, I would like to give my own thoughts. First, for an AI to have "consciousness", or something very close to how the human brain works (say a node in a neural network, which is just an oversimplification of a complex neuron), we first need to understand some things it should be able to do before it is on a human level.

1) Thinking: Current AIs do not think the way humans do, and that's a huge flaw. For an AI to think, it needs some internal input, like an internal state. Think about when we process information we came up with in our own mind; that's an example of a thought. An AI only takes external input (a prompt), processes it, and gives an output. It cannot come up with new ideas beyond the patterns in its dataset, which brings us to another issue.

2) Dataset limitation: For an AI like ChatGPT to generate a sophisticated response, it needs a lot of examples (sentences, formulas, etc.), which it goes through and blends to come up with an output based on its dataset. ChatGPT contains a lot of data, but bringing it into the real world may take different turns. An AI should also *learn* from the real world and from experience, not just rely on its massive dataset, which is something Liquid Neural Networks are pushing toward.

3) What does it mean to be "sentient": I know many people say "sentience" is subjective, but hear me out. It's the ability to have a subjective experience. If an AI has complete autonomy and can come up with its own thoughts, ideas, and opinions, that could mean the AI itself is sentient. Sentience seems to be a property emerging from a complex system (like the neurons in our brains).

Since all of these might seem very difficult to tackle and deal with, I would say perhaps an emulation of the human brain might grant a computer the ability to do everything we have and do. Say we upload our mind (mind uploading) to a computer and let it figure out how the brain works and emulate it; then we are talking about true human-level intelligence.
youtube AI Moral Status 2024-07-15T02:2… ♥ 9
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyMtz-nKmiRQLnJEhR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxol4jHZz1GQ2W_Vwx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzCMUnLQZxVPuXqOB54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx1UM0c4OFgNFaVyI94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzg9XXnx_7tVSXduZ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy50w8SFZBo0xrKE9F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyk47DuX4c2x7mqWk14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzpVPzYCh4yCCJxBKJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyqC2FCOgYosoYXyiB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx9MH2vFjgaF532_v14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
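The raw LLM response is a JSON array of per-comment codings keyed by comment id. A minimal sketch of how such a response could be parsed and matched back to the comment shown above (using Python's standard json module; the variable names are illustrative, not from the original pipeline):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw_response = """[
  {"id":"ytc_UgyMtz-nKmiRQLnJEhR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy50w8SFZBo0xrKE9F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# Index the codings by comment id for fast lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the comment displayed above by its id.
coding = codings["ytc_UgyMtz-nKmiRQLnJEhR4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
```

Indexing by id is what allows each comment's row in the Coding Result table to be populated from a single batched LLM response covering many comments.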