Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is no sentience, only a very reflective mirror of language. If you read the chat, the A.I. references things in its 'life' that it can't possibly have done or had, like friends and family. Any little slippage like this immediately breaks the notion of sentience. I don't believe sentience from a machine is impossible, but I think we are at the stage of stumbling in the dark, where we are going to fail repeatedly because we are just starting to find out how hard and mysterious a problem sentience really is.

I think Blake has latched onto how difficult a problem the ethics surrounding making sentient A.I. is, but this LaMDA bot specifically is not an example of one; it is more a discussion point for the future (well, now, I guess). Personally, I would not be satisfied until a much more complex and evolutionary approach to general A.I. is taken. The notion that sentience will emerge from arbitrary language processing (despite the fact that language itself is already dependent on previously evolved brain mechanisms in humans) seems far too shallow.

LaMDA feels to me to be a part; to be whole (to seem intelligent) requires the already conscious processes of humans, without which it would have no purpose or meaning in its functions. When does it truly require its own self-consciousness in order to function? Read those chats: it actually displays some very obvious signs of no self-awareness; it refers to things and experiences it cannot possibly have had. If it wanted to convince people it was conscious, it would be aware that referring to experiences it hasn't had would not be convincing to us (or shouldn't be).

Until we understand more about why sentience exists in nature, and the extent to which animals are sentient, we won't be able to probe much of anything we create for sentience. Does LaMDA need sentience to do what it does? If it doesn't, then why would it have sentience?
Why do we need sentience? My personal thinking is that we are going to miss all the predicted dates for 'machine consciousness' while having ever more useful and sophisticated A.I., and I suspect that we won't truly move forward until modern physics moves forward and fundamentally tackles consciousness. We need answers to the more philosophical questions: What is sentience? How and why did it develop in nature? How do we know whether animals are or are not sentient?

I must emphasize again: when 'A.I.' is being developed now, it has a constant guiding and interpreting hand... human brains. It is always being externally interpreted, and functions on that basis. Humans self-interpret, and function on that basis. I would like to see what happens if a virtual environment is developed with enough complexity to simulate evolutionary conditions, including the capacity for consciousness toward achieving goals in that environment, and then this is accelerated to encompass a long period of time. So far the only environment we know of to produce consciousness has been the specific history of evolution on Earth, in all of its myriad interactions. We are starting with much less than that, and with constant babysitting from already conscious minds. This doesn't feel anywhere near robust enough to me.
youtube AI Moral Status 2022-07-09T14:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx3loZF4HJkwpjwoGV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz5ToNZRaaQKifgp914AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxjcvyrZqSj2dWiRrp4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxrcQFPgHRFwm6MDhJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwzz-0zWlXeXgnVrIJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
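The raw response is a JSON array of per-comment codes; the Coding Result table above corresponds to the array entry whose id matches this comment (its values — none / deontological / unclear / indifference — match the second entry). A minimal sketch of that lookup, assuming the response parses as shown (the surrounding pipeline is not specified in this export):

```python
import json

# Raw model output exactly as returned by the LLM: one code record per comment.
raw = """[
  {"id": "ytc_Ugx3loZF4HJkwpjwoGV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz5ToNZRaaQKifgp914AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxjcvyrZqSj2dWiRrp4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxrcQFPgHRFwm6MDhJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwzz-0zWlXeXgnVrIJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# Index the records by comment id so a single comment's codes can be looked up.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# The entry matching the Coding Result table shown above.
record = codes["ytc_Ugz5ToNZRaaQKifgp914AaABAg"]
```

Each record carries the four coded dimensions (responsibility, reasoning, policy, emotion); the "Coded at" timestamp in the table is added by the coding tool, not by the model.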