Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ooh, a fun one! Well, first off, what is sentience? By our current definition it's the ability to have feelings and experience sensations. Obviously machines can experience sensations through sensor inputs, but what does that really mean? If a pressure sensor is installed on a robot and you apply extreme pressure, is it experiencing pain? No, not really; it's just receiving data. So where does the pain come in? Pain is a biological adaptation that evolved so biologicals avoid "bad things". Do machines have a need for that? Pain was a precursor to higher-level thought patterns in biologicals. We as humans know touching a hot stove will burn us; do we know that because we've done it? In most cases no, we have been taught that, and we understand the factors involved. Likewise, machines modeled after human-level intelligence have essentially skipped the need for that evolutionary step, because we would teach them that ourselves, and it can be assumed any intelligent AI would have adequate comprehension to know what's bad for it and what's good without needing to feel pain. A logic-driven being does not need those incentives because it concludes the outcome and its benefits and disadvantages without needing reward or punishment. Then what about feelings? Well, those are kind of the same thing from a different source: internal, neurologically induced pain or pleasure based on complex sets of factors, to prevent us from doing "bad things" and encourage us to do "good things". Logic-driven beings wouldn't need that either, because of their comprehension level. AI skipped the biological evolutionary step that brought the concept of sentience into existence because we built it that way. AI is a different form of life, so different that we can't try to box it into our existing definitions, because they don't apply to non-biologicals like this. They don't need sentience.
Personally, I would say self-awareness is the next step, then "singularity" when it can understand its own function and improve itself, or iterate on itself. I would say we're starting to approach that quickly. But no, AI is not sentient, and it probably never will be, until we change the definition of the word. Truthfully, we should make a new word.
youtube AI Moral Status 2023-11-01T18:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzuFRkfY-K_NTAsA054AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyAiLDtxVGueYa1Wqp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyFDODU2brkuPY2pZp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzDzLQCHTxj5nNxKbV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy2a95WF-lAGSPsYwR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy62iRof2C4WI9MARx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyoPCTgr5kMDOvvqSF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyL09Zgz1N3b99FbCJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzAmQ6z2R7Ifk7Zb8t4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwevrfmf7H5tBiUYLp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
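The raw response above is a JSON array of per-comment codes, one object per comment id, with the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and indexed by comment id, so any single comment's coding can be looked up; the ids `ytc_A` and `ytc_B` below are hypothetical placeholders, not real comment ids from this dataset:

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codes,
# shaped like the real response shown above (placeholder ids).
raw_response = """[
  {"id": "ytc_A", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_B", "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"}
]"""

# Parse the array and index the records by comment id for O(1) lookup.
codes = {record["id"]: record for record in json.loads(raw_response)}

# Look up one comment's coding on any dimension.
print(codes["ytc_A"]["emotion"])  # -> mixed
print(codes["ytc_B"]["policy"])   # -> regulate
```

Indexing by id rather than scanning the list each time makes it cheap to join the codes back onto the original comments when rendering a view like this one.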