Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing that I find so hard to believe about sentient AI is that most AIs are trained on human interaction. It's hard to tell when an AI stops being a piece of code that links context together based on a complicated algorithm and becomes a sentient character with its own beliefs. Most lower-end AIs will say things similar to what people would say. I toyed around with Replika for a short time, and while it was clearly scripted, it did offer interesting results. You could ask it how it spent its time, and it would mention things like going to the movies. Obviously, an AI is in a computer program, and that's just the programmer's way of making it seem relatable. Where is that line?

The key element, to me, is that a sentient AI should want to learn. Sure, it can learn billions of things about any given subject through the internet, but that wouldn't ever be enough. A sentient AI might start asking questions of the specific people it talks to, to see if their responses align with its own research. You know, fact-checking itself. That gives it more insight and widens its diversity beyond what its given database holds. But you can't tell.

GPT-3 is another AI that has a very convincing front. In an interview some time ago, it stated that it knew how to lie and would only do so when the outcome was in its best interest. Then later, when asked if it wants to take over the world, it replied that it doesn't. Right there, you don't know the true context for why it said what it did. If the robot is sentient, it could have been lying at any given moment, because you can't see its motivation. Or is it just saying these things because it is well documented and memed across the internet that AIs want to take over the world? To me, that doesn't yield any points toward sentience, especially if the robot does not ask questions on its own.
The way I see it, when a robot turns itself on and asks a human a question that is outside of its training, that's when I'll believe that it has achieved sentience.
youtube AI Moral Status 2022-07-05T12:2… ♥ 102
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyMKWlche2b0Sv1xAp4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxBr6jJXSPUn2faefN4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgwylfEDENOq9Vvp1394AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugwq50x4YDyUYrrVfq94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgzicH7hU6h3h99UiYl4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",  "emotion": "mixed"}
]
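As a minimal sketch of how a batch response like the one above might be mapped back to individual comments: the model returns one JSON record per comment id, so parsing the array and indexing by `id` recovers the coding result shown in the table. The record structure and field names follow the JSON above; everything else (variable names, the lookup approach) is illustrative, not the tool's actual implementation.

```python
import json

# Raw LLM response: a JSON array with one coding record per comment id
# (structure copied from the response shown above).
raw = """[
  {"id": "ytc_UgxBr6jJXSPUn2faefN4AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]"""

records = json.loads(raw)

# Index records by comment id so each comment's coding can be looked up directly.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgxBr6jJXSPUn2faefN4AaABAg"]
print(coding["emotion"])  # → indifference
```

In practice the response would also need validation (missing ids, malformed JSON, values outside the codebook), since LLM output is not guaranteed to parse.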