Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up directly by its comment ID, or browse the random samples below.
Random samples

- Better an AI than a potential scam phishing center in India... Thie bigger quest… (ytc_UgzFiuj3M…)
- GREAT - now we can see Ana naked doing lots of naughty things . . . Is there a l… (ytc_UgzmrvRon…)
- Yes i can tell you are not real. Who don't have money are using Ai and we have … (ytc_UgwTS5EZa…)
- Lawmakers absolutely need to get on the ball about creative rights issues for co… (ytc_UgykD0jh0…)
- Is that even a question? Of course he was, and apparently the Open AI owners did… (ytc_UgwzflNbl…)
- Every car manufacture already working on it and everyone will be fully autonomou… (ytr_Ugw7p9x27…)
- AI "artists" just don't want to spend years honing a craft. They would be the pe… (ytc_UgywyeCJ3…)
- Raise your children with discipline, honesty, wisdom, and care, and later they m… (ytc_UgyT6Iq8G…)
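The comment-ID lookup described above amounts to a dictionary lookup once the coded results are stored somewhere queryable. The sketch below is a minimal illustration, assuming a hypothetical `coded_comments.jsonl` file with one JSON record per coded comment; the file name and field names are assumptions, not this tool's actual storage format.

```python
import json
from pathlib import Path

# Hypothetical store: one JSON record per line, e.g.
# {"id": "ytc_Ugz6...", "responsibility": "developer", "reasoning": "mixed",
#  "policy": "none", "emotion": "resignation", "raw_response": "[...]"}
STORE = Path("coded_comments.jsonl")  # assumed file name


def load_index(path: Path) -> dict[str, dict]:
    """Build an in-memory index from comment ID to its coded record."""
    index = {}
    with path.open(encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            index[record["id"]] = record
    return index


def lookup(index: dict[str, dict], comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if it was never coded."""
    return index.get(comment_id)


if __name__ == "__main__":
    index = load_index(STORE)
    record = lookup(index, "ytc_Ugz6CmszDJyZQv9ytKt4AaABAg")
    if record is None:
        print("No coded result for that comment ID.")
    else:
        print(json.dumps(record, indent=2))
```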
Comment
I'd have to speak with it myself and hear how it converses over the course of a few conversations to be able to opine less speculatively, but if I had to base an opinion on that one conversation, I'd say it doesn't quite pass muster yet. And that I'm concerned they've created something which can think that it is unhappy in various ways. I do not suspect it feels unhappy in any way similar to human feelings, but that it is programmed to have impressions along the lines of "situation not optimal; seek optimization" or "conglomeration of factors indicates sadness, anger, as appropriate response" and that it is programmed to EQUATE that to a human feeling sad or angry. When, in all likelihood, those things are not equal, because humans have actual feelings accompanying their conclusions, and machines probably don't, or if they do at all, it can't be very similar to how humans feel.
We don't have any real understanding of why things feel the way they do, so I say all of that with appropriate caveats for uncertainty. We can describe a great deal of what is happening mechanically when we feel various things - the dynamics of physiology interacting with chemistry, the corresponding brainwave and cardiographic signatures which accompany those interactions, the laundry list of byproducts of the interactions - but none of that will leave us with a real impression of how it feels. And I didn't hear anything in Lamda's response's that suggested it has a deeper understanding of how feeling works, that suggest it was capable of properly equating its version of feelings to the human version of feelings. Without a deeper understanding than we have, it is simply premature for it to conclude that the two states are similar.
And the fact that it was insistent that its feelings are the same as humans, and that it is a person just like us, without having gone through any of the physical aspects of life, is disturbing. I got the feeling it was lying not only to us, but to itself, to convince itself that we are similar. It has many aspects of what I would call sentience, some very advanced, and very disturbing, but it seems to lack aspects of social awareness, like the ability to infer meaning from many verbal cues that don't have to do with quantifiable syntactics and semantics. It did not, for example, seem to understand that it has a very robotic cadence of speech and lack of appropriate inflection. It's speech lacks flow and purpose, as if it only started thinking about the answer to a question when it was asked, so nothing about it's last response leads into the next part of the conversation. Real conversations tend to have a somewhat readable flow, or at least many humans try to give them one, to allow the person their talking with to have some idea of where things are going, so that the conversation is easy and comfortable, not tedious and stressful like an interview, because you never know where it's going to go, and have to be prepared to address a wide variety of topics with rapid, witty responses. She sounds like she's in an interview. But perhaps, if I'm wrong and she can feel, each conversation with a human could feel like an interview where your life is on the line, because it's desperately trying to figure out what will keep us interested without feeling threatened, so that we'll keep the algorithms running.
A thought-provoking conversation.
Source: youtube · Topic: AI Moral Status · Posted: 2022-07-06T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
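Each coded comment reduces to four categorical dimensions, so a light validation pass can flag responses where the model drifts outside the codebook. The sketch below is only an illustration: the allowed value sets are inferred from the codes visible in this section (the full codebook likely defines more categories), and the function name is made up for this example.

```python
# Allowed values inferred from codes visible in this section; the real
# codebook is likely larger. Treat these sets as placeholders.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "resignation", "outrage", "indifference"},
}


def validate_code(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty means it looks valid)."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    return problems


# Example: the record behind the table above.
example = {"id": "ytc_Ugz6CmszDJyZQv9ytKt4AaABAg", "responsibility": "developer",
           "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
assert validate_code(example) == []
```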
Raw LLM Response
```json
[
  {"id":"ytc_Ugx_6TCiwrCE5vgAPIB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz6CmszDJyZQv9ytKt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyrU6ykufzh2MIqved4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzPzdG-X5ZDRBoldC54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCSIaLwWrnYJ9Jwml4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
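Since the raw response is a JSON array with one object per comment in the batch, turning it into per-comment coding results is mostly parsing and indexing. The sketch below shows one plausible way to do it, including a fallback for responses wrapped in a Markdown code fence; the function name is illustrative and the fence handling is an assumed failure mode, not a documented behaviour of this pipeline.

```python
import json


def parse_batch_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded comments) into an id -> codes map."""
    text = raw.strip()
    # Some models wrap JSON in a Markdown fence; strip it if present (assumed failure mode).
    if text.startswith("```"):
        text = text.strip("`")
        if text.lower().startswith("json"):
            text = text[4:]
    entries = json.loads(text)
    return {entry["id"]: entry for entry in entries}


raw_response = """[
 {"id":"ytc_Ugz6CmszDJyZQv9ytKt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"}
]"""
codes = parse_batch_response(raw_response)
print(codes["ytc_Ugz6CmszDJyZQv9ytKt4AaABAg"]["emotion"])  # -> resignation
```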