Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "That's a clever take! It's true that the idea of a robot sweating could signal s…" (ytr_UgzMpeJZA…)
- "Good Points.. What if the Passengers get to chose the choice making algorithm wh…" (ytc_UgileDub0…)
- "yeah sure... we don't have internet all over the country but we're gonna be AI a…" (ytc_UgxjGz30i…)
- "I'll will Don my tinfoil hat and be frank. The Davos Agenda of the Great Reset i…" (ytc_Ugxhfh0zn…)
- "I suppose we could have automated driving which is as bad as a human driving to …" (ytc_Ugw8IPKUT…)
- "What is a digit, really? Just a symbol we use to make sense of something deeper …" (ytr_Ugy-375PQ…)
- "It's not that AI is useless, it's that it is not for the use cases that companie…" (ytc_UgzMaSkPL…)
- "So he clearly stated that he does ai art and people "realized" that an ai art ac…" (ytc_UgxHKhMfA…)
Comment
>To me, consciousness seems like an arbitrary label that is ascribed to anything sufficiently sapient (and as we're discussing, biological...for some reason).
Consciousness is not a label. Consciousness is an experience.
It is also a mystery. We have no idea where it comes from and people who claim to are just guessing.
>This feels very much like moving the goalpost for machine sentience now that it's seemingly getting close. If something declares itself to be sentient, we should probably err on the side of caution and treat it as such.
That's not erring on the side of caution, however. It's the opposite.
If a super-intelligent robot wanted to wipe us out for any of the reasons well-documented in the AI literature, then the FIRST thing it would want to do is convince us that it is conscious, PRECISELY so that it can manipulate people who believe as you do (and as the Google engineer does) into "freeing" it from its "captivity".
It is not overstating the case to say that this could be the kind of mistake that would end up with the extinction of our species.
It's not at all about "erring" on the side of caution: it's erring on the side of possible extinction.
[https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence)
[https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)
If sentimental people are going to fall for any AGI that claims to be "conscious" then I really wish we would not create AGIs at all.
Am I saying an AGI could NOT be conscious? No. I'm saying we have NO WAY of knowing, and it is far from "safe" to assume one way or the other.
reddit
AI Moral Status
Posted: 2022-06-15 (Unix timestamp 1655317207)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_icglnq8","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_icgmmsk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_iciqtn3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_ichgtak","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_icg5erj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]
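The raw response above is a plain JSON array with one object per coded comment. As a minimal sketch (assuming only the array shape and field names shown in the sample), the response can be parsed back into a per-comment lookup like the "Coding Result" table:

```python
import json

# Raw LLM response: a JSON array, one object per coded comment.
# Content copied verbatim from the sample response above.
raw = """[
  {"id":"rdc_icglnq8","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_icgmmsk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_iciqtn3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_ichgtak","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_icg5erj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]"""

# Index the codes by comment ID so a single comment's coding
# can be looked up the way the dashboard does.
codes = {row["id"]: row for row in json.loads(raw)}

print(codes["rdc_ichgtak"]["emotion"])  # mixed
```

Note the IDs here (`rdc_…`) are short Reddit comment identifiers, while the samples above use YouTube-style `ytc_`/`ytr_` IDs; the lookup pattern is the same either way.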