Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This CEO looks like a pedophile. Man created an AI. An artificial intelligence. …
ytc_Ugz1aigq7…
Am l concerned, yes. Do I think that they will put on the brakes, no. I am 65.…
ytc_UgwmaTKbs…
What a great scape goat for the elites in the depopulation agenda they can alwa…
ytc_UgxA0kH6B…
How can you say that something has feelings for instance ?
Robots run over progr…
ytc_UghEXl3Qj…
🥱 this looks like a beautifully crafted troll story and this artisan level troll…
ytc_UgyydPlod…
Also, a lot of people stop using it over time. My use of AI peaked several month…
ytr_UgyFS4M_W…
Earth is a prision run by the elites we the people need to take it back in every…
ytc_UgyeA5GWH…
@41-Haiku For some reason OpenAI is producing junk and desperately trying to m…
ytr_Ugzy2yuID…
Comment
I have a weird take on this, which is that it counts as being conscious when most people accept it as conscious. Right now we’ve gone through a huge leap from a world where nobody would consider a computer interface comparable to a conscious entity to a world where some appreciable percentage of people do believe AI is at a human consciousness level.
That number will slowly rise over time. Most likely experts and scientists will remain the last holdouts. It will probably be at least a couple of generations from now before AI being considered conscious gets critical mass, but it’ll happen eventually. For right now the interesting thing is the velocity of the shift, which tells you what a global social change we’re living through.
And I say this as somebody who has advanced degrees in biology and applied artificial intelligence.
Is there consensus on whether nonhuman primates are sentient? Can we agree that there are things AI does that are at a higher level than nonhuman primates? We can all agree that there’s a missing piece, but what is it? Is it self-determination? The ability to reproduce? Lots of good sci-fi asking these questions. If current AI isn’t sentient, what is the test we need it to pass? How would it be designed differently? What does a post-Turing-test framework look like?
I feel like the above questions are all more interesting than screaming into the void that current AI isn’t sentient.
reddit
AI Moral Status
1749795876.0
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mxiex1t","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_mxigm8e","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_mxik36g","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_mxipwux","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_mxj5ayx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
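The coding table above corresponds to one row of the raw batch response. A minimal sketch of how such a batch might be parsed and indexed by comment ID, assuming the value vocabularies visible in this log (the `DIMENSIONS` sets and the `parse_coding_batch` helper are hypothetical, not part of the actual pipeline):

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# values visible in this log, not a confirmed schema.
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"unclear", "mixed"},
    "policy": {"none"},
    "emotion": {"approval", "mixed", "indifference"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse a raw LLM coding response and index rows by comment ID,
    dropping rows with missing IDs or out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # skip rows the model emitted without an ID
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            coded[cid] = {dim: row[dim] for dim in DIMENSIONS}
    return coded

raw = '''[
  {"id":"rdc_mxipwux","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]'''
print(parse_coding_batch(raw)["rdc_mxipwux"]["responsibility"])  # distributed
```

Validating against a closed vocabulary before indexing catches the most common failure mode of batch coding, where the model emits a value outside the codebook.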