Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "My experience with dealing with AI call center was I canceled an order, and it g…" (ytc_UgzSThfUS…)
- "All the effort that went into this robot and yet they couldn't even be bothered …" (ytc_Ugza3KIlI…)
- "We humans are already cyborgs at the societal level. No, you say? Just take awa…" (ytc_UgywHys_4…)
- "men, you really scared me here. I was thinking of having my little LLM written w…" (ytc_UgwCsmyHk…)
- "If you are not celebs best is to not upload any of your photos to social media w…" (ytc_Ugzec8nmQ…)
- "I don't want OpenAI ethics team to tell me what I'm allowed to say. I can do the…" (rdc_jg81vbb)
- "I can't wait to wake up on the other side of the AI bubble collapse.…" (rdc_m81oeq7)
- "People who think ai is to help humanity are living a fools dream all it will do …" (ytc_Ugzfp3VMV…)
Comment
I think the question is wrongly asked. You're asking for where the sharp boundary lies between two fuzzy boundaries on a continuum. There is obviously no one single answer to "the Hard Problem of AI consciousness". We don't even have it solved for animals.
What we do have is obvious examples of conscious animals and obvious examples of non-conscious ones.
In other words, attempting to determine whether a particular AI is conscious is the exact right approach. It's what we already do when encountering this problem on the meat-based AIs we test currently, and it's the only question which we could even possibly get an answer to.
reddit · AI Moral Status · 1655306235.0 · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_icifnqo","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_icgltrw","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_ichinul","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_icgjann","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_icgpw4j","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
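The raw response above is a JSON array of coding records, one per comment ID, which makes the "look up by comment ID" view straightforward to reproduce. Below is a minimal sketch of that lookup, assuming the model output is valid JSON in exactly the shape shown; the variable names (`raw_response`, `codings`) are illustrative, not part of the tool.

```python
import json

# A trimmed copy of the raw model output shown above: a JSON array of
# per-comment coding records.
raw_response = """[
{"id":"rdc_icifnqo","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_icgltrw","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_ichinul","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# Index the records by comment ID so any single coding can be fetched
# directly, mirroring the "Look up by comment ID" feature.
codings = {record["id"]: record for record in json.loads(raw_response)}

print(codings["rdc_icifnqo"]["emotion"])  # outrage
print(codings["rdc_ichinul"]["reasoning"])  # consequentialist
```

In practice a parse step like this would also want to tolerate malformed model output (e.g. wrap `json.loads` in a `try`/`except` and log responses that fail to parse), since the page exists precisely to inspect what the model actually returned.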