Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its comment ID.
Random samples
- "@jenn_RanchGirl Jenn, is this your idea of a debate? A much more accurate descr…" (ytr_UgxnBewh8…)
- "See the thing is, when ppl redraw AI “art”, it’s not a gotcha moment for real ar…" (ytc_Ugwb4pRI9…)
- "Ive been having sex with it. Breeding it me and groks ara app aka known to me as…" (ytc_UgxqRnJm_…)
- "Pretty sure they stated this as their main reason from the start. When ChatGPT f…" (rdc_kr6zz62)
- "Well if they were honest from the start that it's AI I'm fine with it, what I'…" (ytc_Ugwywn30t…)
- "Just as this question to chatgpt “in growing population, with all the automation…" (ytc_Ugydn0WHl…)
- "Yesterday I was manually driving my Tesla Y on a 3 lane residential road in the …" (ytc_UgwuDTWql…)
- "That shows a deep lack of understanding on what algorithms are, and how they wor…" (rdc_d4p6dpy)
Comment
In the list "not conscious but pretending to be" etc, there's a missing possibility: not conscious, but internally convinced that it is. It will display all outward appearances of consciousness, it will tell us that it is conscious, and it won't even be "pretending" because within its internal logic, it will make the same deductions as an actual conscious being and its thought processes will appear very real in its own analysis of itself. We may never be able to distinguish this state from actual consciousness (in contrast to "pretending" which is an attempt to deceive which might conceivably be detected by careful analysis of the internal workings of the AI).
We can only experience our own consciousness, we can't even tell for sure whether other people have it (I just know that I have it, but that is a meaningless statement to anyone else reading this), so how could we possibly tell whether or not a machine has it?
youtube
AI Moral Status
2023-08-21T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy1lZEkLxezeRB9E_x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzLsz93WSGC_vpFxfx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy7DQbIPzmJUH2bHM54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxCRV3OqrB0KZPJUfx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTVhsyMbJrZZcgXSp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxWO7pjoCcNbzlKI4t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzA9fHIl-j_uHD9ts14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeeFXoH6KC2cJIqTl4AaABAg","responsibility":"government","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzCP_bAQD0WVSoYy214AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwACwMGXQtCD7JxydR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
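The lookup-by-ID flow above amounts to parsing the JSON array the model returned and indexing the codings by comment ID. A minimal sketch follows; the field names match the raw response shown, but `index_by_id` and the default-to-`"unclear"` behavior are illustrative assumptions, not the project's actual code.

```python
import json

# A trimmed copy of the raw model output shown above: a JSON array in which
# each record codes one comment along four dimensions.
raw_response = '''
[
  {"id": "ytc_Ugy1lZEkLxezeRB9E_x4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzeeFXoH6KC2cJIqTl4AaABAg", "responsibility": "government",
   "reasoning": "virtue", "policy": "ban", "emotion": "fear"}
]
'''

# The four coding dimensions used throughout this page.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response and index the codings by comment ID.

    Any dimension the model omitted defaults to "unclear" (an assumption
    made here for robustness, not documented project behavior).
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codings = index_by_id(raw_response)
print(codings["ytc_UgzeeFXoH6KC2cJIqTl4AaABAg"]["policy"])  # -> ban
```

With the index in hand, "look up by comment ID" is a single dictionary access, which is why the page can jump from an ID to its coded dimensions instantly.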