Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytr_UgzUiL7sZ…: "I'm not lying. I asked ChatGPT: do you keep all the conversations …"
- ytc_UgwdnO_jV…: "It's one thing to create AI art for the fun of it, but it's quite another to cal…"
- rdc_fvzhkh5: "ICE uses it. Many police departments use it. Hell, the NSA and GCHQ were hackin…"
- rdc_eu5vhtl: "Banning technology does not, and can never, work. ESPECIALLY when the tools nece…"
- ytc_UgxHDfB0z…: ""Some" cases, not all. This is not something you can take an absolutist stance o…"
- ytr_UgxPXejnZ…: "Yeah but in this case gemini ai refused to generate white people at all unless u…"
- ytc_UgwOaimo2…: "Imagine the economy as a game, if 1 to 2 players decide they will just hoarde al…"
- ytc_UgyKbfNww…: ""Tesla Autopilot Crashes into Motorcycle Riders - Why?" Because Elon despises t…"
Comment
I'm not sure it will ever be possible to prove that a machine is or isn't "conscious" in that I agree with the article that we don't even have a particularly strong consensus on what being conscious actually means. About the only actually workable definition of it is "awake, aware, and responding to stimuli (i.e. being conscious is the opposite of being unconscious)" but people want to use the word to mean something else, and nobody seems to really know what that something else even is.
I think as a result a far better standard for us to work around is general intelligence. An agent that can think and reason about roughly any task, make plans and act upon them, deserves our consideration as a person. I think we should be very careful about creating such a machine because we don't really know what the safety or moral implications of doing so are. We could be making a slave, a friend, a benefactor or our own annihilator.
Is Google's chatbot a general intelligence? Not as far as I've heard. It's a sophisticated engine for responding to queries, but it doesn't appear to have an internal model of reality that allows it to make plans and do things it wasn't programmed to do.
Source: reddit
Topic: AI Moral Status
Posted: 2022-06-15 (Unix timestamp 1655294125)
♥ 22
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_icg0n7o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_icfwvfn","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"rdc_icg0goj","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"rdc_icg04dc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"rdc_icg19wh","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"})