Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This interviewer is looking for a place to interject without understanding what …" (ytc_UgxkaTkbI…)
- "If they sweat they are people if they dont get sweat then they are robot…" (ytc_UgybqUwaQ…)
- "SUE the companies who are STEALING with their Ai. This is illegal THEFT by Ai …" (ytc_UgyPLefU9…)
- "If this part of the algorithm is integrated with AGI the development of AI will …" (ytc_Ugwt9Gihd…)
- "Unfortunately I just used AI to generate the code of my ML project. It wrote a l…" (ytc_UgwXCmyCv…)
- "My biggest problem with AI Art (and this extends to wrting as well) is that even…" (ytc_Ugz402DiC…)
- "Literally in the article you posted it says they're selling off a few investment…" (rdc_nono1ib)
- "I'm not sure that's a bad thing considering the quality of the average driver. T…" (rdc_fcs1wrn)
Comment
I’m sorry I didn’t read your full post.
Intelligence and consciousness are two different things.
Conscious things have subjective experiences.
Intelligent things can solve certain types of problems and use information in certain ways.
There is no rule that says a conscious being must be intelligent. There is no rule that an intelligent being must be conscious.
Appearing to be conscious is also not proof of consciousness.
Humans have interpretive abilities that are instinctual. If we cannot understand or describe something’s behavior as mechanical or biological, our brain converts to social interpretation, which is meant to be used on other humans.
The lay person, and experts even, don’t really understand how LLMs work on a mechanical level. Admittedly I don’t. So my brain, along with everyone else’s, interprets their behavior as human-like and puts it into the framework that we have established for other humans.
Before humanity understood weather, we interpreted it as the result of a conscious process. We imbued it with desires, emotions, and other conscious qualities. Now, even though we cannot predict it, we understand it as a fairly mechanical process, and that understanding, in mapping to the behavior better, supersedes the older one.
I don’t know if that will happen with AI, and I cannot deny my own knee-jerk reaction to think of an LLM as a “thinking” thing. But at the end of the day, the LLMs we have now are:
- intelligent (able to solve problems using information)
- non-conscious (do not have experiences)
- non-thinking (do not have an internal subjective monologue that is attached to their intelligence)
Source: reddit · Post: AI Moral Status · Timestamp: 1750989127.0 · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mzy7ii3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_mzzqgze","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_n00wnmq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_mzxqwlu","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_mzxx2ne","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
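The "look up by comment ID" workflow above can be sketched in a few lines: parse the raw batch response and index the records by `id`. This is a minimal sketch assuming the response is a JSON array of records shaped like the one shown; the variable names (`raw_response`, `codings_by_id`) are illustrative, not part of any real tool.

```python
import json

# Hypothetical raw batch response, shaped like the records shown above.
raw_response = """
[
  {"id": "rdc_mzy7ii3", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzxqwlu", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]
"""

# Index the coded records by comment ID for constant-time lookup.
codings_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up one coded comment by its ID.
coding = codings_by_id["rdc_mzxqwlu"]
print(coding["responsibility"], coding["emotion"])  # company fear
```

Keying on `id` also makes it easy to spot records the model dropped or duplicated: compare the set of returned IDs against the set of IDs sent in the batch.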