Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
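As a sketch of what the ID lookup might do under the hood (the store layout, field names, and IDs below are illustrative assumptions, not the tool's actual code or data):

```python
# Hypothetical sketch of the "look up by comment ID" feature.
# The store and the example IDs are assumptions for illustration.
coded_comments = {
    "ytc_example0001": {"source": "youtube", "emotion": "indifference"},
    "ytc_example0002": {"source": "youtube", "emotion": "approval"},
    "rdc_example0003": {"source": "reddit", "emotion": "outrage"},
}

def lookup(comment_id):
    """Return the record for an exact ID, or a unique truncated-prefix match."""
    if comment_id in coded_comments:
        return coded_comments[comment_id]
    matches = [rec for cid, rec in coded_comments.items()
               if cid.startswith(comment_id)]
    # Truncated IDs (like the elided ones shown in the sample list)
    # resolve only when the prefix is unambiguous.
    return matches[0] if len(matches) == 1 else None

print(lookup("rdc_example0003"))  # → {'source': 'reddit', 'emotion': 'outrage'}
```

A truncated prefix such as `rdc_` resolves here because only one stored ID starts with it; an ambiguous prefix like `ytc_` returns `None` rather than guessing.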
Random samples — click to inspect

- “Tesla’s Full Self-Driving (FSD) Supervised is not yet available in Romania as of…” (ytr_Ugw8oU5uC…)
- “So was it AI reporting coming from NBC News over the 50 years? Just like AI it i…” (ytc_UgwkTx-W8…)
- “I wish we could just give a rough sketch and have the AI complete it instead of …” (ytc_UgzjFdurl…)
- “For those who think ai can NEVER replace programmers. Just try chatgpt. Yes it c…” (ytc_UgzxD8oCb…)
- “You did the example of small jobs disappearing because of new inventions. The di…” (ytc_UgyPXYRPO…)
- “it's quite easy to tell that it's AI ig. Not because she looks way too perfect b…” (ytc_UgyUw4iSH…)
- “U use ai to think you are smart we actually learn and we are actually smart we a…” (ytr_UgwOFhXWe…)
- “This may blow some peoples minds but some of us enjoy driving why the hell would…” (ytc_Ugy8njEFu…)
Comment
*Hypothetically* your assertion of there being a paradox is sound, but..
>a human being is, by near-universal consensus, a full agent
**1.** This is false; not all humans are full agents, and the claim is nowhere close to true by any practical stretch. "Near-universal consensus" is extremely rhetorical, and given that this write-up is in some part the product of an LLM, you as a user/author are falling victim to the temptation offered by its assertions on this point alone. While it's useful to take advantage of LLMs to make/revise/re-version our philosophical arguments and assertions, this is a glaring faux pas, out of touch with human society, culture, or "consensus" ***of any kind***. Although it does impart a touch of 'humanity', it is working purely on pathos, and not on broader reasoning, e.g. legal precedent, which I'll touch on later.
>I further argue that this prior question cannot yet be properly posed because of what I call the Semiotic Problem.
**2.** You and the LLM are failing to properly pose what/where the Semiotic Problem is exactly. You list some examples, but taken together they are largely disparate, erratic, and spurious, if not entirely dubious. For example, 'we', of whatever qualifications, are more interested in what distinguishes an entity from any other piece of computer code than in what distinguishes it from a robot. Robots in general are less subject to scrutiny than code is, even if asserting that robots are plausible entities is easier than asserting that terminal script, or code compiled into machine language, is. To confuse this notion is to commit *ignoratio elenchi* with respect to John von Neumann's work. One does not necessarily need to read his work to proceed with some philosophy of modern electronics, but it may be a pre-requisite where conceptual difficulties are easily recognized and commensurate intellectual material cannot be ascertained and understood. That is, "robots", as the semiotics would properly extend itself, do not necessarily have the abilit
Source: reddit · Topic: AI Moral Status · Timestamp: 1775204894.0 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_oe4apgm", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe1c25i", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe7mbdf", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe7rqc3", "responsibility": "unclear", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_oe1ivlw", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
```
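A minimal sketch of how such a raw batch response might be parsed and validated before it lands in the coding table. The allowed value sets below are inferred only from the values visible in responses on this page, not from an authoritative codebook, and the function name is an assumption:

```python
import json

# Allowed values inferred from the coded responses shown above;
# the real codebook may define more categories per dimension.
ALLOWED = {
    "responsibility": {"unclear"},
    "reasoning": {"deontological", "mixed"},
    "policy": {"unclear", "industry_self"},
    "emotion": {"indifference", "outrage", "approval"},
}

def parse_codings(raw):
    """Parse a raw LLM batch response, keeping only records whose
    dimension values all fall in the allowed sets."""
    records = json.loads(raw)
    return [rec for rec in records
            if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())]

raw = ('[{"id":"example_1","responsibility":"unclear",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
print(parse_codings(raw))  # one valid record survives
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch; a production version would presumably log the rejects for re-coding.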