Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwIMHwm2… · "The only thing i feel so weird about all of this is how little each side know ab…"
- ytc_Ugz6gj4VU… · "Stable Diffusion: i can use it ot improve or even make art. this AI Deepfake Po…"
- ytc_Ugx_4tGPU… · "This is very interesting, but not surprising. Our biology and so our history and…"
- ytc_UgyogC2rD… · "ChatGPT can’t be conscious, it’s just guessing what next word should be, and it’…"
- ytc_Ugy-ez90V… · "Whether you like it not, and even if still imperfect, those autonomous cars are …"
- rdc_jmfv58o · "Can you highlight some examples of human AI symbiosis? It seems like something w…"
- ytr_UgxjAI4V_… · "@llothar68 You totally misunderstood what I wrote. You see roles, but I see new …"
- ytc_UgyNlWyQh… · "Having kids we talked by AI is the dumbest move possible here. First of all it’s…"
Comment
> Computation can't be reduced to simple equations.
The mathematician Gödel held views similar to this; I think you will find his views, and the subsequent critiques of them by others, interesting.
Specifically, Gödel conceded (as pretty much everyone did after Turing) that all "computation" *is* equivalent, governed by the same 'formula'. Furthermore, Gödel agreed that since the brain is a physical machine, it can't have any computational powers beyond those of the idealized computer (Turing equivalence). Next, he believed, as you seem to, that the problems posed by his own Incompleteness Theorem (or, similarly, the Halting Problem) are not solvable algorithmically but *are* somehow solvable by humans. His resolution was that humans have *immaterial minds* which can solve incompleteness/halting-type problems, above and beyond our brains, which cannot. In other words, physicalism/materialism is not true.
That last point seems to conflict with your position. If you accept physicalism, though, the other steps of this reasoning leave you in a bind. (In my personal view, and a common one, your error is the claim that *humans can solve the HP*; we can't. We're just computers, but of sufficient complexity and capability that free will is present and meaningful.)
Source: reddit · Topic: AI Responsibility · Posted: 2016-07-02 (Unix 1467491967) · ♥ 3
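The pivotal claim in this exchange is that the Halting Problem is not solvable algorithmically, which follows from Turing's diagonal argument. A minimal Python sketch of that argument; `halts` here is a hypothetical oracle, not real code, since the whole point is that no total implementation of it can exist:

```python
def halts(program, arg) -> bool:
    """Claimed decider: True iff program(arg) eventually halts."""
    raise NotImplementedError("no such total decider can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for program(program).
    if halts(program, program):
        while True:  # oracle said "halts" -> loop forever
            pass
    return           # oracle said "loops" -> halt immediately

# diagonal(diagonal) contradicts any answer halts(diagonal, diagonal)
# could give: if the oracle says it halts, it loops forever; if the
# oracle says it loops, it halts. So no program can implement halts().
```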
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_myqrs2k","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_d4wy13m","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_fsy71nl","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},{"id":"rdc_fsxryf1","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_jg4cqzv","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"})