Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
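The lookup described above can be sketched as a plain dictionary keyed by comment ID. This is a minimal illustration, not the tool's actual implementation; the index structure, field names, and the short example IDs below are all assumptions (real IDs in this tool are longer and prefixed, e.g. `ytc_`, `ytr_`, `rdc_`):

```python
# Hypothetical in-memory index mapping comment IDs to stored records.
# IDs and fields are illustrative only.
index = {
    "ytc_example1": {"source": "youtube", "text": "first sampled comment"},
    "rdc_example2": {"source": "reddit", "text": "second sampled comment"},
}

def lookup(comment_id):
    """Return the record for an exact comment ID, or None if absent."""
    return index.get(comment_id)

print(lookup("rdc_example2")["source"])  # -> reddit
```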
Random samples
Stupid electric cars and robot put Tesla out business robots are going two take …
ytc_UgwGbmdnZ…
i don't understand what the threat is? it's not like ai is a species that exists…
ytc_UgzT_6lUv…
I've said this before and I'll say it again. AI is antithetical to the human e…
ytc_UgwkGGJrU…
@minhaddock9593 Using AI as an artist? You're a bad apple yourself. What's to ma…
ytr_UgyFpEukK…
I don’t know if anyone will see this but I agree with ChatGPT. A hotdog is its o…
ytc_Ugxl5xUrw…
And this is their greatest weakness. You can create robot minds, but innovation …
ytr_Ugz42kIJw…
Long answer: The real issue, like always, is power addiction. The Pareto Princip…
ytr_Ugww2vECR…
It’s all about profits and control. They know the dangers of AI but they don’t c…
ytc_UgxWr4jiu…
Comment
Is it just me, or is the OP just having difficulty understanding what is meant by "hard problem"?
Here's how it is possible to know both how it is possible to know whether an AI is conscious, and whether it is: it isn't. Neither question can ever be solved, and also no AI can ever be conscious.
This declaration will be dismissed with derision, I'm sure. I cannot know, supposedly, that an advanced information program running on a computer can't ever be conscious, I am simply proclaiming it as if I was the font of all knowledge, the neopostmodernist would say, because that is how neopostmodernists have been trained to respond. The assumption (far less valid than the idea that only white swans are called swans) in neopostmodernism is that our brains are computers, that their function and effect is to calculate; there may be different physical/mathematical mechanisms involved, but no matter what neurochemical activity correlates (I don't say "causes", though physicalism is far more undeniable than either consciousness or swans color) with conscious thought can be accurately modeled with the mathematical process of a neural net. This assumption gets more and more tangled as engineers and researchers construct computer systems based on the "neural network" model inspired by organic neurology, with each iteration further cementing the "proof" that our brains are essentially and effectively computers, which process sense data algorithmically just as a robot we design would do.
So the question morphs from "if you had only seen a single swan and it was white, how strong would your inference that *all* swans are white be?" into "if *all* swans ever seen were white, how accurate would an inference that the next swan you see will be white be?" But the answer remains the same: the problem of induction is a *hard problem*, just as consciousness is a hard problem. A trillion white swans cannot prove that all swans are white; not even an infinite number can, logically. So we are lef…
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Moral Status |
| Posted | 1655296493 (Unix epoch) |
| Score | -1 |
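The raw timestamp shown above (1655296493) is a Unix epoch value in seconds. A quick Python conversion to a readable UTC date:

```python
from datetime import datetime, timezone

# Convert the post's Unix epoch timestamp (seconds) to ISO-8601 UTC.
posted = datetime.fromtimestamp(1655296493, tz=timezone.utc)
print(posted.isoformat())  # -> 2022-06-15T12:34:53+00:00
```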
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_icip8od","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
{"id":"rdc_icg4ou2","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"rdc_ich04w6","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_icgw4jo","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_icgs2jv","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"skepticism"}
]
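The raw response is a JSON array of per-comment codes. A minimal sketch of parsing and validating such a response before storing it, assuming Python; the function name is illustrative, and the four dimension names are taken from the Coding Result table above, with "unclear" as the fallback value the tool already uses:

```python
import json

# Coding dimensions as shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_llm_response(raw):
    """Parse a raw LLM coding response into validated records.

    Each record must carry an 'id'; missing coding dimensions
    default to 'unclear' rather than failing the whole batch.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    validated = []
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        validated.append(
            {"id": rec["id"], **{d: rec.get(d, "unclear") for d in DIMENSIONS}}
        )
    return validated

raw = '[{"id":"rdc_icip8od","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"}]'
print(parse_llm_response(raw)[0]["emotion"])  # -> disapproval
```

Defaulting absent dimensions to "unclear" keeps one malformed record from discarding an otherwise usable batch, which matters when the model occasionally drops a field.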