Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "As much as I want people who commit crimes to be brought to justice, I think fac…" (ytc_UgygCNqc8…)
- "The angry robot said : oh. Come on i work for 2 years in this factory!!…" (ytc_UgwjtcGBl…)
- "I definitely think it wasn't an AI problem, with the pandemic people were drinki…" (ytc_Ugw0aivAl…)
- "When someone compares AI to electricity I know they're not exactly right in the…" (ytc_UgzrdURc7…)
- "AI will never become self aware. It will only act upon the instructions fed to i…" (ytc_Ugxo-shC5…)
- "I'd love to see an interview with Ed Zitron to provide a reality check counterpo…" (ytc_UgxEpBQG7…)
- "There is something else all of us can do to put a stop to this horrific industry…" (ytc_UgxHAvivG…)
- "But couldn't it be that the AI was programmed to say random humorous answers lik…" (ytc_UgzY9O6Z9…)
Comment
They're artificial pseudointellectuals. It's wrong to say they're not intelligent in any sense, but their intelligence is very limited and mechanical: the same kind of intelligence a calculator possesses, at immense scale. They are capable of consciousness, but not in the way people might think. Physical hardware faults can cause spots of consciousness, but it should be ineffectual, not consciousness like ours. The reason is that hardware is designed to excite deterministic, specific, known physical processes, confining them to be exact. When there are physical faults, other arbitrary physical properties are induced. This should be far too fleeting, or occur in a tiny random spot without integrated data, to have much meaning.
There are two possible methods of producing consciousness. There are evolutionary physical systems you can create, which do it the way nature or the universe did it with humans. There is also the potential for certain physical devices, such as analogue or quantum ones, to produce it (almost the same thing). It's not a perfect art, but it should be possible; we know it's possible because it happens in the brain. If this is not so, then we really need to go all the way back to the drawing board, since then you probably have to invoke magical spirits or something. There's no reason to believe it's reasonably possible with standard computing hardware and code alone. I do not believe it possible with just a standard algorithm; there's a hardware component. You have to mess with physical matter and its properties in weird ways, sometimes contrary to the direction computing normally takes.
Some of these experiments, perhaps arguably all of them, have serious ethical issues. Grow human neurons and put them on a slab with contacts. Feed it information and train it, or have it learn. Detect possible conscious abilities. See what happens when you grow the cells with parts knocked out, optimise them, etc. We did some experiments with this in the military, but prior to genetic engineering. It's very hard, if not impossible, to produce a test for consciousness. We still tried our best. One thing that worked really well and was convincing was to attach a mechanical eye: for some of the specimens, the eye would follow you around. The problem with this kind of test is that we might simply be deceiving ourselves by making the experiment look like it's conscious. That is the problem: we have no real test, it's just whether it quacks like a duck. There is some improvement on this front. You can test it on computational tasks against a classical computer and see if it solves things it shouldn't be able to, especially if it does so too quickly.
It is correct that these AI do not know what they're saying. However, neither do most humans.
youtube · AI Moral Status · 2025-08-05T11:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx9nSQil-JlRRwxC5R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzDkSUsFTafGIBe7-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTv1q1BdhuaCTBXqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgycwKcGR_ofAj9dIvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQLnSBL3A_A9dlQCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyrSO8WiNzn4n4BXdh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx43VAwV9TG88Jrj8p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgztA6VrgVbDK-kGBAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyrgR4QJJHkexMdKIV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhQQKkIXpdiHWOYN54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
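The "look up by comment ID" workflow above amounts to parsing the raw batch response and selecting the row whose `id` matches. A minimal Python sketch, assuming the response is a JSON array of objects with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys shown; the `lookup_coding` helper and the two embedded sample rows are illustrative, not part of the tool:

```python
import json

# Two rows copied from the raw LLM response above, embedded here
# as a self-contained sample (the real tool reads the full array).
RAW_RESPONSE = """
[
  {"id": "ytc_Ugx43VAwV9TG88Jrj8p4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyrSO8WiNzn4n4BXdh4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding row for comment_id, or None if absent."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_UgyrSO8WiNzn4n4BXdh4AaABAg")
print(coding["policy"])  # → regulate
```

In practice a tool like this would also validate that every dimension value falls in its allowed code set before rendering a result table.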