Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They're artificial pseudointellectuals. It's wrong to say they're not intelligent in every sense, but their intelligence is very limited and mechanical. It's the same kind of intelligence a calculator is possessed of, but at immense scale. They are capable of consciousness, but not in the way people might think. Physical hardware faults can cause spots of consciousness, but these should be ineffectual, not consciousness like ours. The reason for this is that standard hardware is designed to excite deterministic, specific, known physical processes, confining them to be exact. When there are physical faults, other arbitrary physical properties are induced. These should be far too fleeting, or confined to a tiny random spot without integrated data, to have much meaning.

There are two possible methods of producing consciousness. There are evolutionary physical systems you can create, which do it the way nature, or the universe, did it with humans. There is also the potential for certain physical devices, such as analogue or quantum ones, to produce it (almost the same thing). It's not a perfect art, but it should be possible. We know it's possible, since it happens in the brain. If this is not so, then we really need to go all the way back to the drawing board, since then you probably have to invoke some magical spirits or something. There's no reason to believe it's reasonably possible with standard computing hardware and code alone. I do not believe it is possible with just a standard algorithm; there's a hardware component. You have to mess with physical matter and its properties in weird ways, sometimes contrary to the direction normally taken in computing.

Some of these experiments, perhaps arguably all, have serious ethical issues. Grow human neurons and put them on a slab with contacts. Feed it information and train it, or have it learn. Detect possible conscious abilities. See what happens when you grow the cells with parts knocked out, optimise them, etc. We did some experiments with this in the military, but prior to genetic engineering. It's very hard, if not impossible, to produce a test for consciousness. We still tried our best. One thing that worked really well and was convincing was to attach a mechanical eye to it. For some of the specimens the eye would follow you around. The problem we had with this kind of test is that we might simply be deceiving ourselves by making the experiment look like it's conscious. That is the problem, though: we have no real test, it's just whether it quacks like a duck.

There is some improvement on this front. You can test it with computational tasks against a classical computer and see if it solves things it shouldn't be able to, especially if it does so too quickly. It is correct that these AI do not know what they're saying. However, neither do most humans.
Source: youtube · AI Moral Status · 2025-08-05T11:4… · ♥ 1
Coding Result
Dimension        Value
----------------------------------------------
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
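
A record like this maps naturally onto a small typed structure. Below is a minimal Python sketch, assuming only the four dimensions and the timestamp shown in the table; the class name CodingResult and the string-typed fields are illustrative assumptions, not the tool's actual schema.

    from dataclasses import dataclass

    @dataclass
    class CodingResult:
        responsibility: str  # e.g. "ai_itself", "company", "none"
        reasoning: str       # e.g. "deontological", "consequentialist", "unclear"
        policy: str          # e.g. "regulate", "none", "unclear"
        emotion: str         # e.g. "mixed", "fear", "approval"
        coded_at: str        # ISO 8601 timestamp

    # The coding shown in the table above, as one record.
    result = CodingResult(
        responsibility="ai_itself",
        reasoning="deontological",
        policy="unclear",
        emotion="mixed",
        coded_at="2026-04-26T23:09:12.988011",
    )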
Raw LLM Response
[ {"id":"ytc_Ugx9nSQil-JlRRwxC5R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzDkSUsFTafGIBe7-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxTv1q1BdhuaCTBXqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgycwKcGR_ofAj9dIvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwQLnSBL3A_A9dlQCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyrSO8WiNzn4n4BXdh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx43VAwV9TG88Jrj8p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgztA6VrgVbDK-kGBAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgyrgR4QJJHkexMdKIV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzhQQKkIXpdiHWOYN54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"} ]