Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The samples you gave shows how one replicated the other. It just cropped the ori…" (`ytc_Ugzykh0Ap…`)
- "@r.l.1469 face recognition shouldn't even be used to begin with. As for regulati…" (`ytr_Ugw3BR8kD…`)
- "Bruh i thought ai generated weird looking images with very high saturated colors…" (`ytc_UgxuvPKqs…`)
- "One of the most enjoyable and scary books I've read is Robopocalypse. A great re…" (`ytc_Ugy7gffrC…`)
- "Regarding the people that want to use AI rather than learn, this is nothing new …" (`ytc_Ugz-RY5NV…`)
- "im confused the channel digital engine has tons of similar videos it it ai or no…" (`ytc_Ugz2RngZp…`)
- "@christopheollier7242 AI creators "scrape" art from the internet and feed it into…" (`ytr_UgxZzUSfL…`)
- "Do you know what should be done to this guy and everyone involved? They are des…" (`ytc_UgyheoT58…`)
Comment
Honest question: What are some arguments that lead people to believe theres a chance of AI becoming conscious? I seriously dont get it. My counter arguments:
1) The underlying physical structures are completely different (brain vs. data center, neurons vs. transistors) and we dont have good reason suggesting that the same type of conscious experience would evolve out of an entirely different structure. (Even if there might be some similarities further up in the superstructures, like with neural networks).
2. Just because the semantic content of what the AI is "thinking" is natural human language, it is still a Completely different thing from having acutal thoughts and experiences. It is like writing "sorrow" on a piece of paper and then asking yourself, if that piece of paper is experiencing sorrow. (Tell me wheres the difference?)
It just sounds like textbook anthropomorphism to me. We see some complex black box structure and we go "maybe its conscious?" without having any reason hinting at that. Just like when we put god(s) into this world. Everywhere our understanding comes to an end, we project our own condition into it.
3. Moral argument: We will never truely be able to tell if AI is conscious, even if it were. Just like i cannot be fully sure any other human but me is actually consious. If however, we start to treat AI as consious, we would have to ballance in their well being against the well being of other humans, potentially causing harm (or simply realizing less good) for humans in the process. However, we have much better reason to assume other humans may be conscious, because apart from exhibiting the same behaviour, they are also built on the same biological hardware. So my case is this: We should not treat AI as if it were conscious, because then their "interest" would have to be weighted against the interests of humans in moral considerations - and that might mean doing harm to humans (things we have much better reason to believe are actually conscious) for the sake of somethings wellbeing, which might actually not be conscious at all.
Can anybody give me a better reason then just "well it is pretty complicated inside those black boxes - so maybe their conscious"?
Source: youtube · Video: AI Moral Status · 2025-10-30T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyABG2BqQo_bQ0RTeF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwKIBkTIjwF5QgSOR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzp0VQ5QCWvMSJH6-h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwXt8u0LAlcm6JcuIJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2mNarWuP2T8jCTfJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCJBabiQ3Iz1EJtSp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwP2sI4oMWXokqcHV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzfXKjmHwOdcVoYIAd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxNQQH7JScRsLDbMUp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnUXuXIdWgn0uB8bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
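The raw response above is a JSON array of one record per comment, each with the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated is below; `parse_codings` is a hypothetical helper (not part of this tool), and the allowed value sets are assumptions inferred only from the values visible in this dump, not a complete schema.

```python
import json

# Assumed value sets per dimension, inferred from the values visible in this
# dump; the tool's real schema may allow more codes.
ALLOWED = {
    "responsibility": {"none", "distributed", "user", "developer", "company", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "virtue", "unclear", "deontological"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of records) into a dict keyed by
    comment ID, rejecting any record with a value outside the assumed schema."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# Two records copied verbatim from the raw response above.
raw = '''[
  {"id":"ytc_UgyABG2BqQo_bQ0RTeF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwnUXuXIdWgn0uB8bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''
codings = parse_codings(raw)
print(codings["ytc_UgyABG2BqQo_bQ0RTeF4AaABAg"]["emotion"])  # indifference
```

Keying the records by comment ID also supports the "Look up by comment ID" flow above: `codings["ytc_…"]` returns that comment's coded dimensions directly.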