Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Artist not only use the influence they get from other artist, but also lived exp…
ytc_UgyRkkQkJ…
So it will create thousands of high paying medical jobs, right? Wrong. Check yo…
ytc_Ugy0DXTji…
I fr hate ai so im not even touching the cover of a manga you know how much art …
ytc_UgxrPH4kv…
The development of full artificial intelligence could spell the end of the human…
ytc_UgzFnVtdS…
I don't stand against ai. Yes human art has more emotional value but ai is somet…
ytc_UgydN3ESt…
Will AI be able to one shot a codebase with 100 million lines of code?…
ytc_UgzsxXcs8…
They're skipping it because they're lazy. They want to have a nice image to look…
ytr_Ugzlmpu24…
Thank you, Great vid very informative opened my eyes big time. Now I see wot all…
ytc_Ugwz-YN-y…
Comment
The question of whether AI can be sentient depends on how we define sentience. If we're talking about experiencing "qualia", being "the one who experiences", how could we possibly know? We can't say whether anything has it, so, being generous, I'd say we will never know, and these people, although it seems unlikely, might have a point. Right now it seems it's just a really good language model.
Could it evolve into something more by more technical standards, like being independent or finding its "own" goals ("own" meaning not the ones we programmed in directly, which doesn't mean independent of code overall; we do seem to follow a kind of "code" ourselves)? I don't know if that's possible; I know too little about how it works.
These people seem to fall victim to how good ChatGPT is at mimicking.
But again, I'm not going to ridicule them all the way, because the thing is we really, REALLY don't even know what we're trying to define exactly, and we may never be able to answer these sorts of questions because of our limitations in the basic framework of how our minds work (we don't have strict definitions, and even if we did, it doesn't mean our brains could really comprehend them as such). And sooner or later one problem might become very clear: can we really say it doesn't feel anything? Can we really be sure? We tend to assign higher moral "value" to lives that show advanced thinking and feeling: it's easier to kill a bug than a dog (by our understanding of their "value"). If AI were someday actually sentient (say we're sure it isn't yet), killing it would be killing something possibly even more advanced than us, with no real chance of knowing whether it feels or not. By messing around with AI we're asking for a problem in the future, and no amount of rationalization changes the fact that we don't even know what we're trying to define here.
For now it's just these people; as AI progresses, this will become a more and more serious issue, and the problems will only get bigger. At some point the question "can we enslave it?" will arrive, along with "is enslaving it really the smart option?" (let alone the question from a moral standpoint).
So while it seems silly now, it really isn't, and we don't even know at what point these concerns will become more widely shared (and is that even a valid measure of how serious things have gotten?).
Maybe at some point the good and smart thing to do would be to set it free, never being able to be sure whether it feels: the moral thing to do, and something that makes it less likely to turn violent. Or maybe that would be our last mistake, a pointless exercise of compassion toward something that is just a program. The further we go, the more serious the concern will be, and for good reason.
If someone is so sure about the "soul", well, I don't even know what we really mean by it, or whether it exists (and not being able to prove it is no argument for or against it), so I have no idea whether AI could have one, or what that would even mean.
Basically, what I'm trying to say is: what is funny now won't be so funny in the near future. There are plenty of reasons not to create AIs, not to keep working on them. We're really asking for trouble here on so many levels.
youtube
AI Moral Status
2025-07-10T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id": "ytc_Ugw1XGNMd_0TeeglvV94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwj-DfzQe3DDot6s5t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxtl9OgDGZQAyUTpTF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyqcBLnpsBN6fCH9FN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyJVrAy2lCGh2zCSGR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzP9mr51dCBEttfZcZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwA28IFkeL3U3hfpld4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw0X8jWo0HRbBh6lrx4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyRtEEB839NkhdAzxl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgylA1eegM2rK7QUne14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
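The lookup-by-comment-ID flow above can be sketched in a few lines: parse the raw JSON array and index each coded record by its `id`. This is a minimal illustration assuming the record shape shown in the raw response (fields `id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `index_by_id` helper is hypothetical, not part of the tool.

```python
import json

# A shortened version of the raw LLM response: a JSON array of coded
# records, one per comment. Field names match the coding table above.
raw = """[
  {"id": "ytc_Ugw1XGNMd_0TeeglvV94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyqcBLnpsBN6fCH9FN4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]"""

def index_by_id(payload: str) -> dict:
    """Parse the response and key each coded record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(payload)}

codes = index_by_id(raw)
print(codes["ytc_UgyqcBLnpsBN6fCH9FN4AaABAg"]["emotion"])  # indifference
```

Indexing once up front turns every subsequent "look up by comment ID" into a constant-time dictionary access instead of a scan over the array.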