Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "to points: 1. In my field of coding, ai coders call themselves vibe coders, so I…" (ytc_Ugz5DYvwA…)
- "The greatest of the individual can ever have with increased automation is for re…" (ytc_UgwllILMo…)
- "I truly love the fact that they explain this in a way that people who use AI can…" (ytc_Ugyg6ISG4…)
- "But a couple more questions. Serious ones. 1) If AI replaced all jobs, how do c…" (ytc_UgyKB6csE…)
- "Everytime you interview someone about AI my anxiety goes through the roof. Can y…" (ytc_UgwJbTdtD…)
- "Although editing is an art style, I believe you are supposed to show off and cre…" (ytr_UgzahA4pX…)
- "1 scroll after this short and im given an ad for an ai art software…" (ytc_UgzjaqUQC…)
- "Fr- if it’s there, might as well be used. I’m very very against ai- but I will a…" (ytc_UgxP1QiAT…)
Comment
A really fun and interesting video, although I remain unconvinced, since the chatbot never actually overstepped the boundaries of its own code. The fact that you can kind of drive it into a corner with clever sophistics is just another proof of how complex and controversial this thing called human is, our whole way of thinking and experiencing. 'Experiencing' being the operative word :)
In my line of work, I get to use Chat GPT for writing texts. And I can say FOR SURE there is no trace of consciousness there :) I’ll try to explain. Yes, it does know the dictionary definitions of words that it puts together to convey some kind of sense. But it doesn’t seem to grasp the true MEANING of THINGS denoted by these words. It never had anything to do with the actual THINGS, only the words pointing at them. It handles language on a molecular level, operating individual semantic bits and fusing them together based purely on their physical properties, so to say. But in doing so, it fails to give these properties any phenomenological relevance, since it doesn’t actually have experience of its own, doesn’t know what it’s like to experience something, even having a full dictionary definition of it, and thus can't create anything that’s truly human.
We, humans, derive meaning from experience. Our consciousness is built exactly like that and can’t ever bypass these 'factory settings.' Hence come all the emotions, desires, willful actions - self-awareness and human self as it is. Because if you don’t experience, you don’t exist, there is no one in there to interact with the world outside and thus no way to really 'process' this outside world. So no matter how well Chat GPT knows, say, how much an average apple weighs, it doesn’t know the first thing about how weight actually feels, how the weight of that apple feels when you’re holding it in your hand. It doesn't know what weight really IS, for us, and that's where any of its effort to implement the notion of weight into a meaningful, like truly meaningful text stumbles and crumbles.
And that’s just one example, it doesn’t really know anything about anything. So whenever it tries to write a text, this is either some kind of a tech manual (which is fine for some purposes) or a completely void, blank, artificial thing, like one of Madame Tussauds’ wax sculptures. No matter how skillfully the wax is shaped to imitate human flesh, there’s no life in it. You know like, when you’re reading a book, you kind of stop being yourself and don the skin of the heroes or the mindset of the narrator, and hence the whole aesthetic experience, the whole pleasure and personality-expanding effect? With Chat GPT, it never works, because THERE IS NO ONE IN THERE, no skin to try and feel as your own, no mindset to assume and see how it works for you. Just a very good (and sometimes not even so:) imitation, but a dead thing if you poke it with that phenomenological stick. So no, no consciousness so far, and I frankly doubt there ever will be :)
youtube
AI Moral Status
2024-07-26T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgzS1qMP90XW9hY4yU14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwAMvDIFkabXeRFfSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSX_G1ls5FmIY_1Y94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugytm74TlVWH9--34Yh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwy6D_M9nPkz2AJUqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5A-57QIgx3pgx6114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZtmAI3xOVeDCDZrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzpixWTtCNr_jzH4GZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZw4k-0V13mXPslat4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7UywXWDKfmk74Is94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]
```
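The coded dimensions shown in the table above are extracted from a raw batch response of this shape: a JSON array with one object per comment, keyed by comment ID. A minimal sketch of the lookup, assuming only the structure visible above; the `coding_for` helper and the two-row `raw` string are illustrative, not part of the tool:

```python
import json

# Illustrative raw model output: a JSON array of per-comment codings,
# matching the shape of the response shown above (two rows kept for brevity).
raw = '''[{"id":"ytc_UgzS1qMP90XW9hY4yU14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzpixWTtCNr_jzH4GZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the coding row for one comment ID."""
    rows = json.loads(raw_response)
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

row = coding_for(raw, "ytc_UgzS1qMP90XW9hY4yU14AaABAg")
print(row["reasoning"], row["emotion"])  # → mixed approval
```

Indexing by `id` rather than scanning the list keeps lookups constant-time when inspecting many comments from one batch.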