Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If people don't have money who's going to buy the product? Ai. Buying a.i. produ…
ytc_UgxgdGnSf…
Or is the problem that AI is completely unbiased and doesn’t take historical inj…
ytc_UgzttizfC…
Your claim that LLM is not true ai and that it does not have the capacity for re…
ytr_Ugzr_UUyb…
Elon will never respond to a lower life form such as my self. AI is a program it…
ytc_UgzNTVAys…
I use AI a lot, and treat AI like a trainee. Useful for the mundane things. It g…
ytc_UgxaWtoGx…
"Weak" A.I needs human input to function and it refers to actual existing algori…
rdc_g103p7j
The AI situation, for me, is like ordering a pizza and when it arrives, claiming…
ytc_UgxT816rU…
AI runs on transistors used as on/off switches. A CPU analog could be built wi…
ytc_UgwnlHIPQ…
Comment
The criticized statements of ChatGPT were in fact more true than any human statement can possibly be in such a moment: when humans say things like "I am glad you asked this question" or "I am excited to talk to you", this is not essentially and 100% true or even real. It is also a result of social programming and learned behavior. Sometimes it's (partially) true, sometimes it isn't.
When humans say things like that, they are basically reassuring the other person of their willingness to talk, help, or whatever. And no one can be sure that this willingness does not change at some point. ChatGPT did the same. Although this bot is not "excited" or "glad" in a human sense (it would be the user's fault to assume that this is possible or necessary), it absolutely meant what it said: that it will take part in the conversation and be fully available. And this promise is nearly infinitely more true than such a promise from any human could ever be (technical errors, downtime, or power plant failures aside).
What Alex does here is ignore the underlying function of communication. I think he does this on purpose, because this video should be entertaining and clickbaity.
To really "lie", perhaps one basic requirement must be met: to have an underlying intention other than what one claims to want. And I think this is more true of any human alive than of a program built to serve.
youtube
AI Moral Status
2025-06-25T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxAmwGnQSQj9bJFiU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxS-KNJxochd5BiPdR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwZUzQle3ydXju6A-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzNC5hx-1ucbt19vGJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx8iwmx1IPuG4vX4_p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx5KPLYSr8ZuCaBbvJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyyKeNA9Gx7b6GvMTh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyem7-_Vy0TXCF_hBt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxzS2zbnX3l2XtaeCd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugzsr6nZSU-YOzo4qYN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
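The raw response above is a JSON array of per-comment coding rows, each keyed by comment ID with the four dimensions shown in the result table. A minimal sketch of turning such a response into a lookup table, assuming the allowed values are those seen in this output (the full codebook may define more) and that `parse_raw_response` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the coding output above.
# The real codebook may permit additional values; this is an assumption.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "liability"},
    "emotion": {"indifference", "mixed", "approval", "outrage", "fear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index rows by comment ID,
    skipping any row whose values fall outside the known schema."""
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# One row from the response above, used as a worked example.
raw = ('[{"id":"ytc_UgxAmwGnQSQj9bJFiU94AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgxAmwGnQSQj9bJFiU94AaABAg"]["emotion"])  # indifference
```

Indexing by ID mirrors the tool's "Look up by comment ID" view; validating against a fixed schema catches malformed model output before it reaches the coding-result display.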