Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
If we take the "What Is It Like to Be a Bat?" definition of consciousness, then the best we can do is ask ourselves, "What does it feel like to be ChatGPT?" What does it experience?
The language models receive some text as input, and give back text as output. Does it "hear" the text? Does it "see" the text in a black screen and then think about it in words? When "thinking" of the output, does it hear it again? How does time pass for this AI? Is it continuous even when it's not working, or is it only when it receives an input?
As far as I can tell, there is no computer program that can make a computer conscious, because all that a computer can do is work with bytes. At the end of the day, it's always just a processor shuffling bytes in memory. It doesn't matter how complex your software is, the hardware is still incapable of consciousness.
Even if you had a computer that perfectly mimics what a human would do (with cameras for eyes, and microphones instead of ears), we could still affirm that it isn't conscious, because the data that it's processing is still discrete. This is more or less what John Searle was saying in the paper that coined the Chinese Room argument, that you need more than behavior if you want to assert that something is conscious.
youtube
AI Moral Status
2023-08-24T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyZMUEYPkEMBOu1PLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxsexWjYwaKvBWX0KF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugztb5F7S8mkBmMTctV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy1zofO29LOVjSZHXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwia89r2Fd87LS9fLN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy9B68jrdzL4K8Q0T14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwUVoPfJVEd972_w4N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyiOaMGEhKnq7rQAeZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwVWJQA1dQ6vm3UeuF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxwta0vfDgqfdc3xbx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
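
A response like the array above can be machine-checked before the codes are accepted. Below is a minimal sketch in Python that parses the JSON and validates each entry's four dimensions against allowed value sets. The `ALLOWED` sets are an assumption inferred only from the values visible in this response; the project's actual codebook may define more categories, and the function name `validate` is illustrative, not part of the pipeline shown here.

```python
import json

# Excerpt of the raw LLM response array shown above.
raw = '''[
 {"id":"ytc_UgyZMUEYPkEMBOu1PLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugxwta0vfDgqfdc3xbx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Allowed values per coding dimension. NOTE: inferred from this single
# response; the real codebook likely includes additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"unclear", "mixed", "consequentialist"},
    "policy": {"unclear", "none"},
    "emotion": {"indifference", "approval", "fear", "mixed"},
}

def validate(rows):
    """Split coded rows into (valid, errors).

    valid:  dict keyed by comment id -> dict of the four dimensions
    errors: list of (id, [offending dimensions]) for rows whose values
            fall outside the ALLOWED sets
    """
    valid, errors = {}, []
    for row in rows:
        bad = [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]
        if bad:
            errors.append((row.get("id"), bad))
        else:
            valid[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return valid, errors

coded, errors = validate(json.loads(raw))
```

Keying the validated rows by comment ID also gives the "look up by comment ID" behavior directly: `coded["ytc_UgyZMUEYPkEMBOu1PLh4AaABAg"]` returns that comment's four coded dimensions.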