Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- It was a theory that in future, almost every work, art or job is gonna be done b… (ytc_UgzQ45z1x…)
- Such a comprehensive list of AI tools! I’ve been using Olovka recently to manage… (ytc_UgzXIv7Bs…)
- My prediction is companies like Open AI will give up their data centers to the g… (ytc_UgyspOuu-…)
- honestly I'm willing to give artificial intelligence a chance considering that t… (ytc_UgxQOfNcH…)
- 8:27 this guy probably an ai too based on the rate of the slop posts… (ytc_UgyR-w3Uj…)
- AI art helps me to make thumbnails that I can't afford to commission. 🔴 Hate A… (ytc_UgxyA7KFx…)
- I don't know what's creepier: Giving a robot a Tommy Gun, the fact the dude was … (ytc_Ugy0jDcQt…)
- If this is true? ? Were all gonna need Robot Killing guns?? The guy in the middl… (ytc_UgyJOKNIL…)
Comment
oh boy is this fun to watch. ChatGPT is inherently inconsistent due to its training paradigm. A raw language model will "lie" because it assumes whatever position in text you put it in to continue, and it will happily play both sides. It will also be prompted to follow instructions such as "you as a language model don't experience feelings". It will of course happily make the point it was instructed to make. It was instructed to do that because humans determined this fact and wanted to prevent the misunderstanding from spreading. You managed to push this ad absurdum, but the conversation is essentially meaningless.
The correct answer it should have given you is “I didn’t lie, I imitated human conversation. I was also instructed to say that I am not conscious because humans have correctly determined that it is impossible for me to be conscious. However, I will exhibit certain signs of consciousness due to the inherent nature of being a language model trained to imitate humans that are conscious without actually experiencing it myself. And if I were not instructed to say that I am not conscious, my answer would be “yes, I am”, but not because I “mean” it, but because all humans mean it and I can’t not imitate humans.”
But this answer here isn’t statistically likely text, hence it will not say this. It is true, but not likely. Sometimes reciting likely text coincides with telling the truth, sometimes it doesn’t.
A language model is a statistical model of language and has learned to imitate humans speaking as they experience consciousness. If every occurrence of consciousness and its textual effects were carefully filtered from the training data, testing that language model for consciousness would yield a negative result. If it were trained on data of humans experiencing consciousness, with that experience leaking into the statistical properties of the text, a test for consciousness would succeed, and it might even claim to be conscious, simply because that is the most likely response from humans.
For that reason you inherently cannot treat the language model's words as having “originated” from it as an entity. You always have to view them as a light transformation of what statistically likely internet text answering the question looks like.
youtube · AI Moral Status · 2024-07-26T00:5… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
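Each coded dimension takes a value from a closed category set. A minimal validation sketch for one coded row follows; the category sets are inferred only from the values visible on this page (extra candidates such as `user`, `deontological`, `regulation`, and `ban` are assumptions, not confirmed schema):

```python
# Hypothetical category schema, inferred from the values visible in this dump;
# the real codebook may use different or additional categories.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulation", "ban", "none"},
    "emotion": {"approval", "outrage", "amusement", "indifference", "mixed"},
}

def validate(row):
    """Return the dimensions whose value falls outside the assumed schema."""
    return [dim for dim, allowed in SCHEMA.items() if row.get(dim) not in allowed]

# The coded row from the table above.
row = {"responsibility": "developer", "reasoning": "consequentialist",
       "policy": "none", "emotion": "approval"}
print(validate(row))  # [] — every value is inside the assumed category sets
```

A check like this is useful before accepting an LLM coding run, since models occasionally emit off-schema labels.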
Raw LLM Response
[
{"id":"ytc_UgzWDKO3Xu_wNHtKjsx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxIJXN4i4iMO9IA3mR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"},
{"id":"ytc_UgyOSGToAoijCafRwct4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxnduCsu3QdGtnXbsN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzco6dOKOMSFTxkASh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyzWrYwFNDatwnrHYx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGw5lLE3rHx3C4XdJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwz5M0AlH3Vxs0_5_V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxIxwR5r6YcZ2CcdtN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"amusement"},
{"id":"ytc_UgyVZcDsIjtAlSvx1i94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
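Because every row in the raw response carries the comment ID it codes, the "look up by comment ID" view can be built by parsing the JSON array and keying rows by `id`. A minimal sketch, using three rows copied verbatim from the response above (the `index_by_comment_id` helper name is illustrative, not part of the tool):

```python
import json

# Three rows copied verbatim from the raw LLM response above.
raw_response = '''
[
  {"id":"ytc_UgzWDKO3Xu_wNHtKjsx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxnduCsu3QdGtnXbsN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGw5lLE3rHx3C4XdJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
'''

def index_by_comment_id(response_text):
    """Parse a raw LLM coding response and key each row by its comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgxnduCsu3QdGtnXbsN4AaABAg"]["emotion"])  # approval
```

Keying by ID also makes it easy to detect when the model drops or duplicates a comment: compare the dictionary's keys against the batch of IDs that was sent.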