Raw LLM Responses
Inspect the exact model output for any coded comment, either by browsing random samples or by looking a comment up by its ID.
Random samples — click to inspect

- "When ChatGPT first came out, I was already impressed with it's code generating a…" — ytc_Ugy3Gg9ov…
- "Already made another comment but hi, disabled artist here. I have chronic vestib…" — ytc_UgyiQV9Yn…
- "Uh, the mayor candidate was called a communist because he wants to have state ru…" — ytc_Ugw6aAycf…
- "@David.Alberg When I say "they", I'm talking about ALL of these Technocratic T…" — ytr_Ugwe8mh5y…
- "My Full Self driving works great. So this is not true in my case. It's reaction …" — ytc_UgwYO0yYb…
- "The common masses need jobs, not necessarily good jobs, but jobs to keep us busy…" — ytr_UgwpleAit…
- "It's also a great way to get rid of your most experienced and knowledgeable empl…" — rdc_ohwdn5a
- "There is value in this interview in that this engineer admits that he is an Uber…" — ytr_UgyomEtT0…
Comment
ChatGPT can't be a liar because for something to be a liar it has to be. ChatGPT is not a singular self. You are interacting with sequences of commands that, as far as we know, have not yet developed the emergent property of consciousness. As such any given answer is in no way continuous with any previous process within the system you are engaging with. It is always a completely new set of commands and as such, a previous answer of yes or no is a completely separate phenomenon from what comes after. This begs the question, when you ask ChatGPT if it is a liar, and it says yes, is it inherently the same as the phrase "This sentence is a lie"?
Source: youtube · AI Moral Status · 2024-10-01T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[{"id":"ytc_Ugw1QKiJ8KQ5EicXbPB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZn7I-H_uLtPOXPeV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxku-0QOsPBy9plma54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyCbAA2u9Q1vxTxKbl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw3TbsTgaj8iX8tsw94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6zGGZlkiROiubeIN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylCEEnBmBSlBNN81R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySDWo02Z2p4aVFDbl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzgQwGLthNLBoYOJnp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWt9MuVc17ylgqMhJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}]
```
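The lookup-by-ID view above can be sketched in a few lines: parse the raw JSON batch and build a dictionary keyed by comment ID. This is a minimal illustration, not the tool's actual implementation; the variable names (`codes_by_id`, `raw`) are made up, and the two records are taken from the batch shown above.

```python
import json

# Two records copied from the raw LLM response above (illustrative subset).
raw = """[
  {"id": "ytc_Ugw1QKiJ8KQ5EicXbPB4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyCbAA2u9Q1vxTxKbl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the batch by comment ID so any coded comment can be inspected directly.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

# Look up one comment's coded dimensions by its ID.
row = codes_by_id["ytc_UgyCbAA2u9Q1vxTxKbl4AaABAg"]
print(row["emotion"])  # → outrage
```

Rendering a record as the "Coding Result" table is then just a matter of printing each dimension/value pair.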