Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The problem is that the folks in the AI industry are fighting for funding, clout…
ytc_UgwmQ6GVT…
It will always be trickier to tell ads apart because AI writes like corporate HR…
ytc_UgxH_Et3s…
I have a Tesla and I absolutely love it but I went into it knowing that we are j…
ytc_Ugzua0nvI…
AI coders produce bugs for sure. I produce 5x more bugs and take 100x more time …
ytc_Ugzlkng19…
First of all there is no need for immigration! We can't take care of what we al…
ytc_Ugx9G6cXR…
@Insertcoolusernam If they would be mocking it, they would exaggerate the mistak…
ytr_UgxOe1ClF…
Agi and ai both and all other things which comes through ai are not able do a di…
ytc_UgyIX9EYJ…
You’re wrong SMR. It’s very clear Musk has focused more on autonomous driving AI…
ytc_Ugx63Ew_a…
Comment
Hi, I think you probably know that your approach is based on a misunderstanding of how the technology works, and your interrogations won't produce meaningful output. If you ask ChatGPT to confirm that it "lied," and then use its affirmative responses as evidence of dishonesty, you are forgetting that you're talking to a machine. These responses reflect the AI's attempt to align with the conversational expectations that you define; anything else you read into them is a fantasy.
In summary:
- You insist that the AI understands what lying means.
- You frame its outputs as lies using anthropomorphic assumptions.
- You conclude that the AI is therefore dishonest and unreliable.
But you ignore that ChatGPT doesn’t evaluate its own outputs in terms of truth or falsehood. It generates responses based on patterns and probabilities, not intent.
I hope this is useful.
youtube
AI Moral Status
2024-12-27T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzxNGnv3SRAdSeOx0t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwU_xzFdeG9VZaGzp54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwNNcLMrMO0o4CVjZB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_Ugyi40wdMstCsslgBLR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwRUa0h6b-q9yV1CMB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyoT8UIF4sXHUXarRB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz7tG64P6U-j8P2m5h4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxlo4_v1-Nbs8Rn1yR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzP1AvN8PgOCMsyeBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxkiOjdeslh-9NmFL94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
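The raw response is a JSON array with one record per comment, coded along the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated before use — assuming the four-dimension schema and category values visible in the examples here (the full codebook may define more categories):

```python
import json
from collections import Counter

# Allowed values per coding dimension. These sets are inferred from the
# sample records above; they are an assumption, not the official codebook.
SCHEMA = {
    "responsibility": {"none", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "industry_self", "liability", "unclear"},
    "emotion": {"approval", "outrage", "curiosity", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only records whose values
    fall inside the assumed schema."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Hypothetical two-record response for illustration.
raw = '''[
  {"id": "ytc_example1", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "approval"},
  {"id": "ytc_example2", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability",
   "emotion": "outrage"}
]'''

codes = parse_codes(raw)
# Tally one dimension across the validated records.
emotions = Counter(rec["emotion"] for rec in codes)
print(emotions)  # Counter({'approval': 1, 'outrage': 1})
```

Dropping malformed records rather than raising keeps a batch run alive when the model occasionally emits an out-of-vocabulary label; the discarded IDs could instead be logged for re-coding.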