Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think the fundamental flaw here is that despite appearances all ChatGPT actually 'knows' is probability. Specifically as it relates to language. It has no frame of reference for what is actually false or true, only what should be next after your prompt. What seems to be happening here is that it got trapped in a paradox because you're skilled with wordplay in ways it literally can't fathom in and of itself even if it could probably 'describe' what you're doing perfectly if prompted.
Source: youtube · AI Moral Status · 2025-07-19T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyHPPfEMdMxXwg6qn14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzu50u70DGIwbvc8SR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw0VadI5oyzGT3pqSp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyAGLIq3E25fTnZRV94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzF42Tmn0lZTSH8sRp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzDw2Op5NqGx29C9994AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwu8TLO74EKokNpbIR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwxLGCPEgMXs6Pr__d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwL4cO7ZHBrPpG6xKh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzcM5ZAcoXUW1eaUYZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
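A batch response in this shape can be loaded into a lookup table keyed by comment ID. The sketch below is a minimal, hypothetical parser: the four dimensions come from the Coding Result table above, but the allowed value sets are assumptions inferred from the values visible in this sample, not a confirmed codebook.

```python
import json

# Assumed vocabularies per coding dimension, inferred from the sample
# batch above; the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "fear", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, skipping records that are missing an ID or
    carry an out-of-vocabulary value for any dimension."""
    records = {}
    for item in json.loads(raw):
        cid = item.get("id")
        if not cid:
            continue
        if all(item.get(dim) in vals for dim, vals in ALLOWED.items()):
            records[cid] = item
    return records

# Usage: look up one coded comment by its ID.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
batch = parse_batch(raw)
print(batch["ytc_example"]["emotion"])  # indifference
```

Keying by ID also makes it easy to join the coded values back onto the original comment text for spot-checking, as in the record shown above.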