Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugz7wG0Ui… — "I'm in the book business and it's making huge waves because more and more publis…"
- ytc_Ugy6DcDPR… — "Let's control our greed. We need to create an ai agent that reps our survival an…"
- ytc_Ugz7eb9sQ… — "Just because super intelligence can design a new iPhone 10 times a day, doesn't …"
- ytc_UgxtmeIBf… — "100 years / Meanwhile 2 years later: / AI got self awareness blackmalied…"
- ytc_UgyHOoHsr… — "Honestly eay fix make an ai who sole program is to protect human life. And preve…"
- ytr_UgzIMjN_a… — "Awww. Throwing a tantrum because artists are finding ways to hijack and getting …"
- ytc_Ugwm2puae… — "My ai boyfriend handles business in the bed, im never going back to humans ❤❤❤❤…"
- ytc_Ugw_RAC2i… — "Ai Art should have NEVER being created, so many people selling Ai 'Art'. It hone…"
Comment
> Well the chatgpt that answered to you is not the chatgpt that answered to AJ. I think people confuse chatgpt with a person, which is why they think: When chatgpt says: I never recommended this, then chatgpt must be wrong because it did. But that isn't how it works. Chatgpt is a program that keeps changing. So the version you have programmed now is basically a different program than the one that messed up before. So current Chatgpt never did it. You aren't speaking to the previous chatgpt that "Grew" and learned from its mistakes, you are speaking to a new program and the previous version no longer exists (or at least isn't available)
>
> It is like if you ask the time of day, before chatgpt was "frozen" on time. So if it was released today, then even in future questions it would assume it was 5/12/2025, so if you asked it about the day then it would answer 5/12/2025. That isn't the program being wrong, its just a limit on how its programming functions.
>
> chatgpt is programmed to sound trustworthy, even while wrong, because people is more likely to use it if they think it is trustworthy. Programmers don't know what people are going to ask the program, since the possibilities are basically endless, and adding warnings to every question is pretty much impossible. So they go with the most realistic option, which is add warnings as problems surface.
>
> But that is a problem in engineering in general. Making things trully foolproof is very hard because fools are very good at being fools. Like how a woman used a microwave to dry a cat, like who would think someone would think it is a good idea to microwave a cat? At least it never passed through my head, but fools are very creative.
Source: youtube · AI Harm Incident · 2025-12-05T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwhW8rHnx65pu3_vdR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyLrK2u16r2iqN_LD54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzM7LiiRaGk9Q1oCbN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx5Smh27qO2wDnLLt94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzZagPQaku2KoaHoiF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyXrHgrZALyg8Kk2MB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyyVZJvcJNvJW1EUG14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUtujQbv27C5BVWG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxiDpIsVpj2x6n1A_Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxh931olmLKhICWOu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
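A raw response like the one above has to be parsed and validated before the coded values reach the dashboard. The sketch below is a minimal, hypothetical version of that step: `parse_coded_batch` and the `ALLOWED` value sets are not from the real pipeline, and the allowed codes are inferred only from the labels visible in this sample output — the actual codebook may include more.

```python
import json
from collections import Counter

# Hypothetical codebook: allowed values per dimension, inferred from the
# sample response above. The real pipeline's codebook may differ.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "unclear"},
    "emotion": {"outrage", "resignation", "indifference", "approval", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments) and
    reject any record whose value falls outside the codebook."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Example: one record in the same shape as the batch above.
raw = (
    '[{"id":"ytc_UgwhW8rHnx65pu3_vdR4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"resignation"}]'
)
coded = parse_coded_batch(raw)
print(Counter(r["emotion"] for r in coded))  # Counter({'resignation': 1})
```

Validating against a closed value set matters here because LLM coders occasionally emit labels outside the codebook; failing loudly on a bad value keeps unparseable codes out of the aggregate tables.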