Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If AI makes a single nano bot capable to replicating itself from carbon, it woul…" (ytc_UgxuhdHd1…)
- "My ChatGPT said 1984 books I asked it if ChatGPT was part of a nefarious and in…" (ytc_Ugzx6ouUW…)
- "GIRL this is so good u should be proud. And human art always beats AI art ✨…" (ytc_Ugzt0pRWi…)
- "Later when u order food and that wall-e robot arrives 🤖 if you dont take your fo…" (ytc_UgzcpttzN…)
- "There will not be an AI apocalypse. They will never give an LLM access to all th…" (ytc_Ugw07i_uL…)
- "What people need to understand, is the goal of creating AI was ALWAYS the destru…" (ytc_UgxiIST82…)
- "So here's the deal: When the government buys a gun, they can use the gun in any…" (ytc_UgxtwyH1l…)
- "She is a thing a robot not human she doesn’t have feelings no soul or spirit. An…" (ytc_UgxWEUvx4…)
Comment
The behavior you described (becoming worse the longer the session is) sounds like a error in your workflow.. It's no secret that LLMs become really bad the bigger the context is. PRD and a proper Project Management Agent plus small chat sessions maybe had completely changed your outcome. It's nice to be as specific as you were but without proper project outline everything fails even with human developers. I'd love to see you repeating this test with a proper workflow setup.
youtube
AI Jobs
2026-01-20T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyVBQiI6PGErWycEIx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugxl-IAad43NABb6Vst4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgycY6fsJsL3BXdg0CJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxSFrKmf_iR6uYhoeN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzifOfq6W-l1Rlimld4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwm6lfIZCjoKGOyzQJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx2PzQhMSBJEp51_M54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx9C2d6AecMIZqfyMB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwF3QJE_l7MHb6Hkyl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwqJeEKWXUq-6XtoiN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
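The raw response above is a JSON array with one object per comment, carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed for per-comment lookup, assuming the model output parses cleanly as JSON (the two sample rows below are copied from the response above; the variable names are illustrative, not from the source):

```python
import json

# Two rows taken verbatim from the raw LLM response above.
raw_response = """[
{"id":"ytc_UgyVBQiI6PGErWycEIx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugxl-IAad43NABb6Vst4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]"""

# Parse the batch and index it by comment ID so one comment's codes
# can be retrieved without rescanning the whole array.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_UgyVBQiI6PGErWycEIx4AaABAg"]
print(row["responsibility"], row["emotion"])  # developer indifference
```

In practice a real batch would also need to handle malformed model output (truncated JSON, missing keys), but the happy path above matches the structure of the response shown.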