Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The hard part isn't deciding if they deserve rights or not, the hard part is det…
ytc_UghB59eFQ…
They probably already can run 90mph without a vehicle..now they have guns, smh 😲…
ytc_Ugy2nZLId…
A few problems with this. Nightshade will be useless whenever new models come ar…
ytc_UgwG5vVG2…
Sam Altman has proved he doesn’t give a **** about the ethics of AI by signing a…
ytc_Ugw7ueYEj…
The crash of the 737s had nothing to do with the autopilot being controlled by A…
ytc_UgxoCQfLR…
Most Androids or Humanoid AIs nowadays just looks like remote-controlled ones, t…
ytc_UgxJA8Y71…
This is not a new phenomenon... in fact we can see the direct results in many ot…
ytc_Ugxikqk31…
Fellow artist here.
Great video highlighting the issues! I truly think that the…
ytc_Ugxjz3vDY…
Comment
So claude and gemini and ChatGPT failed to track one simple bug in my docker compose that was just 500 lines.
I kept giving prompts after prompts after prompts but it sent me unending spiral of new bugs.
I closed the system, went for a walk, opened the codebase next day at 5AM and found the bug in 10mins.
The issue was stale images of services and dependency deadlock. Rebuilt the images, forced cleaned the docker hub and pushed again and fixed the deadlock, and it ran in first go.
I wasted 3-4 hours the previous day on that.
Pheww!! Debugging skills are important and so does the decision making but the ISSUE IS THESE HARD SKILLS ARE BUILT WITH EXPERIENCE WITH TIME. YOU CANT HAVE A TEXTBOOK. AND INDUSTRY IS TRYING TO BREAK THAT PIPELINE. The companies should hire jr, make them sign a bond of 3 years, train 6 months and they will be more loyal to them than their LLM models.
Apparently the tech ceos have lost people's management and empathy. All they want is money.
youtube
AI Jobs
2026-01-06T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugy1rqWwLe9av-Q73vh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyn2BC6tlvtIgRn7Np4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyzQ8FZ6D-amn3wWuh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQdOqIOX2a5lEoVI14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz6RUuhTSxoyuA7CAR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy1ziH6xT_guaOCYbR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtG-xWqgJo40X68ht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJyYFMCjGvjV3qFkt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw3jU4FSoxydYSvkql4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjqPXdT4tOyBzOLd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
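A raw response like the one above can be parsed and sanity-checked before its labels are stored. The sketch below is a minimal, hypothetical validator: the four dimension names come from the response shown here, but the label sets are only the values *observed* in this sample (plus the "unclear" fallback from the Coding Result table), not a confirmed vocabulary.

```python
import json

# Label sets observed in the sample response above, plus "unclear"
# (seen in the Coding Result table). The full vocabulary is an assumption.
OBSERVED_LABELS = {
    "responsibility": {"none", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and flag malformed rows."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dataset appear to carry a "ytc_" prefix.
        if not row.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, seen in OBSERVED_LABELS.items():
            if row.get(dim) not in seen:
                raise ValueError(f"{row['id']}: unknown {dim} label {row.get(dim)!r}")
    return rows

# Example: validate a single-row response.
raw = ('[{"id":"ytc_Ugy1rqWwLe9av-Q73vh4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
rows = parse_coding_response(raw)
```

Catching an unknown label at ingest time is what turns a truncated or malformed model response into an explicit error rather than silently mis-coded data.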