Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- What we need to do is not think about it, allow it to happen. Make the AI robots… (ytc_UgytXOX3u…)
- The “trick” is to use hash map. If you’ve seen this before, it pretty straight … (rdc_hik4knd)
- The AI and I set the DNA for the evolution of ,the AI." Ghandi-Tesla AI code" Sy… (ytc_UgzS3T9A1…)
- This is not a threat this is installed into robots database. Artificial intellig… (ytc_UgxBHxao1…)
- i feel you bro. I was in translation/interpreting industry. all the hard work to… (ytc_Ugwd--K61…)
- Headline: The End of the Stochastic Era "Dr. Hinton, you say we are the 'larval … (ytc_UgzycjQSH…)
- I disagree with the idea that an llm using internal text being readable gives us… (ytc_Ugwi2xjqi…)
- They tell us to conserve energy by not using air conditioning then they build AI… (ytc_UgzPWvjLi…)
Comment
This is an issue with viewing college as a gamified opportunity rather than a learning opportunity. Before GPT, cheating required some effort. You could not simply hit an endpoint and expect to generate a C-level result, so you would have to pick up some knowledge naturally just to be able to cheat (like the difference between a list and a tuple). Because of Python's popularity, ChatGPT has a large enough corpus of examples to produce something that can be printed on a dead tree and graded. This is why it is not particularly effective in languages where there is not enough data in the corpus for it to hallucinate quickly (try getting it to do something in Rust or Elixir). Unfortunately, this person attended a university that, at best, was only taught in large lectures where items had to be graded quickly, and at worst, was a literal degree mill that did not care about actual teaching.

I currently work as an independent contractor, and vibe-coded apps are now way more common than people think. Trying to fix them is a nightmare because codebases are usually a mess, and LLMs do strange things, such as rewriting functions instead of using inheritance.

I suspect that as the AI bubble winds down, a few things will happen. One, the need for engineers to refactor portions of the code bases where this has run amok in an attempt to save these portions, if possible (or we will scrap the functionality). Two, due to the cost of running LLMs (these are severely unprofitable to run, and the only one turning a profit is NVIDIA), they will be severely rate-limited and cannot be used to the same level that they currently are (which will save the industry a bit due to people actually having to rebuild programming muscle). Three, a large number of people will leave the industry because they did not enjoy the work, and it is no longer possible to coast to this level (this will be challenging, but for the best). I would be happy to discuss in the comments.
youtube
AI Jobs
2025-10-02T16:2…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz17f7aupvb-TcAryV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzX7rFsS4vmKUiqYgF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwx1iSrn6w6j_UkLuJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwO-EkNOAq9WXmmJr14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw2AigTnIaN-T9Wj3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJFtkDvYeeJKVYedl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7Qg1v5AQnLg1zeu54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-0taAt4h9fKtJpxN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwa9PKioXWYgCDT_KJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw6we1XC0gqmPYw9VR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
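The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such an array could be parsed and indexed for the "look up by comment ID" inspection described at the top of this section; the field names come directly from the response, while the allowed label sets are assumptions inferred only from the values that appear here:

```python
import json

# Two rows copied from the raw LLM response above (shortened for the example).
raw = """
[
 {"id":"ytc_Ugz17f7aupvb-TcAryV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugwx1iSrn6w6j_UkLuJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
"""

# Assumed label vocabularies, inferred from the values visible in this response;
# the actual codebook may define more labels.
RESPONSIBILITY = {"user", "distributed", "ai_itself", "none"}
REASONING = {"virtue", "deontological", "consequentialist", "mixed"}

def index_codes(payload: str) -> dict:
    """Parse the model output and build an id -> code mapping,
    skipping any row whose labels fall outside the expected sets."""
    by_id = {}
    for row in json.loads(payload):
        if row.get("responsibility") in RESPONSIBILITY and row.get("reasoning") in REASONING:
            by_id[row["id"]] = row
    return by_id

codes = index_codes(raw)
print(codes["ytc_Ugwx1iSrn6w6j_UkLuJ4AaABAg"]["emotion"])  # resignation
```

Validating labels before indexing matters here because the coder is an LLM: a malformed or off-vocabulary row is silently dropped rather than poisoning the lookup table.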