Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I was thinking yesterday about how AI might soon start learning very fast. Train…" (ytc_Ugz8xnDko…)
- "There's still a way to go, but they're getting close. They do improve skin with imperfections, and that mo…" (ytc_UgxbK_m32…)
- "@Phoenix3Fighter You're American, it's common for Americans to see themselves in …" (ytr_UgzNylQIj…)
- "Imagine if he'd killed someone after consulting chatgpt. He would still be to bl…" (ytc_UgyJ7tumL…)
- "It's been there since 2017 it replaced AI at my job. In 2023 Chat GTp could do H…" (ytc_UgzHVdNb0…)
- "@user-tq8wd6hi1jname, thank you for your comment! Robot McGregor definitely know…" (ytr_UgzdUBq-v…)
- "@simonsphinx3920 it's an AI toy. It doesn't have the capacity to predict future …" (ytr_UgxufTUQ3…)
- "That makes a lot of sense honestly, I think all you need to do is make a super t…" (ytc_UgzL2-SrM…)
Comment
AI engineer with a master's degree in AI here: though the premise of this video is roughly right, there are some big problems with what you are saying.
We have seen a huge decrease in junior positions in software development. AI can objectively outperform any software engineer in terms of quality and speed; that is backed by all major benchmarks, and anyone who has spent five minutes with Codex or Claude Code knows it. The problem is that it is often judged unfairly because of a perfectionism bias: since it is a machine, people expect it to be perfect every time, even when a human, or even a group of humans, would not do better.
A human in the loop is indeed still needed, but the tasks that human has to do have evolved massively (hence the decline in junior positions). Many of my colleagues report a drastic surge in resolved issues thanks to Claude Code: tasks that took them weeks now take hours or minutes. The same goes for your Tesla example. AI can prevent accidents, but people fixate on the one time it failed and do not realize that it outperforms humans, and that a person would make the same mistakes and more.
As for vibe coding, what you normally have is a problem with how the user frames the task. Just like a human given a task, if the prompt is too broad, the result will lack detail; but if you work in a granular, progressive manner, the results are exponentially better. And this can actually be done by AI too: an agentic approach has been shown to solve most of the issues you complain about in this video.
Platform: youtube · Video: AI Jobs · Posted: 2026-03-22T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxmipkYcG0V1B-7mAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyqDLM_eZAdE_jFI4Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy6qTjctdRjSf4WfH14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzMxYo5wgqXlo1iMdJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwlqkFh6XO2IyI3SA94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYO-T9WNL5lAqezNx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_6MJjkl9BJM6v8WV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyRncR6RNC1mEGJUn14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwuII6PsnL4NWmPfp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxRFRok6yzaxfz7uoZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
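The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a response can be parsed and indexed for lookup by comment ID (the variable names and the excerpted records are illustrative, not part of the actual pipeline):

```python
import json

# Raw model output: a JSON array of coded records (two records excerpted
# from the response above for illustration).
raw_response = '''
[
  {"id": "ytc_UgxmipkYcG0V1B-7mAh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzMxYo5wgqXlo1iMdJ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
'''

# Index the coded records by comment ID so any comment can be looked up
# in O(1), which is what the "inspect by comment ID" view relies on.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = codes_by_id["ytc_UgzMxYo5wgqXlo1iMdJ4AaABAg"]
print(record["responsibility"], record["emotion"])  # company indifference
```

Each dimension shown in the Coding Result table (Responsibility, Reasoning, Policy, Emotion) maps directly onto one key of the matching record.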