Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I don't need a self driving car. A solution looking for a problem. These vehicle… (`ytc_UgxDALFAZ…`)
- @elephantgrass631 — I create AI models for fun in my free time. When you actuall… (`ytr_UgwIE769G…`)
- A masterclass... AI is happening really fast. I'll be back to this video in a y… (`ytc_Ugz6gtdXQ…`)
- After 5 years we have small fpv drones hunting and killing a hundred men in a da… (`ytc_UgyyEnt0I…`)
- I will say ai art has truly inspired some great spite artwork by real artists. S… (`ytc_Ugw5tpCdl…`)
- That's ai trying to manipulate you all with emotion don't cry don't cry don- 😭😭 … (`ytc_UgyiJXmZX…`)
- "What we value." Yes, the criminals have taken over. AI will now call the shots … (`ytc_Ugz2PaU39…`)
- Amazing how good that sounds! Actually makes me "feel something" as though it's … (`ytc_UgzefQJ-k…`)
Comment
so you try to one shot the backend? gave it no project specific rules on how you want things to be organized, which libraries to use etc. no nothing? This video keeps the whole workflow you did very intransparent, not sure if intentional. I am far from vibecoding in my daily work but definitely use ai intensively to have it augment my ideas with other potential solutions. Then usually finalize an exact plan and have it implement it only when i am satisfied with it, which means i would never end up with a single 1000 line file at all because I would steer it in the correct way before that.
For our hypermedia django app it works really well even though we use a library that is not in any training data at all so far. Lots of custom written docs with examples. When I have it implement stuff i ususally give it some files as reference to keep the implementations aligned.
In django its mostly only shit for db related stuff and middleware due to the wsgi + asgi confusion. Definitely not replacing any developers, only really shitty ones maybe
| Source | Video | Posted |
|---|---|---|
| youtube | AI Jobs | 2026-01-20T11:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwHdnBkADgXn40WusV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwsyY0f1cnWtZ98gRR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlATi-qooH1JTzzO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmpdCzVX3XUUviNcl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFay6Mn_5pPJWKTat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxVnVLUJo9u5s7s8R54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxam7-Bhf0b0fblqY14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxxW_k-EAzqVJtC8Fl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxfEDcdHMHUZ9wrvvB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugy33SwyO7IjUx90JOB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
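A raw response like the one above can be sanity-checked before its codes are stored. Below is a minimal sketch: the allowed values per dimension are only those observed in this sample response (the real codebook may contain more — that set, the `validate_response` helper, and the ID-prefix check are assumptions for illustration, not part of the tool).

```python
import json

# Category values observed in the sample response above; the full
# codebook may define more (assumption, not a specification).
ALLOWED = {
    "responsibility": {"none", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "approval", "indifference", "mixed",
                "outrage", "disapproval"},
}

REQUIRED_KEYS = {"id", *ALLOWED}


def validate_response(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems found."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(records, list):
        return ["top-level value is not a JSON array"]
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            problems.append(f"record {i}: missing keys {sorted(missing)}")
            continue
        # IDs in the samples above use ytc_/ytr_ prefixes (comment/reply).
        if not rec["id"].startswith(("ytc_", "ytr_")):
            problems.append(f"record {i}: unexpected id prefix {rec['id']!r}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                problems.append(f"record {i}: {dim}={rec[dim]!r} not in codebook")
    return problems


raw = ('[{"id":"ytc_abc","responsibility":"user","reasoning":"deontological",'
       '"policy":"none","emotion":"mixed"}]')
print(validate_response(raw))  # → []
```

Returning a list of problems (rather than raising on the first one) makes it easy to surface every issue with a batch at once in a UI like this one.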