Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
0:00 🤨 The creator is skeptical of AI hype but aims to objectively test coding tools in 2026.
0:32 🧪 A goal is to honestly evaluate if AI tools like OpenAI's Codex agent provide meaningful benefits.
1:13 💻 Testing begins by scaffolding a greenfield BJJ gym management app with Spring Boot and React Native.
1:56 🥋 The test project is a native app for Brazilian Jiu-Jitsu gyms, a personal hobby of the creator.
2:29 ❓ The agent asks clarifying questions about security and database choices before generating code.
3:08 ✅ Initial backend scaffolding is impressive, with proper Spring Boot structure and authentication.
3:40 ⚠️ The agent doesn't truly understand code; it's a black box that guesses word combinations.
4:24 🔧 A test to add a health endpoint fails; the agent suggests suboptimal solutions instead of Spring Actuator.
5:41 ⚡ The main takeaway: using agents without deep coding knowledge can create more maintenance work.
6:08 📱 The React Native frontend scaffolding is worse, producing one massive, unorganized file.
7:02 🔄 Significant manual refactoring is required to achieve reusable components and proper structure.
7:50 ⏱️ After ~20 hours of work, a usable app is created, which wouldn't have been possible without AI in that timeframe.
8:22 🏆 The uncomfortable truth: bad, fast software is often better than perfect, slow software.
8:59 💸 The current cost model is cheap, but if prices rise, the financial viability of AI-assisted coding changes.
youtube · AI Jobs · 2026-01-25T13:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference

Coded at: 2026-04-27T06:24:59.937377
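The four coding dimensions above map naturally onto a small record type. A minimal sketch in Python, assuming these label sets (the class name `CodedComment` and the field names are illustrative; the example values are taken from the data shown on this page):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodedComment:
    """One coded comment: the four dimensions plus the coding timestamp."""
    comment_id: str      # e.g. "ytc_UgyyY5y-N_QFaQgvloZ4AaABAg"
    responsibility: str  # e.g. "none", "user", "company", "ai_itself"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "none", "regulate", "liability"
    emotion: str         # e.g. "indifference", "approval", "outrage", "resignation"
    coded_at: datetime   # e.g. datetime.fromisoformat("2026-04-27T06:24:59.937377")
```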
Raw LLM Response
[ {"id":"ytc_UgzjuolDWDfeCeLcifl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyyY5y-N_QFaQgvloZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyTYvhCheaAkdP_Dx14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy5ubYiisFHT7J4dhp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugwr7Dqewl-aPKP0FfV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx-CfyipgtvRlr8mHJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyMQ-TzNLOMPfbXSAd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_Ugz0xuMPJpAiqCTjTQ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw_6B9Lt5jSxVnbT0p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwAmozXCx0fOArs1bl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"} ]