Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Forrest’s entire argument hinges on a single, fragile axiom: "If the AI breaks, a human must be able to go in and fix the code." This is a category error regarding the evolution of abstraction layers. Forrest worries that "vibe coders" (developers relying on AI) won't understand the underlying code. This mimics the arguments made in the 1990s against garbage collection (Java/Python) or in the 1950s against compilers. The argument then was: "If you don't manage your own memory (malloc/free), you won't understand how the computer works, and when a leak happens, you'll be helpless." We accepted a minor loss in low-level control for a massive gain in cognitive bandwidth. We moved "up the stack."

AI is not a "cheating tool" for writing code; it is the next layer of abstraction. Just as a Python developer does not need to audit the assembly generated by the interpreter, the "vibe coder" of the future will not need to audit the syntax generated by the LLM. The code becomes a compilation artifact, not the source of truth. The source of truth becomes the prompt/specification.

He assumes software must remain simple enough for a human to read. This is a limiting belief. Modern neural networks are "black boxes" of floating-point numbers that no human can "read" or "audit" by eye. Yet we use them. We are entering an era of probabilistic software. Future applications will be millions of lines of code generated dynamically. Expecting a human to "read the code" to fix a bug will be as absurd as expecting a CPU designer to manually inspect billions of transistors with a magnifying glass. We will not use humans to debug AI code; we will use other AIs (unit-test agents, formal-verification agents) to audit it.

Then he argues: "If AI can build QuickBooks for free from a single prompt, why would anyone pay for software? The industry value drops to zero." This is a classic misunderstanding of Jevons Paradox and induced demand. He assumes the demand for software is finite (i.e., "We need one version of QuickBooks, and once we have it, we are done"). As the cost of software production drops to near zero, the demand for software explodes. Today, you buy QuickBooks because hiring a dev team to build "ForrestBooks" is too expensive ($50k+). In the future, if custom software costs $0.05 of compute, every individual will run hyper-customized software tailored to their exact daily needs. The "software engineer" transforms into a systems architect or product orchestrator. The value doesn't disappear; it shifts from construction (writing syntax) to definition (identifying the problem).

He is worried that seniors will lose their skills if they rely on AI. This is functionally true but teleologically irrelevant. We "lost the skill" of calculating square roots by hand. We "lost the skill" of memorizing telephone numbers. We "lost the skill" of navigating by the stars. By offloading the syntax and boilerplate lifting to AI, human intelligence is freed to focus on system design, architecture, and user intent. The "vibe coder" he mocks is actually the prototype of the high-level semantic engineer. The skill set isn't disappearing; it is migrating. The skill of the future is not "knowing React syntax"; it is "managing a swarm of AI agents to build a cohesive product."

Forrest is falling into the "horseless carriage" trap. When cars were invented, people asked, "But where do you attach the horse if the engine fails?" Forrest is asking, "But who will write the code if the AI fails?" The answer: the AI won't just write the code; it will write the tests, run the debugger, and deploy the fix. The role of the human is no longer to be the mechanic of the code, but the driver of the destination. His skepticism is grounded in transitional friction (current AI is imperfect) rather than the terminal trajectory (AI scales exponentially).

From a PhD/intellectual perspective, his video defends a dying guild (manual syntax writers) rather than envisioning the inevitable paradigm shift to software as a generative commodity.
youtube 2025-12-24T22:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwOaYclmw6hzZ9aic54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwImB_UQIWaT7aUieN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz5XKlvRBxYvjBbOKl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyf83Zqk0RZXi6jazR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzbDaqE3pquBYHMa7B4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxP7z-Nvn2yWQJ0lGN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxOkCaNsg8y2fhyyx94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwIuOVCDESs38qQz2R4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxQmh8pi6kudxygF-Z4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxdw08m8QRVwcMB8dB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
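The raw LLM response above is a JSON array of per-comment coding records, one per YouTube comment, each coded on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload could be parsed and tallied, assuming only the field names visible in the response (the two-record sample string below is an illustrative subset, not the full payload):

```python
import json
from collections import Counter

# Illustrative subset of the raw LLM response shown above.
raw = """[
 {"id":"ytc_UgwOaYclmw6hzZ9aic54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwImB_UQIWaT7aUieN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

records = json.loads(raw)

# Tally each coding dimension across all records.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(rec[dim] for rec in records) for dim in dimensions}

for dim in dimensions:
    print(dim, dict(tallies[dim]))
```

On the subset above this prints, e.g., `reasoning {'consequentialist': 2}`; run against the full ten-record response it would give the distribution of codes per dimension across all coded comments.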