Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT analysis of this video:

The big picture verdict 🧭

Genuineness: 3.5 / 10 (as a factual source)
Persuasive power: 8 / 10

This is a narrative-driven opinion piece designed to feel evidence-based, but many of its claims lack verifiable sources, misrepresent real studies, or inflate numbers beyond anything credible.

What checks out (partially true, but framed aggressively)

1. Tech layoffs (2023–2025)
Large layoffs did happen across Big Tech, and companies did explicitly cite "AI efficiency" and "realignment". However, no credible source supports the idea that layoffs were primarily due to AI replacing devs; most were driven by post-COVID overhiring plus interest-rate pressure, not AI productivity.
🟡 Conclusion: Directionally true, causality overstated.

2. "25% of Google's new code is AI-generated"
This statement does trace back to comments from Sundar Pichai, but the missing context is that "AI-assisted" ≠ "written autonomously": the figure includes autocomplete, refactors, tests, and boilerplate, and all code is still reviewed and approved by humans.
🟡 Conclusion: Technically true, rhetorically misleading.

3. Junior hiring slowdown
This is one of the stronger points in the video. Entry-level hiring has declined, firms are relying more on mid/senior engineers, and AI tools do reduce junior "training tasks". But "junior death spiral" is not an established economic term; no reputable economist uses it formally.
🟡 Conclusion: Real trend, invented label, overstated long-term impact.

What is exaggerated or unsupported 🚨

1. "80% of developers replaced by 2025"
No credible research body ever claimed this: not Stanford, not MIT, not OpenAI, Anthropic, or McKinsey. It is a strawman prophecy, exaggerating hype tweets and VC quotes into "leading researchers predicted…".
🔴 Conclusion: False.

2. "95% of GenAI pilots failed to deliver $1 of ROI"
This is one of the biggest red flags. No known MIT center published a report called "The Gen AI Divide" with this claim, and a $40B investment with a 95% failure rate would be headline news everywhere; Reuters, the FT, and The Economist have reported nothing of the sort.
🔴 Conclusion: Almost certainly fabricated or wildly distorted.

3. "61 billion workdays of technical debt"
This figure allegedly comes from Reuters, The Guardian, and a "CAS Software analysis of 10 billion lines of code". The problems: no such Reuters or Guardian article exists; CAS does publish code-quality reports, but not at this scale; and "61 billion workdays" is an eye-watering number with no stated methodology.
🔴 Conclusion: Numerology, not analysis.

4. Security claims (45% OWASP, 72% Java failure)
These are inflated beyond known studies. Real studies show that AI code can introduce vulnerabilities, but not remotely at these rates in controlled enterprise settings. Also, "Veraricode Gen AI Report" is likely a misnaming of Veracode, and Veracode has never published numbers this extreme.
🔴 Conclusion: Heavily exaggerated.

What is outright dubious / likely fictional 🧨

1. "Anti-gravity AI wiped a 2TB Google production drive"
This story has no public incident record, no CVE, no postmortem, and no Google SRE write-up. A production system being wiped by an autonomous AI without permission would be one of the biggest engineering scandals of the decade; the lack of evidence is damning.
🔴 Conclusion: Almost certainly fictional or massively embellished.

2. "$1.5B Builder AI scandal with 700 engineers in India"
There are real cases of AI-washing, but no Bloomberg investigation matches this description, no court filings support the numbers, and the scale would be impossible to hide.
🔴 Conclusion: Inspired by real stories, but not real itself.

Why the video feels convincing (and why that's dangerous)

The creator uses real brand names (Reuters, Stanford University, MIT), the correct types of problems (tech debt, maintainability, junior hiring), and confident delivery with cinematic pacing. But the sources are vague ("research shows", "reports indicate"), the numbers are extreme, and counter-evidence is ignored. This is AI-skepticism content, not analysis.

The real, grounded truth (as of 2026)

Here's the boring but accurate reality:
❌ AI did not replace developers
❌ AI did not massively boost productivity either
✅ AI shifted where time is spent
✅ Senior engineers matter more, not less
✅ Poorly used AI can increase tech debt
❌ There is no global collapse of software engineering

As a Java architect, you already know this instinctively: architecture, context, accountability, and tradeoffs are still human problems.
youtube AI Jobs 2026-02-09T01:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyv4GD-RmuZhYLHQlV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzIW3jh1_sMHuTbyG94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwwGnLKkffjwzcr7J94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy-e4E4mS1JxBue21N4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzOAFx3tdQ2S25uKLF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwDeLOK5W_0lyIHG-94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyqJqlDLrOzBwYif5B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwbVhdQlLqbSVzu7K54AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyhBAzZsMwwrBoNCSV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSjdSCc24i96ez-hZ4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "liability", "emotion": "approval"}
]
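The raw response above is a JSON array with one coded record per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such output might be validated and tallied (the record ids here are shortened placeholders, and the aggregation logic is an illustrative assumption, not the coding tool's actual pipeline):

```python
import json
from collections import Counter

# Two placeholder records (ids shortened); the real export has one such
# object per YouTube comment, with the same four coding dimensions.
raw = """[
  {"id": "ytc_example1", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "company", "reasoning": "contractualist", "policy": "liability", "emotion": "approval"}
]"""

# Field names taken directly from the raw response above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def tally(records, dimension):
    """Count how often each value of one coding dimension occurs."""
    return Counter(r[dimension] for r in records)

records = json.loads(raw)
for r in records:
    # Every coded comment must carry all four dimensions plus its id.
    missing = REQUIRED_FIELDS - r.keys()
    if missing:
        raise ValueError(f"record {r.get('id')} is missing {missing}")

print(tally(records, "emotion"))  # Counter({'outrage': 1, 'approval': 1})
```

Validating fields before tallying matters here because LLM-produced JSON can silently drop or rename keys, which would otherwise surface as a confusing KeyError mid-aggregation.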