Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In this interview, they talk about AGI arriving in 2027 and 99% unemployment by 2030 as if it were almost inevitable. They claim that almost every job done in front of a computer will disappear, and that even programming or prompt engineering won’t be safe.

However, in April 2026 the technical reality is very different: Neither Gemini from Google nor any other current model can take the full Inkscape codebase and add proper professional CMYK support. One single human developer (Martin Owens) has been working on this alone for years, step by step. I tried to build a simple web app in Google Apps Script that connects to Sheets, and not even Google’s own AI (Gemini) could make it work without massive frustration. If AI is supposedly just months or a few years away from surpassing humans in all cognitive domains, why does it still fail so badly at tasks that an experienced human engineer handles without any problem?

AI is excellent at simulating patterns and generating code that looks correct, but it still fails miserably at real context, professional judgment, trade-offs, and long-term responsibility. This extreme hype about AI capabilities combined with extinction-level alarmism is causing real psychological damage: anxiety, depression, insomnia, and a sense of uselessness in many people who now believe they’re already obsolete.

And one honest question: Isn’t it possible that many companies are simply using all this AI hype as an excuse to lay off workers, increase the workload on those who remain, and cut costs — while conveniently blaming it all on “the technology”?

Finally, a question for the interviewer: You pinned the comment “What do you think about AI safety?”. Why don’t interviews like this include hard, concrete questions about the actual capabilities of today’s AI? Is it because if they did, the interview would end instantly?
youtube · AI Governance · 2026-04-13T02:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
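
To sanity-check rows like this one programmatically, one option is to validate each coded record against the label vocabularies that actually occur in the raw response below. A minimal sketch; the ALLOWED sets are inferred only from labels visible in this batch and are an assumption, not the project's full codebook:

```python
# Allowed labels per dimension, inferred from the values visible in this
# batch (an assumption, not the complete codebook).
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate", "ban", "industry_self"},
    "emotion": {"outrage", "fear", "indifference", "mixed",
                "approval", "resignation"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the known vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]
```

A record for which invalid_dimensions returns a non-empty list was either mis-coded by the model or uses a label this batch has not yet surfaced, so it is worth inspecting by hand.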
Raw LLM Response
[ {"id":"ytc_UgxC82VOVjLEBr2TDNp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgysYFSMIv3QwDPrsV54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugzxw5UDGB2sFBtJ_Fp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugxc5LFAys4rIkAjmxN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxXSqQRLgCT94n_GyF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgziL_jFkiKVCSCECUZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugw6ehQZ8qiqYL6dI954AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwnwB40KDT3XMPmyUR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy882UNNiZcujcVVRl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwsZ64HpYVONiMKg554AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"} ]