Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGpt: Roman Yampolskiy (the University of Louisville computer scientist who writes a lot about AI safety) is deliberately provocative when he talks about timelines. His view sits on the very short end of the spectrum. A couple of things to keep in mind about why he says “just a few years” and ties that to massive unemployment:

Acceleration argument ⚡ He points to how fast AI capabilities have jumped in just the past few years (from GPT-2 to GPT-4, from clunky image generators to photorealistic video). If progress continues exponentially, then a system smarter than humans in most domains could appear sooner rather than later.

Task automation vs. job automation 👷 He often warns that once AI can perform all economically valuable tasks more cheaply and reliably than humans, employment as we know it collapses. That’s where his “99% unemployment” projection comes from — not because every job instantly vanishes, but because AI can outcompete humans at scale.

Safety and control worries 🚨 Yampolskiy is one of the people who stress that superintelligence isn’t just about convenience or efficiency; it’s about whether we can keep control once we build something smarter than us.

Of course, not all experts agree with him. Many argue that:
- Technology doesn’t diffuse instantly — adoption is gradual, and industries adapt.
- New jobs and roles emerge even as old ones disappear.
- Predicting “years away” has been a recurring theme in AI for decades, and the field is littered with over-optimistic forecasts.

So Yampolskiy is basically ringing the alarm bell — “if this is coming quickly, society isn’t ready.” Whether you take his timeline literally or more as a call to prepare depends on how you read the speed of current breakthroughs.
Source: youtube · AI Governance · 2025-09-09T07:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwHAWWt603wMsz49P14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxKCcgwg0crUzvPf9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwnhYx7T9SB5nO9jIx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwGPOgrhad2JKd0WxB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxuvj5CHm9sKFN1UPd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxHUqdnVdX-m4vffYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz3cBBudNf9kK9lgPd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyfOx7V3EgteZJJax54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwJq5mRNcIkLHrZoZd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwLRxvOdNAa82UVd2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]