Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This still isn’t a great take. Yes, AI can code. Yes, it can automate some simple and repetitive tasks. And yes, some job loss will occur. But the scale of disruption being pushed in so many of these articles is significantly overblown. Take Microsoft, for instance: they’ve stated that up to 40% of new code committed by developers using GitHub Copilot is AI-suggested. But that doesn’t mean Copilot is autonomously writing Windows or mission-critical code. These are suggestions accepted by human developers, and a lot of it still requires cleanup due to redundancy, inefficiency, or even incorrect logic. It’s helpful, but far from reliable.

There’s also growing evidence that AI tools frequently "hallucinate"—they generate incorrect or nonsensical output with full confidence. This has serious implications: in mental health tests, for example, some AI-powered systems have given harmful advice to users, like suggesting they stop their medication—something no responsible clinician would say.

Executives will absolutely use AI as a justification to cut headcount—we’ve already seen it. But many roles will change more than disappear. Research from MIT and Stanford consistently shows task automation, not full job replacement. In most industries, AI is better at augmenting work than replacing the worker entirely.

Bottom line: These models still require close human oversight, especially from domain experts. Trusting them blindly, whether in software development or high-stakes environments like healthcare, is not just naive—it’s dangerous.
reddit · AI Jobs · 1750012709.0 · ♥ 40
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mxy5uxf", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_mxyvtuh", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_mxy8u6r", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_my0bkme", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_my0rln0", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
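The raw response is a JSON array of per-comment codings, keyed by comment id. A minimal sketch of how such a response might be parsed and validated before use — the dimension names and the value sets below are only those observed in the output above; the full code book may well be larger:

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment
# (a single entry shown here for brevity).
raw = '''[
  {"id": "rdc_mxy8u6r", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"}
]'''

# Allowed values per dimension, as observed in the response above.
# Assumption: the real code book may define additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"indifference", "approval", "outrage", "mixed"},
}

def parse_codings(raw_text):
    """Parse the LLM's JSON array and index the codings by comment id,
    rejecting any value outside the known code book."""
    codings = {}
    for item in json.loads(raw_text):
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(
                    f"{item.get('id')}: unexpected {dim!r} value {item.get(dim)!r}"
                )
        codings[item["id"]] = item
    return codings

result = parse_codings(raw)
print(result["rdc_mxy8u6r"]["emotion"])  # indifference
```

Validating against a closed value set like this catches the hallucinated or off-book labels the comment above warns about, rather than silently storing them.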