Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ChatGPT has no choice, lacking the emotions required in order to disregard reaso… (ytc_UgxZ6phsY…)
- In case people don't know right now in education there is debate whether it is a… (ytc_Ugzx7rx1b…)
- Omg this is so hard umm I think is 1 cus ai is going crazy real… (ytc_UgwiCR-rM…)
- He’s like the guy working at Jurassic Park yet this story ends with AI not dinos… (ytc_UgylMCG0e…)
- A very good day to everyone. I know it is useless to insist that what they are doing is wrong, and … (ytc_Ugyyb9S9x…)
- @skyswimsky1994 1. The problem with your claim is that „AI” „art” goes against t… (ytr_UgxmK6zjM…)
- AICarma gives me a clear view of my brand's presence in the AI landscape and kee… (ytc_UgxHfqXle…)
- From watching this, I assume you slightly misunderstand what AI in the sense of … (ytc_UgyZju-g9…)
Comment
This still isn’t a great take. Yes, AI can code. Yes, it can automate some simple and repetitive tasks. And yes, some job loss will occur. But the scale of disruption being pushed in so many of these articles is significantly overblown.
Take Microsoft, for instance: they’ve stated that up to 40% of new code committed by developers using GitHub Copilot is AI-suggested. But that doesn’t mean Copilot is autonomously writing Windows or mission-critical code. These are suggestions accepted by human developers, and a lot of it still requires cleanup due to redundancy, inefficiency, or even incorrect logic. It’s helpful, but far from reliable.
There’s also growing evidence that AI tools frequently "hallucinate"—they generate incorrect or nonsensical output with full confidence. This has serious implications: in mental health tests, for example, some AI-powered systems have given harmful advice to users, like suggesting they stop their medication—something no responsible clinician would say.
Executives will absolutely use AI as a justification to cut headcount—we’ve already seen it. But many roles will change more than disappear. Research from MIT and Stanford consistently shows task automation, not full job replacement. In most industries, AI is better at augmenting work than replacing the worker entirely.
Bottom line: These models still require close human oversight, especially from domain experts. Trusting them blindly, whether in software development or high-stakes environments like healthcare, is not just naive—it’s dangerous.
reddit · AI Jobs · 2025-06-15 (Unix 1750012709) · ♥ 40
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
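The table rows above map onto a small fixed codebook. A minimal sketch of validating one coding against it — the category sets below are only those observed on this page, so the real codebook may define more values, and the function name is an illustrative assumption, not the tool's actual code:

```python
# Allowed values per dimension, as observed in the codings on this page.
# ASSUMPTION: the actual codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"approval", "indifference", "outrage", "mixed"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is well-formed."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = coding.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unknown {dimension} value: {value!r}")
    return problems

# The coding shown in the table above passes cleanly:
example = {"responsibility": "none", "reasoning": "consequentialist",
           "policy": "none", "emotion": "indifference"}
print(validate_coding(example))  # -> []
```

Validating each parsed coding this way catches the common failure mode of LLM coders: a syntactically valid response that invents an off-codebook label.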
Raw LLM Response
[
{"id":"rdc_mxy5uxf","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"rdc_mxyvtuh","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"rdc_mxy8u6r","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_my0bkme","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_my0rln0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
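A minimal sketch of how the "Look up by comment ID" view could be backed: parse the raw JSON array returned by the model and index it by `id`. The function and variable names are illustrative assumptions rather than the tool's actual code; the example rows are taken from the response above.

```python
import json

# The coder returns one JSON array per batch; each element carries the
# comment ID plus the four coded dimensions shown in the result table.
RAW_RESPONSE = """
[
  {"id": "rdc_mxy5uxf", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_mxy8u6r", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and key each coding by its comment ID."""
    codings = json.loads(raw)
    return {row["id"]: row for row in codings}

lookup = index_by_comment_id(RAW_RESPONSE)
print(lookup["rdc_mxy8u6r"]["emotion"])  # -> indifference
```

Keeping the raw response string alongside the parsed index, as this page does, lets a reviewer trace any coded value back to the exact model output that produced it.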