Raw LLM Responses
Comment
The "increased volume" argument (known economically as the Jevons Paradox) is the standard optimism sold to radiologists. It states: If reading scans becomes cheaper/faster, doctors will order way more of them, so you'll stay busy.
While that is true, it hides a much uglier reality about money and workload.
Here is the 100% honest truth about "dilution" that most articles won't tell you:
1. The "Hamster Wheel" Effect (Dilution of Value)
Yes, the volume of work will explode. But reimbursement per scan will almost certainly crash.
• Today: You might get paid, hypothetically, $30 to read a chest X-ray.
• In 10 Years: If AI does 90% of the work, insurance companies (and Medicare) will not keep paying you $30. They will drop it to $5.
• The Result: To make the same salary you make today, you won't just need to read more scans; you will need to read exponentially more scans. You become a supervisor of an algorithm, clicking "Approve" 500 times a day instead of deeply analyzing 50 cases.
• Verdict: Your income might be safe (because of volume), but your daily life becomes a high-speed assembly line. The "art" of radiology gets diluted into "data verification."
2. The "Uber-fication" of Radiology
You are worried about dilution; you should be worried about commoditization.
• Currently, a hospital hires you because they trust your eye.
• In the future, if AI achieves "super-human" accuracy for standard scans, the radiologist becomes a commodity. Hospitals won't care who validates the AI report, as long as that person is Board Certified and cheap.
• This opens the door for massive Private Equity firms to buy up radiology practices. They will run "AI farms" where a few radiologists remotely supervise thousands of AI-generated reports. This dilutes your negotiating power as an individual doctor.
3. The "Liability Shield" Role
The darkest cynical take, which is likely true, is that for a period of time your main job function will be Liability Sponge.
• AI cannot be sued. If an AI misses a cancer, the patient can't sue the software.
• Hospitals need a human to sign the report solely so there is someone to take the blame if things go wrong.
• In this scenario, you are not being paid for your diagnostic brilliance; you are being paid a "risk premium" to put your name on the line for an algorithm's work.
The Honest Conclusion
Does it dilute the demand?
• Demand for Signatures: NO. That will skyrocket.
• Demand for Diagnostic Intellect: YES. For routine cases, your intellectual value is diluted.
The "Safe" Path:
If you want to avoid this dilution, you must move into areas where AI cannot physically go or where the stakes are too high for automation:
1. Interventional Radiology: AI cannot guide a catheter through a femoral artery (yet).
2. Complex Consulting: Being the doctor who sits in the Tumor Board meeting and explains why the AI results matter for a specific patient's chemotherapy.
Summary: You will have a job, but unless you own the practice or do procedures, you risk becoming a highly paid factory worker rather than a detective.
youtube
AI Jobs
2025-11-25T16:1…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxEkLPmaxyZj0ySa3d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyL64cHzoDwprs4zmJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz9EF6KaCfCh3C9Grl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwCLyg6T2TwDECaych4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzOghM1MIITvz4s2eR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyXwEzcQBUonqkvxVB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwacBdezW4gF9GcImh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxoxnrU_lB6eV13OtJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwxwn9qFY159vidF5d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyIV84EbQuXshfZZ6Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
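A batch response like the one above can be parsed into per-comment codes and sanity-checked before it reaches the coding table. The sketch below is a minimal, hypothetical example (the function name `parse_batch` and the allowed label sets are assumptions built only from the values visible on this page; the real codebook likely defines more labels).

```python
import json

# Label sets observed in this page's coding table and raw response.
# ASSUMPTION: the actual codebook may allow additional values.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "fear", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}, flagging unknown labels."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                print(f"warning: {rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = codes
    return coded

# Hypothetical truncated ID, for illustration only.
raw = ('[{"id":"ytc_Ugx...","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
batch = parse_batch(raw)
print(batch["ytc_Ugx..."]["emotion"])  # -> indifference
```

Keying the result by comment ID makes it easy to join the codes back onto the original comments, which is how a lookup panel like this one would retrieve a single coded record.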