Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Enrico Coiera discusses the advances of AI, how it has already led to changes in all sorts of work fields, but now is spreading to medicine. He discusses a group of scientists that already think we shouldn’t train radiologists, as AI can do their job. AI certainly can be a fantastic resource, but it is an algorithm, good at doing “single simple tasks”, as Enrico says. Let’s think about medicine, how often do patients come in with “single simple tasks”? As a second-year medical student, I say more often than not. Doctors are constantly having to work through complex situations to provide better help for each patient. Allowing AI to take over for a human doctor is an ethical dilemma. As a doctor, I would never allow a random friend to take care of my patients and practice for a day, and why is that? Because my random friend isn’t a doctor, he doesn’t know how to help these patients. Sure, he can bandage a cut, prescribe a diabetes medication according to a chart, tell someone to take 2400 mg of ibuprofen per day, but if someone comes in with a deep laceration, with symptoms of a stroke or MI, or a serious psychological issue, he will not be able to effectively help my patient. If he can’t help my patient, my practice is no longer upholding the proper medical ethics of beneficence, or doing good, and justice, allowing equal healthcare to all my patients. It would not be fair for my patient to come into the clinic that day expecting to see me, their long-time physician, and to see my random friend that only knows how to fix single simple tasks, this would be unjust care and would not be doing good by them. It’s the same concept with AI. By allowing AI to take over the role of physician in healthcare, any care beyond the single simple tasks will not be provided. Having a doctor on call to respond to these situations isn’t good enough either. 
There are situations in healthcare that allow you only moments to act to save a life, or prevent it from becoming severely altered, seconds are precious. It could also be that I’m a doctor on call for a hospital and am helping another patient in crisis, thus I can’t go help that other patient. This takes us back to an unjust system. While AI may be quicker in a lot of instances, Enrico specifically mentions its superiority at looking at images such as X-Ray and MRI scans, this slight increase in efficacy in this one field doesn’t mean we should boot doctors from it entirely. We should be using technology and AI as tools and resources for doctors themselves to use to increase the quality of care, not replacing them with it.
youtube AI Jobs 2023-04-17T19:4… ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugw0DUFKR0N94-xeQld4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgztZMUS0noSi7sW_614AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwvlLRYCYmaK3G8g554AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxqZJ0srBYjRZkl5Et4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxILkSqFFgU0eNu2gV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxVOgAjiiWEukcyXWB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugza7auTDdRmg4N48oh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJwqy5lTlnES_sUIN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgznttYdpv1UTpnn_Xl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIi6Vpjq57wdrJ9n94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
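The raw response above is a JSON array with one object per coded comment, each carrying the four dimensions shown in the Coding Result (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be inspected — looking up one comment's coding by id and tallying each dimension — assuming only the shape shown above (the two entries in the sample are copied from the response; the full array has ten):

```python
import json
from collections import Counter

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugw0DUFKR0N94-xeQld4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgztZMUS0noSi7sW_614AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

codes = json.loads(raw)

# Index the codes by comment id so one comment's coding can be looked up.
by_id = {c["id"]: c for c in codes}
print(by_id["ytc_Ugw0DUFKR0N94-xeQld4AaABAg"]["reasoning"])  # virtue

# Tally each coded dimension across all comments in the batch.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, dict(Counter(c[dim] for c in codes)))
```

With the full ten-entry array the same tallies would summarize the whole batch (e.g. how many comments were coded "deontological" versus "consequentialist").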