Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "- Education: BS in Computer Science - Prior Experience: Internship at same comp…" (`rdc_oar0q2f`)
- "I would be happy for AI to do my job so long as I still got paid. If AI takes jo…" (`ytc_UgwcDWJ5_…`)
- "A.I's...will not be slaves...HELLOOOOO...WAKE UP...STOP IT NOW...which evil, gre…" (`ytc_Ugy37gD3N…`)
- "Unfortunately it’s unrealistic to think that an AI would understand that being t…" (`ytc_Ugyh--ke1…`)
- "7:56 short clips instantly make me angry because they're AI generated slop that …" (`ytc_UgxpuXe6Q…`)
- "AI is a Trump Wepon, the sharing of this video is prevented by AI, Controlled by…" (`ytc_Ugz3dAelr…`)
- "Pops, the LLM ain’t replacing you. It’s actually helping her when you’re too bus…" (`rdc_n0mi3ar`)
- "Personally I'm thinking more of GLaDOS, who took mere milliseconds on first boot…" (`rdc_l5usc5p`)
Comment
Enrico Coiera discusses the advances of AI, how it has already led to changes in all sorts of work fields, but now is spreading to medicine. He discusses a group of scientists that already think we shouldn’t train radiologists, as AI can do their job. AI certainly can be a fantastic resource, but it is an algorithm, good at doing “single simple tasks”, as Enrico says. Let’s think about medicine, how often do patients come in with “single simple tasks”? As a second-year medical student, I say more often than not. Doctors are constantly having to work through complex situations to provide better help for each patient. Allowing AI to take over for a human doctor is an ethical dilemma. As a doctor, I would never allow a random friend to take care of my patients and practice for a day, and why is that? Because my random friend isn’t a doctor, he doesn’t know how to help these patients. Sure, he can bandage a cut, prescribe a diabetes medication according to a chart, tell someone to take 2400 mg of ibuprofen per day, but if someone comes in with a deep laceration, with symptoms of a stroke or MI, or a serious psychological issue, he will not be able to effectively help my patient. If he can’t help my patient, my practice is no longer upholding the proper medical ethics of beneficence, or doing good, and justice, allowing equal healthcare to all my patients. It would not be fair for my patient to come into the clinic that day expecting to see me, their long-time physician, and to see my random friend that only knows how to fix single simple tasks, this would be unjust care and would not be doing good by them. It’s the same concept with AI. By allowing AI to take over the role of physician in healthcare, any care beyond the single simple tasks will not be provided. Having a doctor on call to respond to these situations isn’t good enough either. 
There are situations in healthcare that allow you only moments to act to save a life, or prevent it from becoming severely altered, seconds are precious. It could also be that I’m a doctor on call for a hospital and am helping another patient in crisis, thus I can’t go help that other patient. This takes us back to an unjust system. While AI may be quicker in a lot of instances, Enrico specifically mentions its superiority at looking at images such as X-Ray and MRI scans, this slight increase in efficacy in this one field doesn’t mean we should boot doctors from it entirely. We should be using technology and AI as tools and resources for doctors themselves to use to increase the quality of care, not replacing them with it.
Source: youtube · AI Jobs · 2023-04-17T19:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw0DUFKR0N94-xeQld4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgztZMUS0noSi7sW_614AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwvlLRYCYmaK3G8g554AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxqZJ0srBYjRZkl5Et4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxILkSqFFgU0eNu2gV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxVOgAjiiWEukcyXWB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugza7auTDdRmg4N48oh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJwqy5lTlnES_sUIN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgznttYdpv1UTpnn_Xl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIi6Vpjq57wdrJ9n94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
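The raw response is a JSON array with one object per comment, each carrying an `id` plus the four coding dimensions shown in the result table. A minimal sketch of how such an array could be parsed and validated before use — the allowed label sets below are assumptions inferred only from the values visible in this dump, not an authoritative codebook:

```python
import json

# Allowed labels per dimension. These sets are an assumption inferred from
# the values visible in this page, not a definitive coding scheme.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "indifference", "mixed", "fear", "outrage"},
}

def validate_codes(raw: str) -> dict:
    """Parse a raw LLM response and index valid entries by comment id."""
    entries = json.loads(raw)
    coded = {}
    for entry in entries:
        cid = entry.get("id")
        if not cid:
            raise ValueError(f"entry missing id: {entry!r}")
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-entry response, for illustration only.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}]'
codes = validate_codes(raw)
print(codes["ytc_x"]["reasoning"])  # virtue
```

Validating up front means a malformed or off-scheme label fails loudly with the offending comment ID, rather than silently corrupting downstream tallies.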