Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do believe that the actual stretch to which AI can help the healthcare system may be taken too far. I can understand the importance of using AI to consolidate data, having large amounts of information ready to go when needing to refer to treatment options and such. I can see how this saves time and resources and saves us from error at times. I can also understand the importance of wanting to be as efficient as possible in many situations in medicine.

But how well does AI understand the risks and benefits for each patient? How well does AI truly follow beneficence for each individual patient? AI can't necessarily understand the emotional or mental toll certain treatments can have on a patient outside of the typically stated adverse reactions. A major problem arises when the patient does not follow the standard of care, when the patient does not respond the way many others have to treatments, procedures, medicine, etc. Dr. Saidy states that AI can even learn from these patients who did not follow the treatment and can help come up with next steps. But this is all still an algorithm backed up by some data. Do we know if that data is recent? Do we know if it follows a trend and is generalizable to other places? Do we know who collected this data? These are all questions we as healthcare professionals need to think about.

Think about a medication change: we could easily train a robot to know DDIs and which medications can be mixed with one another, but what happens when a patient has an allergic reaction to a new medication and needs to replace it with something else? Further yet, what if the medication used to replace the one that caused an allergic reaction would require two medication changes if the new medicine did not work with all their existing meds? Here is where we may end up spending more money or time than we thought we saved with AI.
And we could have solved the allergy and/or reactions faster if a human doctor had been around to supervise, or to think to grab LFTs or genetic screening for patients with different metabolizing abilities. When we have to pick up the pieces AI left behind because of its lack of critical thinking, we are taking two steps back. We have to preserve beneficence, and all the thought processes and considerations that surround doing what is best for the patient.

I will say, however, that there are great ways to use AI, and there should be more information on specific uses, such as using it for locating the primary site of a cancer. I think there is a balance between allowing AI to take over an entire patient versus allowing AI to aid us with information we cannot see or feel with human sight or touch. But when we consider places such as an ER, where decisions need to be made quickly, is there a potential for doctors to rely on this information too much since they need to work quickly on their feet?

Lastly, Dr. Saidy is aware of data bias and how it could skew the information depending on a patient's background. If we want to do what is best for the patient, however, tools to ensure bias does not occur are extremely important, and manufacturers should consider perfecting these tools prior to using AI on patients and potentially having the AI misdiagnose. In the case of misdiagnosis in particular, AI could potentially break the code of non-maleficence. If a patient is misdiagnosed, chances are their treatment is incorrect for their diagnosis, in which case we could be causing harm to the patient without knowing it. This is where, again, AI needs to be used as a backup tool, not the lead tool.
YouTube | AI Harm Incident | 2023-04-25T23:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgwQa2KRDZ4Mwe_ZHkt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwFV8G5vyaDbESHMKF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugw5PtI5NCEF0WeEzSh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxKlPTnEGikqciXn7t4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgyEhveMTG8Rg4qdIR14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgyVx3o3lUJ9vHMhsn14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugwh87n708wfHwujN6d4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwkNUY-GL_obxIDFGt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyotEls0AVQE1k32BJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzNkM5rxfZHe7iz9cF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"resignation"}]
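A raw response in this shape (a JSON array of per-comment records with responsibility, reasoning, policy, and emotion fields) can be sanity-checked by parsing it and tallying each coding dimension. A minimal Python sketch; the two-record excerpt below is an illustrative assumption standing in for the full array, and the response is assumed to have been repaired into valid JSON first:

```python
import json
from collections import Counter

# Illustrative two-record excerpt of a raw coding response (valid JSON).
raw = '''[
 {"id":"ytc_UgwQa2KRDZ4Mwe_ZHkt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyEhveMTG8Rg4qdIR14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]'''

records = json.loads(raw)

# Tally the value distribution of each coding dimension across records.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(r[dim] for r in records) for dim in dimensions}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

A tally like this makes it easy to spot how often the model falls back to "unclear" on a given dimension, which is what the coding-result table above reports for this comment.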