Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is no doubt that advances in technology have improved healthcare tremendously over the years, and AI is no different. AI has already been shown to improve healthcare through better patient outcomes, personalized medicine, and better access via its many tools. AI can aid healthcare providers in making highly informed decisions about patients' diagnoses and treatment options. For example, cancer is complicated and different for each patient and each specific type of cancer. AI can use data from the patient and other similar patients to streamline resources and give the best possible predictions. The dark side to this and many other technologies is: where is the line in the sand? What are the rules and boundaries of this new technology? How do we prevent it from being used to harm patients instead of its intended good? Who or what governing body is going to decide what is okay and what is not? Can the AI develop biases over time that would negatively impact care? Who is legally and clinically responsible for healthcare errors when it comes to misdiagnosis, subpar treatment, or even death? I think AI shows a lot of promise as a new tool for people today, but I think there needs to be an organization in healthcare that, as objectively as possible, assesses the pros and cons, the boundaries and limitations, and how it is most appropriately used in this setting. Through this lens and organization, AI can be a great tool for physicians and other healthcare workers to do good by their patients: to provide creative problem solving for their unique clinical situation.
YouTube AI Harm Incident 2023-04-25T02:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgzLZDICQoncahhls0F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz_1JJeK8TzMzkjy6t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwEGDKxxu1yWrFQVnd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgyLP9muwFMbN2nQu2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyXhvWiHUq0OTWc-0N4AaABAg","responsibility":"clinicians","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
 {"id":"ytc_UgzSPzDcK6PFdJ3Oojl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy_HBzp_P0JVbolLNV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwMhwgSlKwTbgBuVrZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzQJf9HJVirqehJ_IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyYEZm5B8_kno6PlCB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})
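Note that the raw response above closes with `)` rather than `]`, so it is not valid JSON as written; a strict parser would reject the whole batch, which plausibly explains why every dimension in the coding result reads "unclear". A minimal sketch of a tolerant parser (the function name, repair strategy, and fallback record are assumptions, not the tool's actual implementation):

```python
import json

# Dimensions assumed from the coding-result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding_response(raw: str) -> list[dict]:
    """Parse an LLM coding response, repairing a stray trailing ')'.

    Falls back to a single all-"unclear" record when the text
    cannot be recovered as JSON.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    repaired = raw.rstrip()
    if repaired.endswith(")"):
        # Common LLM slip: array closed with ')' instead of ']'.
        repaired = repaired[:-1] + "]"
    try:
        return json.loads(repaired)
    except json.JSONDecodeError:
        # Unrecoverable: default every dimension to "unclear".
        return [{dim: "unclear" for dim in DIMENSIONS}]


# Tiny illustration with a hypothetical truncated response:
rows = parse_coding_response('[{"id":"ytc_x","responsibility":"none"})')
print(rows[0]["responsibility"])  # "none" after the ')' -> ']' repair
```

With a repair step like this, the ten records above would parse cleanly and the coded dimensions could be filled in instead of defaulting to "unclear".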