Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgwECqJdp…`: The most amusing thing is watching everyone on LinkedIn try to jump ship to AI r…
- `ytr_UgyqoIHJW…`: I personally only use ai to vent bc therapy is too expensive in this day and age…
- `ytc_Ugx0NAbI5…`: The Med School Interview AI Course combines comprehensive guidance with an advan…
- `ytc_UgyQIojN5…`: The only AI I want is in video games, because drop-shotting a bot is kinda fun.…
- `ytc_UgwhwYa1m…`: I haven't watched the video and as much as I appreciate that people like Stuart …
- `ytc_Ugz-B-FMJ…`: Chatgpt: "in theory there is an alternate universe, where you already clapped, s…
- `ytc_UgwOHQ2Gv…`: i feel like most people dont understand what AI can and can't do. there has been…
- `ytc_Ugy6RYdhL…`: AI by programming definition is a machine capable of predicting what will happen…
Comment
The speaker did an excellent job of speaking on how artificial intelligence could be a potential game changer for more efficient healthcare in the future, especially in circumstances where healthcare providers struggle to create a treatment plan due to the lack of a definitive diagnosis. He uses cancer-related problems to discuss how AI can help better diagnose which area should be treated with chemotherapy by acquiring blood samples, diagnostic imaging, and other tests, and uploading these components into a system that would then generate a proper diagnosis, treatment, and management plan.

While I agree that technological advances have drastically changed the way we are able to function in the healthcare field, as well as the advanced ways in which we are able to provide better healthcare to those in need, I think it is important to state that AI should only be used as an adjunct and never a replacement. Navid Saidy explains the limitations of using AI, including that, at the current state, depending on how representative the pool of patients is, there is a high chance of bias in the data set provided.

I believe the biggest concern with some hospitals having AI at the forefront while others do not is the issue of justice. Justice, in the context of ethical healthcare, is the principle that forces us to ask whether something is fair and balanced for the patient. If we look at an individual patient, I believe justice is taken care of. However, would AI at certain locations mean the stratification of goods and services provided by certain hospitals would shift even further toward more affluent areas? There is already a clear disparity in resources and quality of care depending on whether one is at an inner-city hospital versus a privately owned corporation. Of course, these issues will be here whether there is AI or not, but is it just to further widen the gap in the quality of healthcare provided?

Will AI cause those who are in dire need but uninsured to suffer even more? Will AI create a larger monopoly in the healthcare world and make quality of care an even more "elite" privilege rather than a basic human right? These are thoughts that came to my mind, and I would love to hear any responses. I do agree that we should look at how much good a system like this will bring before looking at the bad, but in today's post-pandemic world, it is hard not to wonder how things could be negatively impacted, if at all.
youtube · AI Harm Incident · 2023-04-24T00:5… · ♥ 34
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzLZDICQoncahhls0F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_1JJeK8TzMzkjy6t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEGDKxxu1yWrFQVnd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyLP9muwFMbN2nQu2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXhvWiHUq0OTWc-0N4AaABAg","responsibility":"clinicians","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzSPzDcK6PFdJ3Oojl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy_HBzp_P0JVbolLNV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMhwgSlKwTbgBuVrZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQJf9HJVirqehJ_IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyYEZm5B8_kno6PlCB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
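Since the raw response is a JSON array with one object per comment, looking up a single comment's codes by ID is a one-step index. A minimal sketch in Python; the IDs and rows below are illustrative stand-ins, not the truncated IDs shown above:

```python
import json

# A shortened sample in the same shape as the raw LLM response above.
# The IDs here are hypothetical placeholders.
raw = '''[
  {"id": "ytc_AAA", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_BBB", "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]'''

# Index the coded rows by comment ID so one comment can be looked up directly.
coded = {row["id"]: row for row in json.loads(raw)}

print(coded["ytc_BBB"]["policy"])   # regulate
```

If the model emits malformed JSON (e.g. a stray closing character), `json.loads` raises `json.JSONDecodeError`, which is a reasonable signal to flag that comment's codes as "unclear" rather than guess.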