Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below to inspect it.
- "In a world where money often drives decisions and progress, how can we still fin…" (ytc_Ugw7OOHgl…)
- "One of the areas we’re seeing this “Automation 3.0”, as described in the article…" (rdc_glhxnqf)
- "It would be nice to understand how close we really are to seeing these kind of s…" (ytc_Ugw9SALxp…)
- "well Gregg Braden doesn't think so our consciousness is in a field around up but…" (ytc_Ugx3mTha2…)
- "I fucked around and pretended too hard, now I'm part of some "AI leader" initiat…" (rdc_o8gwgii)
- "Good video, I definitely learned some new things. I will say this though, respec…" (ytc_UgyNipVgG…)
- "they can use a drone to send the pizza to your door, they can send a note to you…" (ytc_Ugg17gv7x…)
- "While artificial intelligence may not possess wisdom in the traditional sense, i…" (ytr_UgylI7wRG…)
Comment
AI shows an immense promise in the field of healthcare. At surface level, it could vastly improve health outcomes by ensuring that the most specific treatment is presented to the patient. It allows the provider to synthesize all pieces of information and be given a patient-specific plan that would, in theory, provide the best possible care for that patient. There are inherent issues with the implication of AI in healthcare specifically, though. Diminished privacy, social determinants of health, lack of justice, and lack of non-maleficence are a few ways in which AI could be harmful for the healthcare population.

First of all, any time that computers and automatic systems are in place introduces the concern that there will be a breach in privacy. With any system comes the threat of hackers getting the information and leaking protected, personal information of patients. This is a large concern due to HIPAA. Patients expect privacy. They should never have to worry about a third-party system leaking their information. Not only are data leaks a concern, but if AI is constantly learning how to be smarter and better, it is assumed that it is using patient’s information to do so.

Next, if AI is to be implemented in healthcare, it must be assured that all patients will have equal equitable access to the services. There needs to be specific processes for all races and backgrounds in order to best serve patients. If this is solely based from one subset of the population, it will introduce new healthcare barriers. We also need to ensure that this is affordable and covered by insurance in order to make it accessible.

Lastly, we must ensure that no ethical principles are being broken during the implementation of AI into healthcare. Justice is needed to ensure fairness to all whom benefit from AI in healthcare. Non-maleficence is to do no harm.
This is concerning when dealing AI because medicine has been built on physicians who study endlessly in order to provide the best care for patients. While AI might factually think that a treatment is “best” for a patient, it is important to consider that no two patients are the same. We must take into consideration the entire mind, body, and spirit of a patient in order to best treat them. With all of this said, I believe that AI may have a significant future in medicine, but it is equally as important to make sure that it is done the right way.
youtube · AI Harm Incident · 2023-03-01T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzFvrpemUb6LoHPHz94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyt_-uaiBWtjryyPVJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAn6PuYTD3Sz-3SVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbAtfNW8iuUvysJ414AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx4a-LbU9fjQYOSTg94AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugzm-0fCfSW92DSaidh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGh-VIIU0dWGZjkZx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUxsxqFEopFFeKov94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxfWRar6dRwiAc1VJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwJ8m4vDASuo8VfP2t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
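The raw response is a JSON array of per-comment codes, each record carrying an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal Python sketch of the "look up by comment ID" step, using two records from the response above; `index_by_id` is a hypothetical helper for illustration, not part of the tool itself:

```python
import json

# Two records copied from the raw LLM response above (assumed schema:
# id + responsibility/reasoning/policy/emotion).
RAW_RESPONSE = """
[
 {"id":"ytc_UgzFvrpemUb6LoHPHz94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxfWRar6dRwiAc1VJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgxfWRar6dRwiAc1VJd4AaABAg"]["emotion"])  # fear
```

Indexing by ID turns lookup into a single dictionary access, which is what lets a viewer like this jump straight from a comment ID to its coding result.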