Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
ytc_Ugz0s_5F0…: "I find it so interesting that we constantly talk about America and China as bein…"
ytc_UgzjG3YkX…: "16 min of bs from a tech bro assuming all jobs are office/computer interaction b…"
ytr_UgzfoGI9W…: "There are a lot longer stretches in the US and less developed areas. We are supp…"
ytr_UgwfztFLj…: "@rogercarlson6300 Because AI art is human expression. It's a collaboration betwe…"
ytc_Ugwo3vurM…: "You should definitely make a video on assignments because there is lot confusion…"
ytc_UgwfDe1Ms…: "43:25 How this clown thinks he can be 100% certain the next decades of AI models…"
ytc_UgycoibmU…: "Ai helped me and told me how to take revenge from bullies. It told me about dif…"
ytc_Ugz5VObAC…: "Bruh. They are marking the meta data that the videos, and images are ai generate…"
Comment
Navid discussed how Artificial Intelligence (AI) can improve healthcare, and I completely agree. He pointed out the benefits of AI in healthcare, such as the ability to collect more data more quickly, the absence of unconscious bias, and more accurate diagnoses, all of which would help save lives. I assumed data would be entered into software from all over the world and be accessible within seconds. That is faster and more thorough than any physician or laboratory communication I've ever witnessed for diagnosing patients. One broad example would be COVID: could it have been isolated sooner if we had known the severity and consequences of the illness when it first began? The downside of this information being at everyone's fingertips is when a treatment is available in one country but not another. I imagine that would create an emotional challenge for the physician and patient if there is a known successful treatment yet no way to access it; however, I feel that could eventually be resolved. Another concern would be the many exceptions we see in medicine. The AI may only output the most common symptoms or treatments, which would cover most cases, but the few "abnormal" cases may not benefit from an AI program. It would be nearly perfect if the AI could give an "I don't know" answer, as Navid suggested.
Unlike human physicians, AI will never be able to form unconscious biases. For each patient case, the details of the illness or disease are input, the AI compares the individual's information to a large data set, and that's it! It cannot take into consideration the "type" of patient, such as a helpless patient or a difficult patient. It cannot see a patient's past experiences, such as substance abuse. Both, I think, can influence a human physician's opinions and potentially the patient's treatment. That being said, sometimes knowing the patient's personality does positively influence the type of treatment that would be best, because a compliant patient is better than a patient who refuses treatment.
Our world is growing and becoming more populated at a fast rate, and no human can keep up. Still, there is no way AI can take over completely; humans have higher-order thoughts and emotions that AI is far from possessing. As long as the AI can continuously gather data from around the world, adapt to the collected data, and there is still human oversight, I think AI would be a huge benefit to medicine.
youtube · AI Harm Incident · 2023-03-02T16:3… · ♥ 23
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzFvrpemUb6LoHPHz94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyt_-uaiBWtjryyPVJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAn6PuYTD3Sz-3SVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbAtfNW8iuUvysJ414AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx4a-LbU9fjQYOSTg94AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugzm-0fCfSW92DSaidh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGh-VIIU0dWGZjkZx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUxsxqFEopFFeKov94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxfWRar6dRwiAc1VJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwJ8m4vDASuo8VfP2t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
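A raw batch response like the one above is machine-readable JSON, so it can be validated before the per-comment codes are stored. A minimal sketch in Python, assuming a hypothetical `parse_batch` helper; the allowed value sets are inferred only from the responses visible here, and the actual codebook may define more categories:

```python
import json

# Allowed values per coding dimension -- inferred from the responses shown
# above (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "mixed", "fear", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}.

    Rows with a missing id or with a value outside the allowed set for
    any dimension are dropped rather than stored.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Example using the first row of the response above.
raw = ('[{"id":"ytc_UgzFvrpemUb6LoHPHz94AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
print(parse_batch(raw))
```

Validating against fixed value sets catches the common failure mode where the model invents an off-schema label (e.g. `"emotion":"anger"` when the codebook has no such value); dropping the row keeps the coded dataset clean, and the dropped IDs can be re-queued for recoding.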