Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "To be fair, I think ChatGPT only added the heads up about it not necessarily bei…" (ytc_UgwgvzfXs…)
- "low-key the ai deep fakes are the implosion of all this, eventually it will be i…" (ytc_UgztVCRJc…)
- "Companies could save more money by replacing most of the managers in the middle,…" (ytc_UgwIK6Kfh…)
- "I would prefer AI. Customers complain they can't talk to a human being on the p…" (ytc_UgyI10_4I…)
- "It will lead to huge deskilling of people and brain rot. Intelligence of the gen…" (ytc_UgylCm-tT…)
- "To some degree at least, AI may be predicated toward simple economics...if robot…" (ytc_UgwOsiQcA…)
- "Doesn’t look real enough for me. Her job isn’t open. I don’t how much she doesn’…" (ytc_UgyMJTLwP…)
- "The attempt to \"align\" AI is what makes it occupy the same niche as us. Profit i…" (ytc_UgzrwXweY…)
Comment
As we shift toward a “personalized medicine” the use of AI in healthcare is inevitable. I really appreciate Dr. Navis’s comments on data bias and as a medical student I wanted to know more. I am already aware of the biases found in current medicine but had not even considered the idea that our basic medical algorithms were bias. There is a great article written by Katherine J. Igoe explaining the biases seen in medical algorithms. In this article she explains that currently our genetic and genomic data is represented by 80% of Caucasians, and thus makes our understanding of genetics geared towards Caucasians. Obviously, we cannot just ignore race when conducting genetic information and in her article, she suggests the best solution for combating the inevitable use of AI is having a diverse group of professionals and not strictly a team of data scientist. This includes have a diverse professional team consisting of physicians, data scientist, government, philosophers, and everyday civilians.
youtube · AI Harm Incident · 2023-04-19T17:4… · 1 like
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzLZDICQoncahhls0F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_1JJeK8TzMzkjy6t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEGDKxxu1yWrFQVnd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyLP9muwFMbN2nQu2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXhvWiHUq0OTWc-0N4AaABAg","responsibility":"clinicians","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzSPzDcK6PFdJ3Oojl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy_HBzp_P0JVbolLNV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMhwgSlKwTbgBuVrZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQJf9HJVirqehJ_IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyYEZm5B8_kno6PlCB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
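The raw response above is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a batch response might be parsed and indexed for lookup (the field names follow the sample above; the helper name `index_codings` and the skip-on-malformed policy are assumptions, not part of the original tool):

```python
import json

# Example batch response in the shape shown above (truncated to two records).
raw_response = """[
 {"id": "ytc_UgzLZDICQoncahhls0F4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
 {"id": "ytc_Ugz_1JJeK8TzMzkjy6t4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(payload: str) -> dict:
    """Parse a batch coding response and index it by comment ID.

    Records missing an ID or any coding dimension are skipped, so a
    truncated or malformed model response does not crash the pipeline.
    """
    try:
        records = json.loads(payload)
    except json.JSONDecodeError:
        return {}
    out = {}
    for rec in records:
        if isinstance(rec, dict) and "id" in rec and all(d in rec for d in DIMENSIONS):
            out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codings = index_codings(raw_response)
print(codings["ytc_Ugz_1JJeK8TzMzkjy6t4AaABAg"]["emotion"])  # fear
```

Guarding the `json.loads` call matters here because the raw output ends with a stray `)` in place of `]` in some responses, which would otherwise raise an unhandled exception during lookup.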