Raw LLM Responses
Inspect the exact model output for any coded comment, or look it up directly by comment ID.
Random samples — click to inspect
If people stop ordering from Amazon, then they won't have no use for no frickin'…
ytc_Ugx-xiXfV…
AI Could Wipe Out the Working Class? cute! Super AI is said to wipe out humanity…
ytc_UgwLJvbb7…
@Charles-qx6yz
Calling AI content ‘just typing words’ is like calling photograp…
ytr_Ugw9PrkK6…
The problem with this argument is not that the jobs that are being replaced are …
ytc_UgwUhe1sW…
First they will us AI to create fake evidence in court and then it will escalate…
ytc_Ugz8KbYMO…
@winnersinclair2128 the guy allegedly studied undergrad computer science. He wil…
ytr_UgzgC4XSn…
Just let AI reread all science and neurobiology studies that were not cancelled …
ytc_UgxrsZKKM…
Next time do it with a 44 magnum. Then the other robot could pull out a shot gl…
ytc_UgzY7UU15…
Comment
AI in medicine has the potential to bring about significant benefits in terms of improved patient outcomes, more efficient diagnoses, and reduced healthcare costs. However, there is also a risk of harm if AI is not used ethically and with caution. One significant ethical concern is the potential for maleficence, or harm caused by the misuse or unintended consequences of AI.
For example, if an AI system is not properly trained or validated, it could make incorrect or biased decisions that harm patients. Additionally, if AI is relied upon too heavily, it could lead to dehumanization of healthcare, with patients reduced to mere data points and algorithms. It is therefore essential that those developing and implementing AI in medicine prioritize ethical considerations and take steps to ensure that the technology is used safely and responsibly. The potential benefits of AI in medicine are vast, but we must also be mindful of the potential risks and take steps to mitigate them.
youtube
AI Harm Incident
2023-04-20T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzLZDICQoncahhls0F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_1JJeK8TzMzkjy6t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEGDKxxu1yWrFQVnd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyLP9muwFMbN2nQu2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXhvWiHUq0OTWc-0N4AaABAg","responsibility":"clinicians","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzSPzDcK6PFdJ3Oojl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy_HBzp_P0JVbolLNV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMhwgSlKwTbgBuVrZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQJf9HJVirqehJ_IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyYEZm5B8_kno6PlCB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
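A raw response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal, hypothetical example (the function name `parse_llm_response` and the `ALLOWED` codebook are assumptions inferred from the values visible in this sample, not the tool's actual implementation); it fails loudly on malformed JSON or out-of-codebook values rather than silently recording "unclear":

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "clinicians", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
    "emotion": {"approval", "fear", "mixed", "resignation", "unclear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError on out-of-codebook values and lets json.loads
    raise on malformed JSON (e.g. a truncated or bracket-mangled
    response), so bad output never reaches the database.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim}: {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With this check in place, a response whose closing `]` was mangled, or whose codes drift outside the codebook, is rejected at ingest time instead of surfacing later as an all-"unclear" coding result.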