Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The problem is often that the data set is biased. Racist people discriminated ag…" (ytr_UgwInz7Pk…)
- "We gotta take him seriously, AI research is his neck of the woods, some necks le…" (ytc_UgzJ1C5d-…)
- "Agree, I've been doing it since my first interactions, I realized AI is a reflec…" (ytc_UgwwKaOlL…)
- "Designers will take a back seat and use AI to create the imagery for them. The o…" (ytc_UgxPiAr9I…)
- "A few hours ago ukraine officially deployed it's first fully autonomous attack d…" (ytr_Ugz1b6hJ0…)
- "Right but Ezra couldn't respond to his points either. Just more hoping that the …" (ytr_Ugzt0860S…)
- "Lifelike/realistic porn fakes of celebs have been thing since photoshop and prob…" (rdc_kjkb4tj)
- "It isn't full A.I Art. Angel Engine has a script written by an actual person but…" (ytc_UgwkncYnl…)
Comment
This is very interesting and I am excited to see what this has to offer. There are so many pros to this type of technology and, as was mentioned, there are a lot of cons as well. It is so important that this stays highly regulated. One of the biggest issues that I could see arising out of this situation is the fact that AI technology is so new. New problems are found within the technology all the time, and we are discovering new things about it every single day. The reason this is an issue is the consequences: when ChatGPT makes a mistake, it generally does not mean the life or death of a human being, whereas with this technology things can turn bad very quickly. I feel as though more time needs to be spent in the world of AI before we jump to using it in a real-life setting. As an example, I think it would be good to use this alongside a doctor for a minimum of five years: see exactly what the doctor recommends and then compare that to what the artificial intelligence was recommending. The success rate needs to be almost perfect in these types of scenarios.

Another issue that I see is liability. If the artificial intelligence recommends certain treatments or diagnoses a patient, who is going to be liable when things go south? Is it going to be the doctor in charge, because he should have known better than what the AI was saying, or is it going to be the company that created the AI? Both would have strong arguments as to why it should be the other, and I feel as though this could leave the patient in a position where they cannot receive compensation or seek justice as needed.

Lastly, artificial intelligence is created by a company, and for-profit companies are created to do just that: make a profit. If companies are competing to have their artificial intelligence working in certain hospitals, who is to say there will not be shortcuts taken, or poor leadership that leads to disasters within the company and, in turn, within the healthcare system? I feel as though a lot of the points I brought up are very critical to think through before this type of technology becomes the norm. I'm sure this has been discussed many times with others, but for the future of healthcare I do hope that it is in the right hands. While a lot of what I said was geared towards the negative, I really do hope we can see this technology working flawlessly in the future, as I think it has great potential to do amazing things.
youtube · AI Harm Incident · 2023-04-24T01:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
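The four dimensions in this table recur for every coded comment, so they can be sketched as a small record type. This is a minimal, hypothetical sketch rather than the tool's actual schema: the `Coding` class name is invented here, and the value sets listed are only those that appear in the raw response below plus the "unclear" fallback shown above; the project's full code book may define more categories.

```python
from dataclasses import dataclass

# Value sets observed in the raw response below; the real code book may
# define additional categories (assumption, not confirmed by this page).
RESPONSIBILITY = {"none", "developer", "clinicians", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "regulate", "industry_self", "unclear"}
EMOTION = {"approval", "fear", "mixed", "resignation", "unclear"}

@dataclass
class Coding:
    """One coded comment across the four dimensions in the table above."""
    comment_id: str
    responsibility: str = "unclear"
    reasoning: str = "unclear"
    policy: str = "unclear"
    emotion: str = "unclear"

    def uses_known_vocabulary(self) -> bool:
        """Check every dimension against the observed value sets."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```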
Raw LLM Response
[{"id":"ytc_UgzLZDICQoncahhls0F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_1JJeK8TzMzkjy6t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEGDKxxu1yWrFQVnd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyLP9muwFMbN2nQu2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXhvWiHUq0OTWc-0N4AaABAg","responsibility":"clinicians","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzSPzDcK6PFdJ3Oojl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy_HBzp_P0JVbolLNV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMhwgSlKwTbgBuVrZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQJf9HJVirqehJ_IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyYEZm5B8_kno6PlCB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})