Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Completely understandable concerns. This is why HCPs and the public at large need to be aware of what's happening. There are potential solutions already being discussed, but a wider conversation is vital to determine the best course of action. With that said, a few potential options are:

- Contractual agreements between AI developers, healthcare institutions, and medical professionals. The terms should clearly delineate the attribution of responsibility and potential liability among the various parties involved.
- Malpractice policies for healthcare providers and institutions have to evolve to specifically cover potential issues arising from the use of AI. How they evolve is an open question, but similar evolutions in insurance policies will be happening in all other sectors and industries, not just medicine.
- Triage. AI sorts incoming exams into various levels of uncertainty and risk to the patient. The radiologist then exclusively reviews the highest-risk exams. The AI will have given a preliminary diagnosis, which is then confirmed or denied by the physician.
- Multiple AI agents. There's no need to rely on a single AI to do the checks. Since AI is digital, thousands (if not millions) of radiology AIs can check one another's work, each analyzing the images. If 100,000 AI agents that are better trained than any human come to the same conclusion, the 1% uncertainty and liability can be offloaded to the institution (or whoever is contractually liable) deploying the agents.

A combination of some (or all) of these options is most likely. There are 50,000 licensed radiologists in the United States, and at best each would see a change in their job description. This is obviously a massive undertaking, but everything I've described above is already in development or being discussed at the highest levels. It's only a matter of time before it becomes reality.
youtube AI Jobs 2024-03-18T14:2… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       contractualist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgwQPLKagYVoUZMGF6x4AaABAg.ASSQMehw6GfASS_13uQoUr","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugx0yZ1hUB-w6dPHSdV4AaABAg.ASSBl0c8DL7ASS_ywDx-yl","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugyud_TmxkkoRiE7P5V4AaABAg.ASNh3yFL9xRAU-nnR-icvw","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugyud_TmxkkoRiE7P5V4AaABAg.ASNh3yFL9xRAU314BDHeFi","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytr_UgzzUFTCLn4vCR4gkOF4AaABAg.A17bWao5hj-A17fq1zescO","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugi75uTYKgdOH3gCoAEC.8TH0VipILB68THtQZOgr8U","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugi75uTYKgdOH3gCoAEC.8TH0VipILB68TJlpqrzGnK","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UggzmkDvVJTj0HgCoAEC.8TGQu8c_PR88Tdyc6jT_9Z","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytr_Ugh0i9klZmEl_XgCoAEC.8TEiu8vHA-k8TnNqEirVHh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugh7duXxNbNLWHgCoAEC.8TEXnaK968E8TEbRV9wA7D","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
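Since the raw LLM response is a JSON array of per-comment coding objects, inspecting the coding for one comment amounts to parsing the array and matching on the comment id. A minimal sketch of that lookup is below; the `coding_for` helper and the truncated sample id are illustrative assumptions, not part of the actual pipeline, though the record structure and field values are taken from the dump above.

```python
import json

# Abbreviated sample of the raw LLM response shown above: a JSON array
# of coding objects, one per comment (here just the "distributed" record).
raw = '''[
  {"id": "ytr_UgzzUFTCLn4vCR4gkOF4AaABAg.A17bWao5hj-A17fq1zescO",
   "responsibility": "distributed", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "fear"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if absent.

    Hypothetical helper for inspection; matches on the "id" field
    of each object in the parsed array.
    """
    for record in json.loads(raw_response):
        if record["id"] == comment_id:
            return record
    return None

match = coding_for(raw, "ytr_UgzzUFTCLn4vCR4gkOF4AaABAg.A17bWao5hj-A17fq1zescO")
print(match["responsibility"], match["emotion"])  # distributed fear
```

The matched record carries exactly the four coded dimensions shown in the result table (responsibility, reasoning, policy, emotion); the "Coded at" timestamp is pipeline metadata recorded separately, not part of the model's JSON output.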