Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So . . . personal story time. I have used AI both successfully and unsuccessfully to deal with medical issues. Maybe "deal with" isn't the right phrase; a better way to describe what I've done would be "get a diagnosis."

Let me explain the unsuccessful first. I had the symptoms of either a UTI or an STI for years, but didn't have either one. My doctor couldn't figure it out because every lab test came back clear. And then the plandemic made going to the doctor a nightmare, so I just lived with my symptoms for a time. After the world stopped losing its mind I went back to my doctor because things had continued to worsen. Running all the same tests again revealed nothing. We even ran extremely rare urinary parasite tests because I had visited rural North Africa 12 years before. Nothing, absolutely nothing, appeared.

While waiting for the referral to urology to go through, I turned to AI to see if I could figure out what was wrong. And basically, because we had seemingly ruled everything else out, it was telling me I likely had cancer. Now, this didn't exactly fit my symptoms, but all the more precisely fitting diagnoses had been ruled out already, so I was willing to consider things that were at least in the ballpark of what I was experiencing. It didn't help that my mom actually was diagnosed with pancreatic cancer around that same time!

Then I started having severe pain under my right rib, which ultimately landed me in the ER (never been before!). After many, many tests, they found . . . nothing. Not my intestines, liver, lung, appendix, gall bladder, etc. Everything was perfect. Again, I turned to AI to see what was wrong, and the only answers I got were . . . incoherent. Nothing fit. But the pain was real, and the UTI that wasn't a UTI kept getting worse. I finally started having incontinence. I can tell you, peeing your pants as a grown man in your 30s is more humiliating than you can even imagine.
All my searching on the Internet got me no closer to discovering what was wrong. I got no good answers until I saw the urologist. It took literally a split-second physical exam for him to diagnose me. I had/have an extremely rare combination of scar tissues caused by an autoimmune disease. Less than 0.01% of men in my specific demographic have my diagnosis on a cursory level, and on a detailed level it's largely unheard of. Massive amounts of scar tissue had formed in my urinary tract (my body attacking itself), blocking the flow of urine and producing huge amounts of back pressure and pain, including the pain under my rib, which was from a pinched nerve in my pelvis. The urologist said if I were 30 years older I would have already been in renal failure, and that I would be there within a few years if I didn't have surgery.

So, to wrap up: got surgery, might have to have another. But things are overall WAY better. And it's great to know that I was the reason my wife was having trouble conceiving. Which, first try post-op and she's preggo! Amazing!!

So what's the point? AI helped 0% in figuring out what was wrong. In all of my plain old googling and AI searching there was not a single result that even hinted at the actual problem. AI was totally unsuccessful.

What about my success story? Well, let me keep it short. I have had a suspicion that the amount of cardiac arrhythmia and tachycardia I experience was abnormal. After using AI to search for likely causes of those symptoms, it suggested POTS as the likely diagnosis. I took this data to my doctor, and he is pretty sure that I do indeed have POTS. We are in the process of ruling out cardiac issues (remains to be seen) and I don't have a diagnosis, BUT using AI made me sure that I should talk to my doctor about it. It made me sure that I wasn't being a hypochondriac and that it was worth getting checked out. My doctor took it very seriously.
But without getting the results I did from ChatGPT, I don't know that I would have brought it up. If you use AI appropriately, it is a wonderful diagnostic tool. But using it appropriately entails seeking the help of a human being who actually has the knowledge and wisdom to really help you. It can help you decide what questions to ask, or encourage you that maybe something is wrong and that you're not crazy. I was told by our family doctor as a teen that my heart is fine, which maybe it is. But even as I've had significant cardiac arrhythmia, extreme lightheadedness, and tachycardia, because I was told "your heart is fine," that's what I've told myself. Having an AI chatbot to bounce symptoms off of and get statistical data, condensed clinical info, etc. can help you take the next step and talk to your doctor with more confidence. All in all, use it! Just don't think you can figure it out alone; use it to help you get help from other people. And preferably not one of those "it's the seed oils!" people online 🤣 Anyway, maybe the above will help somebody.
youtube AI Harm Incident 2025-11-27T06:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwmU2vHgr9ySgFRCXJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzeV3_mW7I-JaAc2rp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxvsPtHmVKPf1iouZh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfSB4K1blMUWCkkUt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzHujz0szacMPjScL14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzYBlahzBf18ZhOBdp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzwroNH5XiXLxzzJ714AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy8ZQ45SKrLzu8aMAJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzGgvh_ROFM5bvWUNt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyUQxQajXI24-vbk1J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
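A raw response in this shape can be checked and aggregated with a short script. This is a minimal sketch, not part of the original pipeline: it assumes the response is well-formed JSON with exactly the four dimension fields shown above, and the ids here are shortened placeholders, not the real comment ids.

```python
import json
from collections import Counter

# Hypothetical sketch: parse a raw LLM coding response like the one above
# and tally the value distribution for each coding dimension.
# The field names mirror the JSON shown; the ids are placeholders.
raw = '''[
  {"id": "ytc_a", "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_b", "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

codes = json.loads(raw)
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Fail fast if an entry is missing a dimension field.
for entry in codes:
    missing = [d for d in DIMENSIONS if d not in entry]
    if missing:
        raise ValueError(f"{entry.get('id', '?')} missing {missing}")

# Count how often each value appears per dimension.
tallies = {dim: Counter(entry[dim] for entry in codes) for dim in DIMENSIONS}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

A tally like this makes it easy to spot when a batch is dominated by `unclear` codes, which is what the "Coding Result" row above reflects for this comment.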