Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So the title sounds anti-AI again, even though HE was the problem. The title should have been: "A man asked AI for health advice, got a good answer, but was too careless to read it; this is what happened to his brain." As I mentioned in a comment below, Gemini (ChatGPT failed in many cases) gave me so much knowledge about my own health issues that I could prevent some and fix others. The difference is who is behind it: the user. I am very interested in biology and want to understand logically why things happen in my body and how to counter them. Some other people are "dumb" and don't want to become smarter, so they never ask the chat for explanations. Here in Germany, where I live, the doctors are terrible. A simple example: five doctors laughed at my medical condition as if I were "crazy" and it were "impossible" (even though the evidence was there), while through Gemini I found out it was all caused by psychological issues; without my even telling the AI what was written in my psychologist's evaluation, the chat quickly figured it out on its own. When you understand yourself and know where things come from, you know how to prevent them or what to do. Doctors here can't do that; they often don't even know, unless they are specialists you can barely find here. Another very simple example: you go to six doctors, plus endocrinologists, gastroenterologists, allergists, and so on. I went through all of them. The most they would say is, "If your tongue itches, you might have an allergy." No doctor ever explained to me that the horrendous flushes and migraines after eating come from histamine, why your face flushes, or what histamine specifically does and why. All of that Gemini can give you in five minutes: complete explanations of what happens in the body, down to the core. You would never hear it from a doctor, because they don't have the time and they are not there to explain things to you. And through that you also get advice on what to do.
For psychological and medical questions and decisions, AI chats are simply a lifesaver (aside from practical things: my office chair broke, Googling gave no results, so I just took a photo, sent it to Gemini, had a solution in a few seconds, tried it, and it was fixed). However, if you are dumb and can't use your brain, that's your fault. As mentioned in the video here, AIs are not perfect; sometimes you have to correct them when you realize what they say is simply wrong. But most anti-AI people don't have a clue, because they refuse to even try AI chats, "just because." They are the same people who sit at a computer or a phone, just as the older folks back then refused to use that "bad technology." They are missing out.
YouTube AI Harm Incident 2025-11-24T22:5…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwMJO_yJEV8kuKfeG54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1zFY7cWJKdBQHv_R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfUzULgC0NZJsn4cF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzw4YsAxyj1U9PIF2h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzNr-hTftwY--sLoAJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVqZkln68KxcScVPd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwnan3b4112guh5i0d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzm2UiYnpX3zjyYYvV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyFEAPcd0YeKGlBmYh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxxzqJ4NFpO1JtFDxh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
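The raw response is a JSON array of per-comment records, one per coded comment in the batch. A minimal sketch of how such a response can be parsed and indexed by comment id (the two records below are copied from the response above; picking out `ytc_Ugzw4YsAxyj1U9PIF2h4AaABAg` as the record matching the displayed coding result is an inference from its matching dimension values, not stated in the log):

```python
import json

# Two records copied verbatim from the raw LLM response above (truncated for brevity).
raw = """[
  {"id":"ytc_Ugzw4YsAxyj1U9PIF2h4AaABAg","responsibility":"user",
   "reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxVqZkln68KxcScVPd4AaABAg","responsibility":"ai_itself",
   "reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

# Parse the batch and index the records by comment id for lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the record whose dimensions match the coding result shown above.
coded = by_id["ytc_Ugzw4YsAxyj1U9PIF2h4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # user approval
```

Indexing by `id` rather than list position keeps the lookup robust if the model returns the records in a different order than the comments were submitted.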