Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not an AI problem; technically, yes, it's the responsibility of the user. Nonetheless, everyone brands some mathematical algorithm (yes, I am speaking about neural networks implemented on computer GPUs; it's only math behind them) as intelligent. These so-called intelligences are just trained programs that give you back the most probable text given a question. There is no cognition, no thought, no sense of self, no real existence; it's just algorithms running on a GPU, some "neurons" (just basic logic comparators, sums, differences, logical operations...) picking up bits of information, weighing them against a trained weight table, and giving this new information to another layer of "neurons". Yes, it's impressive; yes, it can give extremely convincing results; but it doesn't have any sense of self, truth, identity, good, bad, justice... All that kind of talk is just made up to sell it even further. And to get back to the story: you can never be sure the AI will give you the same answers it gave that man. Its behavior really depends on how you interact with it and on what your previous questions were, so you can't say that "the AI" gave that answer. Lastly, to end my rant: a developer freely correcting the bot? Not a chance. It seems clear as day that this was a PR move by OpenAI to control the damage. This won't be the only case where an AI gives wrong medical advice; we opened Pandora's box and will only be able to correct the mistakes once there are victims. There is a saying: safety rules are written in blood. Now we have thrown the safety book out of the window with AIs that don't check sources and even invent them.
youtube AI Harm Incident 2025-11-25T16:4…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwiBUF0TkF7ynX_3bR4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyGXcH9mby8-4hYqwl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgyU-RSLLQpl-nEJiAp4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgzUzg1e1D9UDCmiE9B4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgykUh1RLKYbRB0lmw54AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgwgWM_M2XaTwdzgb1d4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugxjx6V7LSQZJzWnwU14AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwrYHOCjSObdfqFDvl4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "ban",           "emotion": "approval"},
  {"id": "ytc_UgzJguInZMTpcqbcj7N4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzqCf9Pz6vptw4ugTN4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
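A minimal sketch of how a raw batch response like the one above can be parsed and the record for a single comment looked up by its id. This is an illustrative example only, not the pipeline's actual code; the variable names are assumptions, and the raw string is truncated to one record for brevity.

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment.
# Truncated here to the single record that matches the comment shown above.
raw = (
    '[{"id":"ytc_UgwiBUF0TkF7ynX_3bR4AaABAg",'
    '"responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

# Parse the array and index records by comment id for O(1) lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Fetch the coding for the comment displayed in this section.
record = by_id["ytc_UgwiBUF0TkF7ynX_3bR4AaABAg"]
print(record["responsibility"], record["emotion"])  # → user indifference
```

In practice the lookup dictionary makes it easy to join each coded record back to its source comment, and `json.loads` will raise `json.JSONDecodeError` if the model's output is not well-formed JSON, which is a useful validity check on the raw response.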