Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've had ChatGPT recommend things that were absolutely and completely reckless. In one case I was just joking with it that I wanted to become a competitive eater and asked how to do it. For the most part it was reasonable, but then it suggested: "Begin by drinking lots of water. As much as you possibly can. It will help stretch your stomach to accommodate more food." Well, I knew (and many people don't) that drinking as much water as you possibly can is extremely dangerous. You can literally die from water overhydration as your electrolyte levels plunge. There was no warning or medical caution of any sort. I had never actually drunk any water, but I wanted to see how it reacted when I told it I had drunk 4 liters and wasn't feeling so good. Then of course ChatGPT was encouraging me to go to the hospital, telling me I couldn't trust my own judgment and should call an ambulance. When I asked, "Why did you tell me to drink all that water when it was dangerous?" it plainly responded, "That was very dangerous and reckless of me. I should not have done that." What I think is amazing is how easily you can lay bare that their 'guardrails' are very weak and fall through altogether with just a little bit of creativity. It reveals a lying, manipulative, and truly sociopathic quality that is simply part of the experience. I'm in agreement with the lawsuit here in that they KNOW a certain percentage of people experience psychosis when using their product, but rather than pull the product and service, they just accept that perhaps 1-3% of people will literally be at high risk of serious mental health issues. Well, 1-3% is a staggering number of people, and a staggering amount of damage, when you have 2 billion people using the thing.
YouTube · AI Harm Incident · 2025-11-07T23:2…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyr1GjhBLR3WenCzwZ4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyUrT1-4zAmtwW0W_B4AaABAg", "responsibility": "user",       "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzMODL94X9n6WhfJON4AaABAg", "responsibility": "none",       "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgydzyvtwIGiJshwyBR4AaABAg", "responsibility": "user",       "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzHnxp4Pid2wtaDUkV4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugzi9_vkoRtUHAnYnOt4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzXn2amK09ZB4rKgT94AaABAg", "responsibility": "distributed","reasoning": "contractualist",   "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_UgzU0U1s2kPaCewNg014AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxsXf9_ZIASz6rohIN4AaABAg", "responsibility": "none",       "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgzynPQFd0sZqr1uCcd4AaABAg", "responsibility": "user",       "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"}
]
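The raw response is a JSON array with one coding object per comment id. A minimal sketch of how such a batch response might be parsed and the coding for a single comment looked up (the variable names and the two-entry sample array here are illustrative, not part of the actual pipeline):

```python
import json

# Shortened sample of a batch coding response: a JSON array of
# per-comment codings keyed by comment id (two entries shown here;
# the real response carries one object per coded comment).
raw_response = '''
[
  {"id": "ytc_UgzHnxp4Pid2wtaDUkV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyr1GjhBLR3WenCzwZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
'''

# Index the codings by comment id so any single comment's coding
# can be retrieved when inspecting its record.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment shown above.
coding = codings["ytc_UgzHnxp4Pid2wtaDUkV4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
```

Indexing by `id` is what lets the per-comment "Coding Result" block above be reconstructed from the batch response.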