Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A friend is mentally ill but thinks she's physically ill. ChatGPT made her believe that she had a life-threatening allergic reaction to her bed frame, without any actual allergy symptoms, and that doctors aren't well enough trained to see it. In the end she ate only rice and bottled water for days, until I made her type up a message of her symptoms and put it into a new AI chat (DeepSeek), which said it was most likely anxiety. Her ChatGPT knew that she wants to be physically ill, the worse the better, even without her specifically saying so, and made her go deeper and deeper into her delusion. We actually aren't friends anymore because she will send you paragraphs of ChatGPT output, blindly "proving" everything
Source: reddit · Topic: AI Governance · Timestamp: 1762505618.0 · ♥ 175
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_nnjqdta","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"rdc_nnjegc6","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"}, {"id":"rdc_nnjf1rq","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}, {"id":"rdc_nnjg9w7","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"rdc_nnkff79","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]