Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I saw a clearly abusive, narcissistic person (as evidenced by her hundreds of insane posts targeting her ex) who plugged in chats between her ex and her into ChatGPT. Her prompt was asking it to point out red flags for narcissism by her ex, which it “did”. It cherry picked phrases or chose them at her specific direction and told her what she wanted to hear. ChatGPT doesn’t have or know how to account for the context of real life, relationships, etc. She posted the chats as “evidence” that her ex was awful.  FWIW, I was an ethicist and worked on AI projects. So many of AI researchers, developers, and users lack or are averse to risk identification and mitigation strategies that could make it safer because they’re personally excited about it.
reddit · AI Moral Status · 1743849931.0 · ♥ 7
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mlig3f9", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mlihpze", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_mlisduj", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_mli2bj0", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mlhsvtx", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
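The raw response is a JSON array of per-comment codings, one object per comment id. A minimal sketch of how such a batch response can be parsed and the record for one comment pulled out (the JSON string is copied verbatim from the response above; the id "rdc_mlisduj" is the record whose dimensions match the Coding Result table, and the variable names are illustrative):

```python
import json

# Raw LLM response, verbatim from the batch above.
raw = ('[{"id":"rdc_mlig3f9","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"},'
       ' {"id":"rdc_mlihpze","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
       ' {"id":"rdc_mlisduj","responsibility":"user","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"},'
       ' {"id":"rdc_mli2bj0","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"},'
       ' {"id":"rdc_mlhsvtx","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')

# Parse the batch and index it by comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the coding for the comment shown in this section.
coding = by_id["rdc_mlisduj"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → user deontological liability outrage
```

Indexing by id rather than relying on array position guards against the model returning records in a different order than the prompts were sent.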