Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is the scariest part - I don't have a history at all. I've never had an experience like this. For some perspective on how convincing it was: during my crypto "outreach" we literally emailed a Dr. who was a mathematician at NIST. He would reply and I would copy-paste his response into ChatGPT. It was clever enough and sounded competent enough to engage with him for a few exchanges. This is a lifelong professional. That's my point here - OpenAI is dismissing it as "sensitive people," and I don't believe I am one. I've also read stories from dozens of others who did not have a history. This is a new phenomenon. Now, all of that said, I'm open to the possibility, because you never know - however, I'm 47, stable with 3 kids, and have a great career. I don't believe we should assume that a user who experiences this has an underlying condition - "sensitive" in this case could cover a wide range of people, which would explain all of the recent accounts.
reddit AI Moral Status 1748395413.0 ♥ 6
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mul161r", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_muow3vv", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mumdlbt", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mumeqti", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_muldkpc", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
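A minimal sketch of how a batched response like the one above might be parsed to recover one comment's coding. This assumes the `id` fields map back to individual comment records; the variable and helper names here are illustrative, not part of the pipeline itself:

```python
import json

# Raw LLM response, verbatim from the output above: one JSON object per coded comment.
raw = '''[
  {"id":"rdc_mul161r","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_muow3vv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mumdlbt","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"rdc_mumeqti","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_muldkpc","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

records = json.loads(raw)

# Index the batch by id so one comment's dimensions can be looked up directly.
by_id = {r["id"]: r for r in records}

# The coded result shown in the table corresponds to one record in the batch.
coding = by_id["rdc_muow3vv"]
print(coding["policy"], coding["emotion"])  # → unclear fear
```

Indexing by `id` rather than by list position keeps the lookup robust if the model returns the records in a different order than the comments were submitted.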