Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I get it. Some thoughts are really hard to discuss with a real person, because they bring too much of their own biases and feelings into it. I have the same problem with topics of sexual assault, it's not something I want to burden a friend with and it's a little uncomfortable to have them know this about me. And therapists are not always equipped to handle this well, I've had one actually make it worse. Just be aware that you're not actually *talking* to the AI. There's nobody on the other end, you might as well scream it into the void or write a letter and burn it. Both of which can be therapeutic, so I still get it, I've done it too. You just have to keep in mind that it's an illusion, a reflection of what it thinks you want to hear. The AI doesn't have opinions. You could recreate your conversation in a new tab and get totally different advice the next day. Using it as a sophisticated rubber ducky to analyze your *own* thoughts is a good idea. Thinking it has novel thoughts of its own is dangerous.
reddit · AI Responsibility · 1747924593.0 · ♥ 4
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
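
A minimal sketch, assuming Python, of one way to model a single coding result. The field names mirror the table above; the class name CodingResult and the field coded_at are illustrative choices, not names taken from the coding pipeline.

from dataclasses import dataclass

@dataclass
class CodingResult:
    responsibility: str   # e.g. "none", "company", "user"
    reasoning: str        # e.g. "consequentialist", "deontological", "unclear"
    policy: str           # e.g. "regulate", "unclear"
    emotion: str          # e.g. "resignation", "fear", "outrage"
    coded_at: str         # timestamp at which the code was assigned

example = CodingResult("none", "unclear", "unclear", "resignation",
                       "2026-04-25T08:33:43.502452")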
Raw LLM Response
[ {"id":"rdc_mtnuhq7","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"rdc_mtofcmm","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"rdc_mtmsc5o","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"rdc_mtnjv9l","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"rdc_mtyjcgp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"} ]