Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a huge AI hater but you basically tell large language models what to say. It's not some sort of insidious machine, it basically gives out what you put in and the guard rails simply aren't advanced enough for it to *not* do that. Get a LLM with no guard rails, hosted locally, and you could get it to approve of any horrible crime and even recommend strategies for doing it - no matter how bad. It's not that it's not a problem, it clearly is, and people relying on chatbots are only contributing to a more lonely world where people are bouncing their thoughts off a reflector dish that just echoes back to them. I think people need to actually have a better understanding of what these bots are doing rather than speaking to them like a confidant or friend.
Source: reddit · AI Governance · 1762523150.0 · ♥ 4
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nnjkqvp", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nnle4wi", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "rdc_nnlfe57", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_nnjmbz5", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_nnksstz", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
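The Coding Result shown above can be recovered from this raw batch response by parsing the JSON array and selecting the entry whose id matches the comment being inspected. A minimal sketch, assuming the response is valid JSON and that "rdc_nnlfe57" is the id of the comment on this page (the variable names here are illustrative, not part of the tool):

```python
import json

# Raw LLM batch response, copied verbatim from the record above.
raw = '''[
  {"id": "rdc_nnjkqvp", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nnle4wi", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "rdc_nnlfe57", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_nnjmbz5", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_nnksstz", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]'''

records = json.loads(raw)

# Look up the coding for this page's comment by its id.
coded = next(r for r in records if r["id"] == "rdc_nnlfe57")

print(coded["responsibility"])  # user
print(coded["reasoning"])       # consequentialist
print(coded["emotion"])         # resignation
```

Because the model returns codings for a whole batch of comments at once, a per-id lookup like this is what links each displayed Coding Result back to its line in the raw response.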