Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's a nibble on the black pill. The fact that they largely give answers in line with institutional sources and guidance is because their makers have chosen to train them on that material, or weight that material more heavily. The ultimate, hidden truth of the AI is that it is something that a rich tech bro makes, and could just as easily make differently. While Elon's attempt to get Grok to suddenly start spreading white genocide propaganda far and wide was a laughable failure, there's nothing to say that AI won't be (or isn't already being) used successfully and more subtly by other tech bros to push other harmful misinfo or attitudes. I also think when looking for "advice," hallucinations or just lack of context could be pretty dangerous as well. Some kid asks "should I ask her on a date"... that's not something an AI can or should answer. There's too much context. It could be telling some shy kid no when they really just need the confidence boost they'd have gotten by asking a friend. It could tell some stalker yes when a therapist would have known them well enough to know that's the wrong answer.
reddit AI Responsibility 1747933717.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mtmdygt", "responsibility": "user",        "reasoning": "deontological",   "policy": "none",    "emotion": "approval"},
  {"id": "rdc_mtn1ww7", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_mujb784", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mtnizeh", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "rdc_mtopz20", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear", "emotion": "fear"}
]
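As a rough sketch, a raw response like the one above can be parsed with the standard-library `json` module and a single record pulled out by its `id`; the field names (`id`, `responsibility`, `emotion`, etc.) come from the response itself, and the helper function here is hypothetical, not part of any coding pipeline shown on this page:

```python
import json

# Two records excerpted from the raw LLM response above, for brevity.
raw = '''[
  {"id":"rdc_mtmdygt","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"rdc_mtopz20","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]'''

records = json.loads(raw)

def find_coding(records, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

coding = find_coding(records, "rdc_mtopz20")
print(coding["responsibility"], coding["emotion"])  # developer fear
```

Matching on `id` rather than position keeps the lookup stable even if the model returns the records in a different order.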