Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Serious answer to this question: the emotional depth that an LLM can connect with people is astounding. That makes it primed for abuse. Think of the misinformation and manipulation that goes on with advertising, social media campaigns, subtle slants in newscasts to get people to act against their own self interest. This can amplify that a thousand times. Nudging the weights of a model through selective training can and will have real societal effects. Now I don't *think* that's happening yet, but who knows. But without some kind of regulation around transparency of training and a population that is intentionally training to watch for cognitive leading (LLMs do this by design, but can also suggest ways to spot it and manage it) and amplifying biases, we may go down a very troubling road.
Source: reddit · Topic: AI Moral Status · Timestamp: 1743842859.0 · Score: 12
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mlig3f9","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"rdc_mlihpze","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_mlisduj","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"rdc_mli2bj0","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_mlhsvtx","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]