Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>You need to have something to filter out automated bots. As a maker of bots, this is not possible on a production scale. I have no reason to alter the discussions here with brute force, but it wouldn't be anything you could stop if I wanted to do it bad enough. The solution is to do away with OP's idea of "most important ideas are voted to the top" because popularity *does not equal* "most important issue." Asking a large group of people to *only* upvote what's important isn't reasonable on many levels, one of which being there's no degrees of importance expressed. There needs to be a way for people to order their interests - it will take them out of the "up/down vote" mindset and force them to prioritize.
reddit · AI Governance · 1320021351.0 · ♥ 44
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_c2vpel0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_c2vq6ty","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_c2vp3pg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"rdc_c2vnx8l","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"rdc_k8tzzwi","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"disapproval"})
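Note that every dimension in the Coding Result falls back to "unclear" even though the raw response contains concrete codes. One plausible cause is that the response closes the JSON array with `)` instead of `]`, so strict parsing fails and the pipeline discards the batch. A minimal sketch of a tolerant parser — the function name and the repair heuristic are hypothetical illustrations, not the pipeline's actual code:

```python
import json

def parse_coding_response(raw: str) -> list:
    """Parse one raw coding response; return [] when it cannot be recovered."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Heuristic repair (assumption): the model sometimes closes the
        # array with ')' instead of ']', as in the raw response above.
        repaired = raw.rstrip()
        if repaired.endswith(")"):
            try:
                return json.loads(repaired[:-1] + "]")
            except json.JSONDecodeError:
                pass
        return []  # unrecoverable -> caller codes every dimension "unclear"

# Truncated example reproducing the defect: ')' where ']' belongs.
sample = ('[{"id":"rdc_c2vpel0","emotion":"approval"},'
          ' {"id":"rdc_k8tzzwi","emotion":"disapproval"})')
codes = parse_coding_response(sample)
print(len(codes))  # 2 records recovered after the repair
```

A stricter alternative is to reject malformed output and re-prompt the model; the repair above trades that safety for fewer wasted API calls.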