Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's impossible to have any system created by man immune to the faults of man. Look at the ideological battleground of big tech: One party says that its content is being filtered and hidden because of politics. So the operator of that algorithm has to decide how to react. Either they make changes to their processes to address the claim (which adds human politics to the situation) or they do nothing, and are accused of confirming the claims of political bias by the aggrieved party (adding human politics to the situation). Even in a hypothetical scenario where the original algorithm was objectively perfect and unbiased, any external claim of error by humans would add chaos and bias to the *perception* of that algorithm, to the point where it may be abandoned by human users. Its replacement system would then go through the same cycle of introduction > acceptance > widespread use > niche opposition > growing opposition > collapse and disuse. Technology for/by humans can never be perfect because humans can never be perfect.
reddit AI Moral Status 1674148455.0 ♥ 2
Coding Result
Responsibility: distributed
Reasoning: deontological
Policy: none
Emotion: resignation
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_j4y8mbi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_j4zijki","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_j4ziw8f","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_j50k5uo","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"rdc_j50y73q","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
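The raw response is a JSON array with one coding object per comment, keyed by a comment id. A minimal Python sketch of how such a batch response might be parsed and a single comment's coding looked up (the assumption that id `rdc_j50y73q` corresponds to the comment shown above is inferred from the coding result matching the last entry):

```python
import json

# Raw model output: a JSON array of per-comment codings (shortened here).
raw = '''[
  {"id": "rdc_j4y8mbi", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "rdc_j50y73q", "responsibility": "distributed", "reasoning": "deontological",
   "policy": "none", "emotion": "resignation"}
]'''

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# The id below is an assumption: the comment shown above appears to match
# the last entry of the batch response.
coding = codings["rdc_j50y73q"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
```

This index-by-id step is what lets a coding result table be rendered for any single comment out of a batched response.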