Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Computer science experiments on millions of people may not produce output that is desirable for all users. Being disadvantaged by an imperfect system is different from deliberate malfeasance. It's like the difference between a race-based comment and a racist comment. It's deliberate manipulation, or neglect by intent, that we should be concerned with. If the code powering the AI were open sourced and decentralised, this wouldn't be a concern. We should be moderating content ourselves, using our data and the worth that we can put into the system directly through monetization, especially now that we have blockchains. Systems are already being developed, and the article covers testing. One concern I have is the exclusion of the elderly, or of people who are otherwise excluded from some technology due to lack of hardware. Another concern is that the lack of a social media profile for prospective employers to check results in automatic exclusion.
reddit AI Harm Incident 1576189062.0 ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_fal82l6", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "rdc_fals3h0", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "rdc_fal5f0n", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_falr551", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability", "emotion": "resignation"},
  {"id": "rdc_fam8qex", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"}
]
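To inspect the exact model output for a coded comment, one way is to parse the raw response as JSON and look up the record by its id. A minimal sketch, assuming the response parses cleanly and that `rdc_falr551` is the id corresponding to the comment above (its values match the coding result shown):

```python
import json

# Raw LLM response copied verbatim from the record above.
raw = '''[ {"id":"rdc_fal82l6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_fals3h0","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"rdc_fal5f0n","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"rdc_falr551","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"resignation"}, {"id":"rdc_fam8qex","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]'''

records = json.loads(raw)

# Look up the record for this comment by id (hypothetical helper logic,
# not part of the annotation tool itself).
coded = next(r for r in records if r["id"] == "rdc_falr551")

print(coded["responsibility"])  # developer
print(coded["reasoning"])       # deontological
print(coded["policy"])          # liability
print(coded["emotion"])         # resignation
```

The batch response codes five comments at once; matching on id rather than position guards against the model reordering records.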