Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Entitlement is not a sturdy defense against progress. "It's progress!" is not justification to ignore the social harms of a technology, either. "Eugenics is progress of a sort, right? It'll create a better future for the generations it births. Why are people so entitled when they protest my work?" should hopefully be shot down, because it's understood that progress in *that* direction is fraught with ethical conundrums that must be handled with *extreme* caution. When it comes to AI, many people are warning that it pushes boundaries that require caution as well, and that accelerating too rapidly will cause widespread harm as a side effect of progress. It's something to progress on at a more measured pace, paying close attention to its impact on society and being ready to slow, or entirely halt, its advance at any moment. Besides, progress in content generation doesn't actually improve lives much compared to all of the wonderful things that could be accomplished with AI models built to simulate materials science, biochemistry, etc. Media generation is *lucrative*, though, so it's stealing talent from the rest of the AI industry and setting us back on practical applications. A glorified chatbot won't do much to help against climate change, as it slurps electricity to keep the public entertained, complacent, and distracted. In that sense, it's not progress, it's *regression* when compared to an alternate reality where it's a banned technology. We should focus first on the immediate threats, and on re-structuring society so that people won't be harmed nearly as much by the adoption of media models *before* investing billions in it and unleashing it on the world.
reddit · AI Jobs · 1713118814.0 · ♥ -7
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_kzkec4a", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "rdc_kzkee45", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "rdc_kzl8nw2", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_kzk1c27", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_kzk3x9v", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"}
]
```
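Since the model returns one JSON array covering a whole batch of comments, inspecting a single coded comment means parsing the array and looking up its record by `id`. A minimal sketch of that lookup, with a sanity check that each record carries exactly the four coding dimensions plus an `id` (the helper name `index_codings` and the key check are illustrative assumptions, not part of the original pipeline):

```python
import json

# Raw LLM response as shown above, truncated to two records for brevity.
raw = '''[
  {"id":"rdc_kzkec4a","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_kzkee45","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# The four coding dimensions, plus the comment id.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse a batch response and index the codings by comment id,
    rejecting records whose keys don't match the coding schema."""
    out = {}
    for rec in json.loads(raw_json):
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"unexpected keys in record {rec.get('id')!r}: {sorted(rec)}")
        out[rec["id"]] = rec
    return out

codings = index_codings(raw)
print(codings["rdc_kzkec4a"]["emotion"])  # → outrage
```

The record `rdc_kzkec4a` matches the coding result table above (responsibility `none`, reasoning `consequentialist`, policy `none`, emotion `outrage`), which is how the per-comment view is reconciled with the raw batch output.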