Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Three thoughts:
1. The humans trained to grade the essays are trained to *think like the computer*. The humans are evaluated by how well their scores match the AI, not the other way around. The humans learn to mark down creative uses of language and ignore ideas, circular and asinine arguments, etc.
2. If you are a legislator, you are failing your duty if you are not allowed to sample papers and ask *why* a student got the scores they did. "Why did this paper get a 3? What aspects, specifically, got marked down?"
3. The best students are the most harmed by this. There is a point when great writers learn to *break* conventions. Arguments that follow the intro -> 3 supporting points -> conclusion organizational structure are left behind for more advanced structures, and creative words are coined for emphasis ("greyish" and "sunburnt" fail my spellcheck, as quickly made-up examples). The most creative, compelling, inspiring writers have those traits ignored, while the most technically accurate writers score the highest and are treated as the elite.
reddit · AI Harm Incident 1566316754 · ♥ 17
Coding Result
Dimension       Value
Responsibility  government
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_exhshyw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_exhxyom", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_exhuddc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_exhued5", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dtxlv98", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
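A raw response like the one above is only usable if every record parses and every dimension takes an allowed value. The sketch below shows one way to validate such output; the `SCHEMA` sets are inferred from the values visible in these responses (plus their "none"/"unclear"/"indifference" fallbacks) and are an assumption, not the tool's actual codebook.

```python
import json

# Two records in the same shape as the raw LLM response above
# (truncated for brevity).
raw = """[
  {"id": "rdc_exhshyw", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_exhxyom", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Allowed values per dimension -- inferred from the responses shown here,
# not taken from any official codebook.
SCHEMA = {
    "responsibility": {"developer", "government", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"outrage", "indifference", "unclear"},
}

def validate(records):
    """Return (record id, dimension, offending value) for every violation."""
    errors = []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # empty list when all values are in-schema
```

Running the check before storing a coding result catches both malformed JSON (via the `json.loads` exception) and out-of-vocabulary labels, which LLM coders occasionally emit.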