Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is, we can’t always just fail them, without proof of AI use, and according to most academic integrity offices, the only way to have proof is if they have false references or no references. I’ve submitted papers to academic integrity that were copy/pasted bulleted ChatGPT lists. I was told the most I can do is subtract points for the parts that do not live up to the rubric. But, I can’t just give them a zero for AI use- that would get me into a whole heap of trouble. So they may get a crappy grade, but they don’t usually fail the course, and they KNOW that, which is the disheartening part.
reddit AI Harm Incident 1765768801.0 ♥ 6
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id": "rdc_nu37fal", "responsibility": "none", "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
 {"id": "rdc_nu6ruz7", "responsibility": "user", "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
 {"id": "rdc_nu26fqg", "responsibility": "user", "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
 {"id": "rdc_nu2kz7h", "responsibility": "user", "reasoning": "deontological",    "policy": "liability", "emotion": "mixed"},
 {"id": "rdc_nu3g7k8", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "resignation"}]
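The coding result above corresponds to one record in the raw batch response (the entry whose id is rdc_nu3g7k8). A minimal sketch of how such a record could be pulled out of the raw JSON; the field names match the response shown here, but the variable names are illustrative and not from the original pipeline:

```python
import json

# Abbreviated copy of the raw batch response shown above (two of the
# five records, reproduced verbatim from the source).
raw = (
    '[{"id":"rdc_nu37fal","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_nu3g7k8","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"resignation"}]'
)

# Parse the batch and index the coded records by their id.
records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Look up the record that matches this incident's coding result.
coded = by_id["rdc_nu3g7k8"]
print(coded["reasoning"])  # -> consequentialist
print(coded["policy"])     # -> regulate
```

The lookup-by-id step is the only nontrivial part: a batch response returns all coded comments at once, so each displayed result has to be matched back to its source comment by id.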