Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That, combined with the fact that training sets are not all-encompassing either, and are often poorly selected, makes the problem even worse. The classic example of this is a face recognition system whose training set contains mostly white faces: the machine learning model will learn to recognise white people, and fail spectacularly on anything else. (The article kinda mentions this too.) And if we add on top of this the fact that we technically don't fully know how specific ML models reach their conclusions, and therefore cannot validate them properly, we can start to wonder what good systems like these can do.
reddit · AI Harm Incident · 1576171595.0 · ♥ 16
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_fakmnp3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_fakqrdm", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_fanio3o", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_fal9bvn", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_fal61p2", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
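A minimal sketch of how a raw batch response like the one above might be parsed and validated before the per-comment codes are stored. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the allowed value sets are inferred only from the labels visible in this record and are assumptions — the real coding scheme may define more categories.

```python
import json

# Allowed values per dimension, inferred from labels seen in this record.
# ASSUMPTION: the full coding scheme may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"resignation", "indifference", "outrage", "fear", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding_dict},
    silently dropping entries with missing ids or out-of-scheme values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid is None:
            continue
        dims = {k: rec.get(k) for k in ALLOWED}
        # Keep the record only if every dimension has an allowed value.
        if all(dims[k] in ALLOWED[k] for k in ALLOWED):
            coded[cid] = dims
    return coded

# Example: one entry taken from the raw response above.
raw = ('[{"id":"rdc_fakqrdm","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(parse_batch(raw)["rdc_fakqrdm"]["emotion"])  # indifference
```

Validating against a fixed label set like this catches the common failure mode where the model invents a category outside the codebook, instead of letting it flow into downstream tallies.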