Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just wrote a comment about this describing what I found when I investigated: this type of structure is very common in academic writing and marketing copy. It is based on dialectical argument: thesis-antithesis-synthesis.

Academic: (Thesis) The tragedy in Romeo and Juliet is caused by the lovers' impulsive passion. (Antithesis) Their families' ancient grudge, however, sealed their fate from the start. (Synthesis) The true disaster is the collision of the two, as the lovers' desperate choices were a tragic product of the hateful world their elders created.

It and related rhetorical devices are also common in marketing and advertising materials: "They melt in your mouth, not in your hands." So your LLM finds this all over the place, and it gets overfitted in its training data. It is trying to sound both sophisticated and persuasive, and because it is much better at pattern recognition and production than actual abstract reasoning, it thinks it hits a hole in one every time it trots out this tired, lazy rhetorical device.

The concept came from Greek rhetoric. Hegel picked up on it, then Marx took it from Hegel and turned it into dialectical materialism: the struggle between the bourgeoisie (thesis) and the proletariat (antithesis) leading to a new societal form (synthesis). You've got the same basic structure used in history, sociology, literature, and all the liberal arts subjects, as well as formal debate, advertising, and marketing.

And I almost forgot: coding. (Thesis) Requirement A (e.g., speed) and (Antithesis) a conflicting Requirement B (e.g., efficiency) are resolved by (Synthesis) the final algorithm that balances both.

So it keeps coming across this structure in all these different disciplines. It's no wonder it thinks it's the best thing since buttered toast.

This was written by me. I used an LLM to help me with the examples, but any blame for mistakes is all on me.
reddit · AI Harm Incident · 1750109581.0 · ♥ 8
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_my4ygn6", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_my59rqb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_my5yzfp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_my5kmbr", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_my4rdo9", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
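The summary table above collapses the five per-response codes into a single value per dimension, but the aggregation rule itself is not shown on this page. A minimal sketch of one plausible rule, a simple majority vote over each dimension (the `majority` helper is hypothetical, not part of the coding tool):

```python
import json
from collections import Counter

# Per-response codes copied from the raw LLM response above.
raw = '''[
{"id":"rdc_my4ygn6","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"rdc_my59rqb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_my5yzfp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_my5kmbr","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_my4rdo9","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

codes = json.loads(raw)

def majority(dimension):
    # Most common code for one dimension; on a tie, Counter.most_common
    # falls back to first-seen order, so ties are order-dependent.
    counts = Counter(c[dimension] for c in codes)
    return counts.most_common(1)[0][0]

for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, "->", majority(dim))
```

Note that in this data `responsibility` (user/none, 2-2) and `reasoning` (deontological/unclear, 2-2) are tied, so a plain majority vote is ambiguous there; the table's values ("none", "unclear") suggest the actual rule breaks ties differently, perhaps toward the least committal code.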