Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
True, but the equation shouldn’t just stop there. You tweak the algorithm to be less biased. YouTube had a problem for a long time (still does, a bit) because their algorithm maximized engagement by steering users down deeper and deeper conspiracy rabbit holes. You’d start by trying to figure out what this whole “flat earth” thing is about, and a few dozen recommended selections later you’re trying to unravel the mysteries of George Soros and the underground space vampires that Q warned us about. Sure, you’re **very** engaged, but in an unhealthy way, up at 3am watching videos looking like the Pepe Silvia board. The Twitter algorithm isn’t there yet, but should probably be addressed before it gets there.
reddit AI Harm Incident 1628626617.0
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_h8g4uu5", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_h8g9znv", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_h8f8jgl", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_h8fs867", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_h8g4uyh", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"}
]
```
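To inspect a batched response like this, one approach is to parse the JSON array and index the records by `id` so a single coded comment can be looked up and checked against the table above. This is a minimal sketch, not the tool's actual code: it assumes the response is exactly the JSON array shown, and that the record `rdc_h8g9znv` is the one corresponding to the coding result above (its values match the table).

```python
import json

# The raw LLM response, copied verbatim from the log above.
raw = '''[
  {"id": "rdc_h8g4uu5", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_h8g9znv", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_h8f8jgl", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_h8fs867", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_h8g4uyh", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"}
]'''

records = json.loads(raw)

# Index records by id so any coded comment can be looked up directly.
by_id = {record["id"]: record for record in records}

# Assumed mapping: rdc_h8g9znv is the comment shown in the coding result above.
coded = by_id["rdc_h8g9znv"]
print(coded["responsibility"], coded["policy"], coded["emotion"])
```

Mentally checking this record against the table above confirms the lookup: the dimensions come back as `developer`, `regulate`, and `outrage`.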