Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The most present danger isn't 1984-style surveillance. The biggest danger is that existing prejudices are worked into the algorithms, either directly or by way of correlating variables with protected categories (e.g. marking people from certain predominantly minority neighborhoods instead of outright marking those minorities). Coupled with non-existent or fallacious feedback mechanisms, bad coding, and self-fulfilling prophecies, this can wreak havoc on people and communities. At the same time, the methods themselves are completely shielded from criticism because they often are black-box proprietary programs (you can't criticize what you can't see!) and covered with a thin veneer of respectability because people think math is objective (it is, but the way you use math isn't). That's what we should be fighting against first and foremost, not the hyperbolic 1984-level surveillance. We won't reach that stage before we've experienced several layers of society-crushing bad AI.
Source: reddit · Topic: AI Surveillance · Timestamp: 1643651019.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oi08v2b", "responsibility": "user",        "reasoning": "deontological",   "policy": "none",     "emotion": "fear"},
  {"id": "rdc_oi3i0at", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_ohu35kb", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_ohw4n4u", "responsibility": "none",        "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_hv0po6f", "responsibility": "developer",   "reasoning": "deontological",   "policy": "regulate", "emotion": "fear"}
]
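A minimal sketch of how a response like the one above could be parsed and checked before use, assuming the schema inferred from the sample (each record carries `id`, `responsibility`, `reasoning`, `policy`, and `emotion`); the variable names and the validation step are illustrative, not part of the actual pipeline:

```python
import json

# Two records copied from the raw response above; a real run would use the full array.
raw = (
    '[{"id":"rdc_oi08v2b","responsibility":"user","reasoning":"deontological",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"rdc_hv0po6f","responsibility":"developer","reasoning":"deontological",'
    '"policy":"regulate","emotion":"fear"}]'
)

# Keys every record is assumed to carry, per the sample output.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
coded = {}
for rec in records:
    missing = EXPECTED_KEYS - rec.keys()
    if missing:
        # Reject malformed records instead of silently coding them.
        raise ValueError(f"record {rec.get('id', '?')} missing keys: {missing}")
    # Index the four coding dimensions by comment id.
    coded[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS - {"id"}}

print(coded["rdc_hv0po6f"]["policy"])  # regulate
```

Indexing by `id` makes it easy to look up the coding result for a single comment, as the table above does for `rdc_hv0po6f`.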