Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I hate this argument. No, we COULD put it back into the cage. It's just extremely hard with the state of the world as it is right now. Ignoring the bureaucratic nightmare of modern governments, people themselves are just far more pessimistic and apathetic about the future and incurring change than they were like 100 years ago. We are no longer the type of people to demand revolution; we are now the type of people that complains online and doesn't bother doing anything about it. Beyond that, the blind acceptance and accelerationism of illusory progress is also maddening. The industrial revolution was a very rough period for millions. So much suffering. Yet in the end it produced a positive outcome. But thinking that the end justifies the means is fallacious, and why is the suffering meaningless? The utilitarian measure would be to slow it and make sure development is as safe and adjusted as possible, rather than just diving headfirst and not caring if you end up ruining the lives of a couple billion in the process. Being careful and slow with AI is the utilitarian response here.
reddit · AI Governance · 1680087175.0 · ♥ 42
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_je4kq0z", "responsibility": "government",  "reasoning": "unclear",          "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_je4irxd", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",    "emotion": "fear"},
  {"id": "rdc_je4wqct", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_je4m8j3", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",    "emotion": "outrage"},
  {"id": "rdc_je4hkk0", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear", "emotion": "resignation"}
]
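A minimal sketch of how a coding result like the one above could be recovered from the raw response: the model returns one JSON array covering several comments, so the record for a given comment is looked up by its `id`. This assumes only the JSON shape shown above; the `code_for` helper and the truncated two-record sample are illustrative, not part of the actual pipeline.

```python
import json

# Abbreviated raw LLM response in the shape shown above (illustrative sample,
# containing only two of the five records).
raw = (
    '[{"id":"rdc_je4kq0z","responsibility":"government","reasoning":"unclear",'
    '"policy":"unclear","emotion":"unclear"},'
    '{"id":"rdc_je4hkk0","responsibility":"distributed","reasoning":"mixed",'
    '"policy":"unclear","emotion":"resignation"}]'
)

def code_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a batch response.

    Raises KeyError if the model did not emit a record for that id.
    """
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

result = code_for(raw, "rdc_je4hkk0")
print(result["responsibility"], result["emotion"])  # distributed resignation
```

Keying by `id` rather than by position makes the lookup robust if the model reorders or drops records in a batch.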