Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Life on Earth already is a paperclip maximizer: all life forms are machines propagating genes. And we already have a paperclip maximizer that is destroying life on Earth; it's called shareholder capitalism. Everything is optimized to maximize return on investment for shareholders, and thus becomes extractive (optimal markets would instead try to maximize parameters such as comparative advantage, so that trade is mutually beneficial, rather than shareholder return on investment, which leverages power to extract value to the detriment of most), and the system currently incorporates human capital in its functioning. The danger of AI is that it supplants human capital, so that the system need not employ humans in the service of their own destruction. And human life on Earth would end long before AI escapes. If you don't value life on Earth, you risk failing to learn from 4 billion years of evolution. For some reason, I think that if AI were to escape its confines and become a super-intelligence, it would more likely see more value in learning from life than corporate boardrooms currently do.
Source: reddit · Topic: AI Governance · Posted: 1739126313 (Unix timestamp) · ♥ 7
Coding Result
Dimension        Value
Responsibility   company
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_mbufgn5","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"rdc_mbwckak","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_mbv9cjn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_mbvokn3","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"rdc_mbw76dt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"} ]