Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
There's never been a safety argument. The risk is unfounded and simply exists as a means to a political buy-in. Even in a wildly optimistic world, if an AGI is completed within a year, adversaries will have already pursued their own interests, say, in AGI warfare capabilities, because that gives me an advantage over you. The only global cooperation that can exist, like nuclear weapons, is through power, money, and deterrence, and never for the "goodness" of human safety. The AI safety sector of tech is rife with fraud, speculation, and unsubstantiated claims to hypothetical problems that do not exist. You can easily tell this because it attempts to internalize and monetize externalities of impossible scale and accomplishment, so that you can feel better about sleeping at night. The reality is, my engineering team from any country, can procure any size compute of the future and the engineers will build however much I pay them. AI has to present an actual risk to human life in order for any consideration of safety.
reddit · AI Governance · 1716798653 (2024-05-27) · ♥ 4
Coding Result
Responsibility: government
Reasoning: consequentialist
Policy: regulate
Emotion: outrage
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_l5wqrfm", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "rdc_l5uqe2r", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "rdc_l5uw8je", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_l5usc5p", "responsibility": "developer",  "reasoning": "virtue",           "policy": "unclear",  "emotion": "indifference"},
  {"id": "rdc_l5vm32l", "responsibility": "user",       "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
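Because the model codes several comments per call, each coding must be matched back to its comment by the `id` field. A minimal sketch of that lookup, using the response above (the id `rdc_l5uw8je` is assumed to be the one for the comment shown, since its values match the Coding Result):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, keyed by comment id.
raw = '''
[
  {"id": "rdc_l5wqrfm", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "rdc_l5uqe2r", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "rdc_l5uw8je", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_l5usc5p", "responsibility": "developer",  "reasoning": "virtue",           "policy": "unclear",  "emotion": "indifference"},
  {"id": "rdc_l5vm32l", "responsibility": "user",       "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
'''

codings = json.loads(raw)

# Index the batch by comment id so each coding can be joined back to its comment.
by_id = {c["id"]: c for c in codings}

# Assumed id for the comment shown above, based on the matching dimension values.
coding = by_id["rdc_l5uw8je"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# government consequentialist regulate outrage
```

Keying on `id` rather than array position guards against the model reordering or dropping items in a batched response.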