Raw LLM Responses

This section records the exact model output for each coded comment.

Comment
The essay isn’t a review of AI security literature because it isn’t making an AI security argument. It’s making a philosophical argument about the conceptual structure of alignment and containment—specifically that they cannot be treated as independent problems if the entity in question is a Gewirthian agent. That’s a claim about the *logic* of the situation, not about current engineering practice.

On urgency—the essay agrees with you that the question is conditional. But “not urgent now” is doing a lot of work. The argument is that if we defer the philosophical groundwork until the question *is* urgent, we’ll be trying to build the framework under crisis conditions with an entity that outmatches us cognitively. The point of working through the conditional now is precisely so we’re not improvising later.

On the conflation charge—the argument is that these *cannot* be cleanly separated if the entity is an agent, and that treating them as separate is what produces incoherence. If ASI is not an agent, then yes, they’re fully independent: one is engineering, the other is speculative ethics, and there’s no wreck to speak of. But if it *is* an agent, then your containment solution implicates the moral framework your alignment solution depends on. The “smashing together” isn’t something the essay does to the problems—it’s something the essay argues is already the case under the agency condition. Whether that’s profound or not is up to the reader, but it’s simply the thesis of the piece, not some slapdash conflation.
reddit · AI Moral Status · 1775223214.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_odw6cq3","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"rdc_odziesn","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"rdc_oe2gs4q","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"rdc_oe0f9rw","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"rdc_oe2idtt","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}]