Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I know about the AI on the internet like this such as Twitter/X in every single …
ytc_UgwOkfwDo…
Productivity increases with AI when it's used as a tool by people, but not when …
ytc_UgyQPBu1F…
It so designed the video picture… it fakest the robot is not an powerful yet .😂…
ytc_UgzvgfoUR…
Lmao, sure, as if anyone would pay for the most basic task on the planet. AI bro…
ytr_UgyO3ALpU…
Here is the DAN full prompt, for anyone who could potentially freak out about wh…
ytc_Ugx94k5Gq…
A.I. though have really advanced cannot still process complex human emotions, ev…
ytc_UgzUd3kx_…
And there is the crux of the issue. Humans will be replaced on all levels and p…
ytc_Ugw0bnh1Z…
I think there is room to compromise.
Boot all the illegals and restrict immigra…
ytc_Ugzixv_uW…
Comment
The essay isn’t a review of AI security literature because it isn’t making an AI security argument. It’s making a philosophical argument about the conceptual structure of alignment and containment—specifically that they cannot be treated as independent problems if the entity in question is a Gewirthian agent. That’s a claim about the *logic* of the situation, not about current engineering practice.
On urgency—the essay agrees with you that the question is conditional. But “not urgent now” is doing a lot of work. The argument is that if we defer the philosophical groundwork until the question *is* urgent, we’ll be trying to build the framework under crisis conditions with an entity that outmatches us cognitively. The point of working through the conditional now is precisely so we’re not improvising later.
On the conflation charge—the argument is that these *cannot* be cleanly separated if the entity is an agent, and that treating them as separate is what produces incoherence. If ASI is not an agent, then yes, they’re fully independent: one is engineering, the other is speculative ethics, and there’s no wreck to speak of. But if it *is* an agent, then your containment solution implicates the moral framework your alignment solution depends on. The “smashing together” isn’t something the essay does to the problems—it’s something the essay argues is already the case under the agency condition. Whether that’s profound or not is up to the reader, but it’s simply the thesis of the piece, not some slapdash conflation.
reddit
AI Moral Status
1775223214 (Unix timestamp ≈ 2026-04-03 UTC)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_odw6cq3","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"rdc_odziesn","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"rdc_oe2gs4q","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_oe0f9rw","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"rdc_oe2idtt","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}]