Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI is not self-aware. This has already been proven by Roger Penrose in the book …
ytc_UgyZ7KxMg…
@artorhen if we bring up two images and without any context you won't be able to…
ytr_UgyJiyNMh…
I love this sm, it's a shame people will use these amazing pieces to train AI 😭…
ytc_Ugxrk43J9…
To Chat GPT:
"Please make a caption more "Scrooge money duck evil villain" than …
ytc_UgxnFcwip…
This is the reality of those people; the girls it calls beautiful, strip the skin off them too, AI…
ytc_UgwRrvgjp…
ai is fraud when it comes to art because the person doing ai art is stealing mul…
ytc_Ugz5UF_gQ…
This guy is under the impression that AI means “Afternoon Intake,” and we both k…
ytc_UgyxEY01k…
He didn't say that. He said that's a probability if AI safety won't become a pri…
ytr_Ugx4BqeuM…
Comment
The Pentagon's three threats are mutually exclusive. Terminate the contract, designate a supply chain risk, AND invoke the DPA? Termination and a supply chain risk designation both mean "we don't want you." The DPA means "we're forcing you to work with us."
**This is a pressure play.**
A former DoJ-DoD liaison flagged this exact contradiction publicly, calling the supply chain risk designation "punitive" rather than legitimate. That's a pretty damning read from someone who used to work the DoD liaison role.
The Defense Production Act was designed to compel manufacturing of commercially available products during emergencies, e.g. ventilators, masks, vaccines. Anthropic's classified deployment is custom-built software tailored to sensitive government use.
That's a very different legal question. If Anthropic challenged this in court, the government would need to justify that a Cold War-era manufacturing law applies to compelling a company to remove safety guardrails from bespoke software. Untested legal territory and the government likely knows it.
The Pentagon admitted on the record they need Anthropic and they need them now. Claude is the only model on classified networks. There's no backup. So why issue a public ultimatum you might not be able to enforce, against a company you can't replace? A few possibilities...
One - this is theatre for a domestic audience. The administration has been framing AI safety as "woke" and this positions them as tough on companies that won't comply. Sacks has already laid the groundwork for that narrative.
Two - it's a negotiation tactic. Set an extreme deadline, make extreme threats, then settle for something in the middle - maybe Anthropic agrees to loosen some restrictions short of the two red lines, and both sides claim a win.
Three - it's genuinely about establishing precedent. If the government can compel an AI company to remove safety guardrails via the DPA, that's a template they can use on any tech company going forward. The specific dis
reddit
AI Responsibility
2026-02-25 (Unix timestamp 1772002695)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_o792oiw","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_o798zf9","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"rdc_o7ab0zw","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"rdc_o7bk48a","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_o7c1d5v","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
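The raw response above is a JSON array, one record per comment, with the four coding dimensions shown in the table. A minimal sketch of how such a batch could be parsed and keyed by comment ID for lookup (the allowed value sets below are assumed from the visible samples only; the real codebook may enumerate more):

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# This is an assumption, not the pipeline's actual codebook.
SCHEMA = {
    "responsibility": {"government"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "mixed", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array) into {comment_id: codes}."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in SCHEMA}
        # Warn on values outside the assumed codebook instead of failing,
        # since the real value sets may be wider than SCHEMA.
        for dim, val in codes.items():
            if val not in SCHEMA[dim]:
                print(f"warning: {rec['id']}: unexpected {dim}={val!r}")
        coded[rec["id"]] = codes
    return coded

raw = '''[
  {"id":"rdc_o7ab0zw","responsibility":"government",
   "reasoning":"deontological","policy":"liability","emotion":"mixed"}
]'''
coded = parse_batch(raw)
print(coded["rdc_o7ab0zw"]["policy"])  # liability
```

Keying the result by `id` is what makes the "Look up by comment ID" view above a dictionary access rather than a scan.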