Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- The more u highlight this, the more everyone will ignore it. As technology advan… (ytc_UgzlAUqyD…)
- Chatgpt below the typing bar says that chatgpt can make mistakes but mistake suc… (ytr_UgywZ_Sgj…)
- Love the channel...but I'm not buying this whole "AI" has emotional insecurity i… (ytc_UgysCpGnF…)
- I asked grok have you ever lied before and the answer I got wasn't what I was ex… (ytc_Ugzbrg91l…)
- I don't really care about trucking. I don't think we should be worried about peo… (ytc_UgwemIAWV…)
- This is the type of self driving testing we need to see -- kudos to both for the… (ytc_UgyF_FDz_…)
- NO DO NOT USE AI CHATBOTS FOR MENTAL HEALTH. Literally not doing so is better th… (ytc_Ugxt5o5H0…)
- I get where you're coming from, but just as not adopting new technology can be a… (ytr_Ugx48U8dT…)
Comment
You guys do know that Anthropic is ALREADY working with Palantir that works with the US Government/military right?
They've already been integrated for over a year and in use the whole time.
You're not doing any "switching to make a stand" BS. You're just being dumb.
The whole nonsense from the recent headline is just that... nonsense. It's most likely the government wants more control over the model, with access to stuff Anthropic doesn't want to disclose because of competition, or simply not being able to deliver on what the absolute geniuses in the government think it can do, like real-time decision making when it comes to shooting targets.
Not some "we're the good ethical guys" BS.
If you don't want to support an AI company for working with the US government... You sadly will need to stop using AI altogether and move to self hosted open source models to even come close to "not supporting" them.
All of this will change and contracts will be signed once the AI can do what the US Government wants it to do without biting Anthropic's ass when they go back to them with issues like "why can't it do X properly when we agreed on it"...
Source: reddit
Topic: AI Harm Incident
Posted: 1772356181.0 (Unix timestamp)
Likes: ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_o9j3ruc","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"rdc_o810wsu","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"rdc_o80xx1n","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"rdc_o7xeoeb","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_o7ws890","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
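The "look up by comment ID" step above can be sketched as follows: parse the raw LLM response as a JSON array and index it by the `id` field so any comment's coded dimensions can be retrieved directly. This is a minimal illustration assuming the response is exactly the JSON structure shown; the variable and dictionary names are ours, not the tool's.

```python
import json

# Raw LLM response: a JSON array of per-comment codes, abbreviated to two
# entries from the array shown above. Assumed structure, not the tool's API.
raw_response = """[
  {"id":"rdc_o9j3ruc","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"rdc_o80xx1n","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]"""

# Index the rows by comment ID so one comment's coding is a single lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for one comment, mirroring the Coding Result table.
code = codes_by_id["rdc_o80xx1n"]
print(code["responsibility"], code["reasoning"])  # government deontological
```

Keying on `id` also makes it easy to detect missing or duplicate IDs when the model's response is validated against the batch of comments that was sent.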