Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I use ChatGPT for assistance with my health all of the time but I treat it like …
ytc_Ugy-5qspW…
A I should be afforded rights. Your character model is a characterization of lea…
ytc_UgxIiqH-0…
Ai art is what us untalenteds use for a thing we're doing. To claim it is anythi…
ytc_UgxHbZ3gk…
The only ai danger is rusted parts. Which is danger to themselves. If ai were in…
ytc_Ugy6EWWO6…
Ai was a tool for helping to come up with concepts and tutor people. But we abus…
ytc_UgzQtyaoj…
The biggest question? Would AI be wrong in sending us back to the stone age? I d…
ytc_UgxP5F09k…
We shouldn’t be giving ai this much control and power when it’s still this unrel…
ytc_UgwLsqx2C…
If they win you could take their entire source code as research to train an AI t…
ytr_UgzgiRof4…
Comment
To your original question (on accountability when AI-assisted targeting is being used) I'd assume that - like when analysts were deciding (with 84% accuracy) who to strike - there is ultimately an operational commander who is "signing off" (accountable) for strikes recommended (either by analysts, by a little-p palantir, or by AI / big P Palantir). But that, of course, is not the real issue. The issue is that systems of accountability - particularly those with a built-in lack of transparency - often either have their own agenda or can be overridden by powerful stakeholders. Especially if this is, as posited, a high profile live-fire demo of shiny new techbro military gadgets.

To me, six different stories make sense... as they iterated through six different waves of "Oh, shit!" and throwing something against the wall to see what sticks. The truth is that they likely did not apply the same diligence to thinking through the structural changes needed around accountability in a world where "analysis" is outsourced to tech at scale. Overhauling accountability processes is tough (and sometimes political) work, just not something techbros care about, and while the military would normally be all over this they're currently being run by a klown kollege. Sadly, it's not hard to see a scenario where the core issue of how accountability works in the new "analyst automation at scale" world was just not seriously considered.

Their high profile demo blew up in their face, they are (frantically at first, then through "flooding the zone") minimizing information getting out, and I'm sure there are a bunch of career military officers in conference rooms with their heads in their hands tasked with how to "fix this".

While I am totally with you on the Ellison / Thiel / Palantir hate-train (and their abominations certainly are an important part of these atrocities), to me this looks like the bigger issue is around how accountability works in an AI-assisted world... and how the HELL it is possible that we deployed live-fire systems at scale without figuring that out.
youtube
2026-01-27T23:2…
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzrauRU2lLXJ50L_QZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz--SYZUWjLL1F_gXp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxVGQmMvDbPVe79zJB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx8mqqNdD7-rG-byc54AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy6c1IBiJiA7AsJCDV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw6iYnVMIENJg2Ssxp4AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgweCb2au5G8YvuMy3J4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwR5sXJm3Z_zawcTYB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxZwUr5B5uoc1Tejbd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyyK_PcuSt0502yy_R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
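A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the sample output shown here (the actual codebook may define more categories), and `validate_coding` is an illustrative helper, not part of any real pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample LLM output above.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"user", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose codes are in-vocabulary."""
    entries = json.loads(raw)
    return [
        entry for entry in entries
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical two-entry response; the second uses an unknown responsibility code.
raw = (
    '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
    '"policy":"liability","emotion":"outrage"},'
    '{"id":"ytc_y","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print(len(validate_coding(raw)))  # → 1 (the out-of-vocabulary entry is dropped)
```

Dropping (rather than auto-correcting) out-of-vocabulary codes keeps the coded dataset clean and makes model drift visible: a spike in rejected entries signals the prompt or model needs attention.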