Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"Neural transduction theory is my favorite. The mind as a transducer of conscious…" (ytc_UgyVCWbBc…)
"it is ironic this program keeps getting interrupted by a.i. commercials, telling…" (ytc_UgzuIzVeO…)
"AI generators aren't television or TikTok. They don't just throw random images …" (ytr_UgzXOPMfG…)
"@orathaic I think you're just regurgitating the same points I've heard a thousan…" (ytr_UgxUjihp7…)
"they would never need rights. everything a robot does is pre programmed. if(this…" (ytc_UggQo43e5…)
"I'm curious as to what you think about AI image alters when used for artistic en…" (ytc_Ugx3c_geX…)
"@AnDr3w066 Andrew, are you gonna argue or just say the same thing? Because i kno…" (ytr_UgyjmaSyt…)
"No I think the AI might have hit on to something maybe it looks at a social stat…" (ytc_UgyfZvoIV…)
Comment
Except way, way dumber. The AI in war games was an example of traditional reinforcement learning taken to the extreme- it could discover inconsistencies in its own understanding, design tests, acquire new knowledge, and extrapolate that knowledge to other scenarios, while operating with an overarching goal to focus its actions.
A transformer model (what LLMs are based on) is fundamentally incapable of this kind of learning, no matter how big you make it. The fact that the military wants to use LLMs to decide who to kill is fucking terrifying, not least because it shows that the people running the show have no fucking idea how the technology they're using works and what its limitations are.
Source: reddit
Category: AI Responsibility
Timestamp: 1771981333.0 (Unix epoch seconds)
Score: ♥ 1585
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jkrf68b","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"rdc_jksdu2y","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_jksupl6","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_o788tt3","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_o78s7wc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
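The raw response above is a JSON array of per-comment codings keyed by comment ID. A lookup like the "Look up by comment ID" feature at the top of this page can be sketched in Python; the function and variable names here are hypothetical, and the sample payload is abridged from the response shown above:

```python
import json

# Abridged raw model output for one batch, in the format shown above.
raw_response = '''
[
  {"id": "rdc_jkrf68b", "responsibility": "company", "reasoning": "unclear",
   "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_o78s7wc", "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]
'''

# The four coding dimensions displayed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Default any dimension the model omitted to "unclear".
        by_id[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
print(codings["rdc_o78s7wc"]["emotion"])  # indifference
```

Note that the Coding Result table above (all dimensions "unclear", emotion "indifference") corresponds to the `rdc_o78s7wc` entry in the raw array, which is the kind of cross-check this view is meant to support.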