Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thinking chains don't break after a while because the AI is buggy or too dumb, but because there is some timeout or token restriction for you as an end user. Very easy to remove. We know for a fact that military spending isn't bound to bean-counting. If they have to spend 20.000 or 200.000 or even 2 million dollars for compute per seat to get desired results, they will do it. It's also safe to assume that if they want, they will get previews of coming models that are still in safety testing for consumers, just like internal testers or even some selected corporations and influencers get them earlier. OP was writing with a model 1-2 generations ahead of the consumer model, not an internal development/testing one.
Source: reddit · AI Moral Status · 1772390259.0 · ♥ 3
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o81wsdp", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_o82uncp", "responsibility": "government", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_o80yttn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_o81rnx6", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_o83gqjv", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
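The raw response is a JSON array with one coding object per comment. A minimal sketch of how such a batch response might be parsed and indexed by comment id (the field names are taken from the record above; the pipeline's actual parsing code is not shown here):

```python
import json

# Raw LLM response copied verbatim from the record above.
raw = """
[ {"id":"rdc_o81wsdp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_o82uncp","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_o80yttn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_o81rnx6","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_o83gqjv","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]
"""

def index_codings(raw_json: str) -> dict:
    """Parse a batch coding response and index the records by comment id."""
    records = json.loads(raw_json)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw)
print(codings["rdc_o83gqjv"]["emotion"])  # -> fear
```

Indexing by id makes it straightforward to join each coding back to its source comment, as in the table above.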