Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm hijacking your comment to immediately disarm and refute this claim (and, honestly, conspiracy theory). We're in an AI arms race right now. Honestly, if you ask me, we release models *too quickly*, without the care and safety measures we should employ. We train models as quickly as possible, pray they pass all automated safety checks and benchmarks, and then release them to the public. Usually the time from the training checkpoint being finished to public release is 3-6 weeks. I *wish* we had the luxury of holding back two generations of models to run all the safety and alignment tests before releasing them to the public. Training an AI model takes a lot of compute, capital, and time, and we don't have the luxury of training a bunch of them, holding them back, and releasing only well-tested models to the public.

This is how the current pipeline at frontier labs works:

1) We write papers about potential new techniques we could apply to AI.
2) We run small-scale experiments on some of those papers to see if the techniques work.
3) For the experiments that were successful, we run scaling tests to see whether the results hold at bigger scales.
4) We combine multiple successful experiments and roll them into the next big LLM, together with a bigger compute budget and more refined datasets, to deliver the next jump in capabilities.

There is no "holding back" or leverage here. In fact, we don't even have the compute to do all the experiments we want; we're highly bottlenecked by the amount of compute in existence right now. This is also why there won't be a bubble pop: we have *so much more* we could throw at these models to improve them, since we have a backlog of new techniques we haven't even properly tested at scale because we simply don't have the compute and time to test and integrate them all. But this weird conspiracy theory that we're somehow holding back SOTA models from the general public is extremely weird, and in fact the *opposite* is happening.
reddit AI Moral Status 1772356705.0 ♥ 84
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o80okli", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "unclear"},
  {"id": "rdc_o8120wk", "responsibility": "company",   "reasoning": "consequentialist", "policy": "ban",      "emotion": "fear"},
  {"id": "rdc_o8168xb", "responsibility": "company",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "rdc_o80rh1p", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "rdc_o80ytmz", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
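A minimal sketch of how a raw batch response like the one above could be turned into the per-comment coding table shown earlier. This is a hypothetical illustration, not the tool's actual pipeline: the variable names are made up, and the assumption that `rdc_o80ytmz` is the id of the comment on this page is inferred from the matching dimension values.

```python
import json

# Raw LLM response: a JSON array with one coding record per comment id.
# (Two records copied from the response above; the full array has five.)
raw = '''[
  {"id": "rdc_o80okli", "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_o80ytmz", "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]'''

# Index the batch by comment id so any single comment's codes can be looked up.
records = {r["id"]: r for r in json.loads(raw)}

# Pull out the record for the comment displayed on this page (assumed id).
code = records["rdc_o80ytmz"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {code[dim]}")
```

Indexing by `id` rather than relying on array order guards against the model returning records in a different order than the comments were sent.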