Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I'm hijacking your comment to immediately disarm and refute this claim (which is, honestly, a conspiracy theory).
We're in an AI arms race right now. Honestly, if you ask me, we release models *too quickly*, without the care and safety measures we should employ. We train models as quickly as possible, pray that they pass all automated safety checks and benchmarks, and then release them to the public. Usually the time from the training checkpoint being finished to public release is 3 to 6 weeks.
I *wish* we had the luxury of having 2 generations of models held back to do all the safety and alignment tests on before releasing them to the public.
Training an AI model takes a lot of compute, capital and time and we don't have the luxury to just train a bunch of them, hold them back and release well-tested models to the public.
This is what the current pipeline looks like at frontier labs:
1) We write papers about potential new techniques we could apply to AI
2) We do small-scale experiments on some of the papers to see if it works
3) On the experiments that were successful we do scaling tests where we scale up the experiments to see if it holds true on the bigger scales
4) We combine multiple successful experiments and roll them into the next big LLM, together with a bigger compute budget and more refined datasets, to bring the next jump in capabilities
There is no "holding back" or leverage here. In fact we don't even have the compute to do all the experiments we want; we're highly bottlenecked by the amount of compute in existence right now. This is also why there won't be a bubble pop: we have *so much more* we could throw at these models to improve them, with a backlog of new techniques we haven't even properly tested at scale because we simply don't have the compute and time to test and integrate them all.
So this conspiracy theory that we're somehow holding back SOTA models from the general public is simply wrong; in fact, the *opposite* is happening.
reddit · AI Moral Status · 1772356705.0 · ♥ 84
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o80okli","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_o8120wk","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_o8168xb","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_o80rh1p","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_o80ytmz","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
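The raw response is a JSON array with one object per coded comment, carrying the same four dimensions shown in the results table above. A minimal sketch of turning that batch into a per-comment lookup (the `codes_by_id` helper is hypothetical; the field names and values are taken from the response shown):

```python
import json

# The coder's raw output: one object per comment ID, with the four
# coded dimensions (responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id":"rdc_o80okli","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_o8120wk","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_o8168xb","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_o80rh1p","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_o80ytmz","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

def codes_by_id(response_text: str) -> dict:
    """Index the batch of coded comments by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = codes_by_id(raw_response)
# The last record matches the Coding Result table above:
print(codes["rdc_o80ytmz"]["policy"])   # -> regulate
print(codes["rdc_o80ytmz"]["emotion"])  # -> outrage
```

Batching comments into one array response like this means a single model call codes several comments, at the cost of having to validate that every requested ID actually appears in the output.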