Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Right now all you are looking at is the core LLM model starting to plateau. And I agree that GPT-6 probably won't feel that much smarter than 4 and 5 for everyday questions. The thing is, they already know almost everything there is to know about basic stuff, so how much better can they get?
The part where I think you are wrong is that this new tool is just a foundation to many more innovations that can happen on top of it. Sure, we don't know what those will be yet. But it's a bit silly and in denial of the usual pattern of history if you think a new technology like this won't disrupt more things in the future as humans continue to innovate on top of it.
Did we expect iPhones and the internet when we got the first PC?
The current form of ChatGPT itself is probably going to look extremely primitive in 5 years. There could be all sorts of multi-modal, long running, LLM based agents that we interact with across both our local machines and the internet. Things that can perfectly remember the context of everything you are doing and armed with an encyclopedia of human knowledge. You don't think that could be more powerful?
I listened to Sam Altman on a podcast about GPT-5. He says two things which I believe are true:
- Having LLMs generate ideas and then work with humans to run experiments could work. These models, if fed all the world's biomedical data, could contain encodings that connect things no human has realized yet. And it could lead to acceleration in learning about how to cure disease.
- A higher percentage of software development will become dictated to an LLM, versus done by hand. It may get good enough to become a preferred default coding style for many people.
reddit · AI Jobs · 2025-08-09 (1754753680) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_n7sgvfj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_n7t0x2o","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_n7ttv31","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_n7ty6dt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_n7tz9od","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
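A raw response like the one above can be turned into per-comment codings with a small parser. The sketch below is a minimal illustration, not the tool's actual implementation: the `SCHEMA` of allowed values is inferred only from the labels visible in this response (the real codebook likely defines more categories), and any value outside it is coerced to `"unclear"`, mirroring the fallback shown in the Coding Result table.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# example response above; the actual codebook may list more categories.
SCHEMA = {
    "responsibility": {"none", "company", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage", "unclear"},
}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into
    {comment_id: {dimension: value}}, coercing out-of-schema values to
    'unclear' so one malformed label doesn't poison the dataset."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        out[rec["id"]] = {
            dim: (rec.get(dim) if rec.get(dim) in allowed else "unclear")
            for dim, allowed in SCHEMA.items()
        }
    return out


raw = ('[{"id":"rdc_n7sgvfj","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(parse_codings(raw)["rdc_n7sgvfj"]["emotion"])  # indifference
```

Coercing unknown labels to `"unclear"` rather than raising keeps a batch run going when the model drifts off the codebook; a stricter pipeline could log or reject such records instead.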