Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The context window is limited because making it bigger would be computationally expensive.
Transformers, the neural network architecture that modern LLMs are based on, rely on a mechanism called "attention": a dynamically computed function relating the importance of each token to every other token in a session. Since it maps every token to every other token, a context window N tokens long requires the network to compute an N×N matrix of attention scores. (And on top of that, there need to be multiple attention heads, because words in a text stream relate to each other in more than one way.)
Of course, that's not to say it's impossible to increase this context window or that it won't ever happen, just that it's hard to do so.
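The quadratic cost described above can be made concrete with a minimal sketch of single-head scaled dot-product attention in NumPy. This is an illustration of the general mechanism, not the implementation of any particular model; the sequence length and embedding size are arbitrary choices.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: the score matrix is N x N for N tokens."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # shape (N, N): every token scored against every other
    # Row-wise softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

N, d = 8, 16  # 8 tokens, 16-dimensional embeddings (arbitrary sizes for the demo)
rng = np.random.default_rng(0)
X = rng.normal(size=(N, d))
out, w = scaled_dot_product_attention(X, X, X)
print(w.shape)  # (8, 8)
```

The weight matrix `w` grows as N², which is why doubling the context window roughly quadruples the attention cost per head.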
I can only imagine that the OpenAI devs are carefully weighing each word in the system prompt, since every word there detracts from the user's share of this limited resource. There must be something terribly wrong with Seaborn; I wonder what that is!
reddit
AI Responsibility
1720290333.0
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_lbxfc4y","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_lb11zrc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_lb18txj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_lb2mlm2","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"rdc_lb3y6wl","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}]