Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The context window is limited because making it bigger would be computationally expensive. Transformers, the neural network architecture that modern LLMs are based on, rely on a mechanism called "attention," a learned function that dynamically scores the importance of each token relative to every other token in a session. Since it maps every token to every other token, a context window N tokens long requires the network to compute an N×N matrix of scores. (And on top of that, there need to be multiple attention heads, because words in a text stream relate to each other in more than one way.) Of course, that's not to say it's impossible to increase this context window or that it will never happen, just that it's hard to do. I can only imagine that the OpenAI devs are carefully weighing each word in the system prompt, since it detracts from the user's share of this limited resource. There must be something terribly wrong with Seaborn; I wonder what that is!
reddit · AI Responsibility · 1720290333.0 · ♥ 2
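
For reference, here is a minimal NumPy sketch of the quadratic cost the comment describes; the function name and shapes are illustrative assumptions, not anything from the comment or its coding pipeline:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # q, k, v: arrays of shape (n, d) for a context of n tokens.
    # The score matrix is (n, n): every token scored against every
    # other token, which is the quadratic cost the comment points at.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                        # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ v                                   # (n, d)

n, d = 8, 4
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
print(scaled_dot_product_attention(q, k, v).shape)       # (8, 4)
```

Doubling n quadruples the score matrix, and multi-head attention repeats this computation once per head.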
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_lbxfc4y","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_lb11zrc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_lb18txj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_lb2mlm2","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"rdc_lb3y6wl","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}]