Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> none of them can even do full a page without starting to screw up the plotline

All of the big models can do "a full page" without losing context. They can go much further than one page. It's true that, at present, long-horizon tasks like writing a full novel aren't going to work well. But a single page? That's easy. I mean, really easy.

As proof, here's a complete short story, mostly written by the model, guided by my prompts: https://chatgpt.com/share/67466fcc-01c4-800e-8a28-347c59fc6eb1

It's completely cohesive. 2585 words. A novel typically has between 200 and 300 words on a page. Let's go with 250. That's 10 pages and some change. The story could use editing, but it's an *excellent* first draft.

The context buffer of GPT-4-turbo used for that persona is around 24000 words. Minus the persona instructions (which are pretty long), there's still room for around 90 pages. In practice, the model will start losing attention and forgetting things sooner than it hits its context maximum. But even so, 10 pages > 1 page.

I don't know what AI creative writing software you used, but it sucked compared to a typical LLM.
reddit AI Jobs 1732669793.0 ♥ -3
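The page arithmetic in the comment above can be sanity-checked in a few lines. The story length, words-per-page figure, and context size are taken from the comment; the instruction-overhead figure is a hypothetical value chosen only to reproduce the "~90 pages" claim, not something the comment states.

```python
# Sanity check of the comment's page arithmetic.
STORY_WORDS = 2585
WORDS_PER_PAGE = 250      # typical novel page, per the comment
CONTEXT_WORDS = 24000     # stated context buffer of the persona

# "10 pages and some change":
story_pages = STORY_WORDS / WORDS_PER_PAGE
print(f"story length: {story_pages:.2f} pages")  # 10.34

# Room left after the persona instructions, assuming a hypothetical
# ~1500-word overhead (not stated in the comment):
instruction_words = 1500
remaining_pages = (CONTEXT_WORDS - instruction_words) / WORDS_PER_PAGE
print(f"remaining room: {remaining_pages:.0f} pages")  # 90
```

The numbers line up: 2585 words at 250 words per page is just over 10 pages, and a 24000-word context minus a modest instruction block leaves roughly 90 pages of headroom.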
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_lz5p3iw", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_lz65g5n", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_lz7mqxd", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_lz5gvkh", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_lz5hit5", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
```
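The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and tallied — the variable names are illustrative; only the JSON records themselves come from the output above:

```python
import json
from collections import Counter

# The five coded records from the raw LLM response above.
raw = """[
  {"id":"rdc_lz5p3iw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_lz65g5n","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_lz7mqxd","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_lz5gvkh","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_lz5hit5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]"""

records = json.loads(raw)
emotion_counts = Counter(r["emotion"] for r in records)
print(dict(emotion_counts))
# indifference and mixed appear twice each; outrage once
```

Note that the table shows the code for one record ("mixed"), while the raw response carries the codes for all five comments in the batch; parsing the JSON directly recovers the full distribution.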