Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Fascinating project! Would you do a similar exercise if you get access to the version of GPT-4 with a 32,000 token context window, which I think some people already have access to? You would need to use the API or the OpenAI Playground, but that offers advantages like control over temperature and other parameters. It would be interesting to see how you’d adjust your process to take advantage of the huge context window. Maybe you could even reach out to OpenAI describing your project and ask for access to the 32K token version! ———- Thinking of the criticism some others on here have made of the quality, we have to remember: You and ChatGPT produced this in ten days. A professional author might iterate on a book for an entire year or more. Also, most books produced by humans aren’t very good either. Books that are published and widely distributed represent a tiny cherry-picked fraction of all books written by humans.
reddit AI Responsibility 1679687319.0 ♥ 3
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jdix07x", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jdizhfe", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jdktdfc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_jdlkrrs", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jdj7cax", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
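Since the model returns one JSON array covering several comments per batch, extracting the coding for a single comment means parsing the array and matching on its id. A minimal sketch of that lookup is below; the function name `coding_for` and the id handling are illustrative assumptions, not part of the actual pipeline.

```python
import json

# Example raw LLM response: a JSON array with one coding object per comment
# (trimmed to two entries from the batch shown above).
raw = (
    '[{"id":"rdc_jdktdfc","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"},'
    '{"id":"rdc_jdlkrrs","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"mixed"}]'
)

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for the given comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "rdc_jdktdfc")
print(coding["emotion"])  # approval
```

A guard like the `None` return matters in practice: LLM batch output occasionally drops or duplicates ids, so a missing entry should be detected rather than raise a KeyError.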