Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Back in 2010, I was working at a Sprint call center and they shut the entire cal…
ytc_UgzGKsQ_j…
The problem isn't that ai can generate something 👀, the problem is the moment yo…
ytc_UgybfBCSm…
😂😂 “SuperARC: An Agnostic Test for Narrow, General, and Super Intelligence Based…
ytc_Ugzs40URj…
Coding with AI is to traditional coding as science-based lifting is to tradition…
ytc_Ugx06hmWN…
That's not really how it usually works when jobs get automated, the people repla…
ytc_Ugxe2wEV-…
Start with labelling all the AI shit on Youtube originating (or at least claimin…
rdc_o19ltv5
If AI reaches a higher level in mathematics, it doesn't mean it will understand …
ytc_UgyhWnW2B…
I’m curious what model of ChatGPT was used (3.5 or 4). Also, with the plugin abi…
ytc_UgyEXRlX4…
Comment
Fascinating project!
Would you do a similar exercise if you get access to the version of GPT-4 with a 32,000 token context window, which I think some people already have access to? You would need to use the API or the OpenAI Playground, but that offers advantages like control over temperature and other parameters. It would be interesting to see how you’d adjust your process to take advantage of the huge context window.
Maybe you could even reach out to OpenAI describing your project and ask for access to the 32K token version!
***
Thinking of the criticism some others on here have made of the quality, we have to remember:
You and ChatGPT produced this in ten days. A professional author might iterate on a book for an entire year or more.
Also, most books produced by humans aren’t very good either. Books that are published and widely distributed represent a tiny cherry-picked fraction of all books written by humans.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Responsibility |
| Posted | 1679687319 (2023-03-24 UTC) |
| Score | ♥ 3 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jdix07x", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jdizhfe", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jdktdfc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_jdlkrrs", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jdj7cax", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
```
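The raw response above is a flat JSON array keyed by comment ID, one object per coded comment. A minimal sketch of how such a response could be parsed to look up one comment's coded dimensions (the `lookup` function and variable names here are illustrative, not part of the actual tool; the sample data is copied from the response above):

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw = ('[{"id":"rdc_jdix07x","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"},'
       '{"id":"rdc_jdj7cax","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"approval"}]')

def lookup(raw_response: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            return row
    return None

print(lookup(raw, "rdc_jdj7cax")["emotion"])  # approval
```

Because the model returns the batch as one array, a per-comment "Coding Result" table like the one above is just a single-ID lookup followed by rendering each key/value pair as a row.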