Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- We're living in the AI SCI-FI world for real now where men are made in the image… (ytc_UgyvRkVwr…)
- Why cant governments establish laws that dictate a certain percentage of any occ… (ytc_UgwxG132v…)
- Worshipping AI is due to teach us more than we know we dont know, its definately… (ytc_Ugz8VvkVv…)
- Hate how this is constantly framed as a "both sides" thing. The answer is no, it… (ytc_UgwgzjVc9…)
- I think Artificial Intelligence doesn't mean its Intelligence is greater than re… (ytc_Ugw8ZWTb1…)
- If every video on oneyplays was gonna be deleted with the exception of like 4 vi… (ytc_UgxvCg6or…)
- The only time I find myself using ai art is to create the most bizarre abominati… (ytc_UgxahNW0P…)
- First off, I am a programmer, and I know a thing or two about AI. This video is … (ytc_Ughn_M9pN…)
Comment
I really enjoyed this post, great work.
I've been playing around with something kind of like this, and I very much agree with your insights and reflections. I also found it super helpful not to think linearly, and I jumped right to that. This is also the way I happen to write myself: almost picking a paragraph to fill in at random, and then slowly filling in the whole document. It really helps to let anything being written not just influence later-created content, but to let the model work backwards in time and go back and edit old material that needs changing because of the new scene. As you say, this is how a human writes, or many of them anyway. You're always bouncing around, editing all different parts, because everything affects everything.
One thing I've been playing with is extending this even further, using the prompt-chaining lookup stuff. The idea is to think about how a human writer will stop writing and flip back through what they've already written to look up details. That writer is refreshing tokens in their mind's context window, loading in the most relevant details, previous conversations, whatever. GPT does this now in some business-boring use cases, like using a chatbot to ask questions about long technical documents. LangChain or a plugin lets GPT say, "Hey, I realize I need more information about X, and I've been told I have a way to get that information. I should stop writing and spawn a new process to find relevant sections from a larger document, or do a math calculation, or whatever, and then return to this spot with the new information fresh in my memory." Then GPT can find some detail in the technical PDF documents, or do a Wolfram Alpha calculation, or whatever. Humans do this when writing (or reading, for that matter). GPT should be able to do this too and re-read parts of the novel to find anything it needs. So far I haven't had that much success with knowing when to do this. It's easy to know you need to stop and do math,
reddit
AI Responsibility
1679708155.0 (2023-03-25 UTC)
♥ 3
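The stop-and-look-things-up loop the comment describes (the pattern LangChain popularized as tool use) can be sketched as a toy loop. Everything below is hypothetical: `generate` is a stand-in for an LLM call that either writes or asks for a lookup, and `lookup` is a stand-in for retrieval over the novel written so far.

```python
# Toy sketch of the "stop writing and look something up" loop described
# in the comment above. All names here are hypothetical; a real system
# would call an LLM where generate() is, and a retriever where lookup() is.

def generate(draft, context):
    """Stand-in for the model: request a lookup when a detail is missing,
    otherwise emit the draft with the detail filled in."""
    if "[DRAGON_NAME]" in draft and "dragon name" not in context:
        return {"action": "lookup", "query": "dragon name"}
    return {"action": "write",
            "text": draft.replace("[DRAGON_NAME]", context.get("dragon name", "?"))}

def lookup(query, novel_so_far):
    """Stand-in for retrieval: scan earlier chapters for the detail."""
    for line in novel_so_far:
        if "dragon" in line:
            return line.split("named ")[-1].rstrip(".")
    return None

novel_so_far = ["Chapter 1: The knight met a dragon named Ember."]
context = {}
draft = "The knight shouted [DRAGON_NAME]!"

# Pause, retrieve, resume with the detail "fresh in memory".
result = generate(draft, context)
while result["action"] == "lookup":
    context[result["query"]] = lookup(result["query"], novel_so_far)
    result = generate(draft, context)

print(result["text"])  # "The knight shouted Ember!"
```

The hard part the commenter flags — knowing *when* to pause — is the `if` inside `generate`; real systems offload that decision to the model itself via a tool-calling prompt rather than a hand-written condition.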
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_jdkaol8", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jdkjktx", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jdlf7aq", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jdlk2ap", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jdm2d68", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]