Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean, yeah, to some extent it’s basically just custom ~/.claude/skills and custom tools, but on the massive-amounts-of-data piece, the one thing we know for a fact they did not do is increase context size, or solve the context window problem generally. And that is a large constraint on that massive-data-and-inputs piece. I’m sure they used every trick in the book, and made some new ones, but that book is basically just “different names for RAG” plus the various KV cache tricks of the last 12 months. Off topic, but DeepSeek just co-authored a really interesting KV cache paper last week, the first one that’s not just a party trick, in that it’s not external (and this is separate from the engram paper, obviously).
Source: reddit · Dataset: AI Moral Status · Timestamp: 1772414553.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_o80sl3i","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_o80wmxj","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},{"id":"rdc_o856w4p","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_o85k4c4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_o8ck1ac","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]