Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
…yeah using LLMs to write *book reviews* is hilariously bad. Hope that person (fired) had everything they published lately retracted. And check everyone else for AI use too. Regardless of whatever your thoughts even are on LLMs the entire point of a book review is *that you read the book*. And can *usefully* and critically comment on that in a critical and somewhat novel fashion. Having an LLM either just hallucinate an article out of thin air, or force feed that entire book into an LLM (GLHF with memory and context windows), are both going to produce utter garbage, and is massively disrespectful (and potentially harmful) to authors, period. Like whatever your thoughts even are on LLMs this is one of the last things you should be using them for period. LLMs do fuzzy stochastic *interpolation* (and extrapolation) on shit in their training data. Let me repeat that, they do *fuzzy stochastic interpolation/extrapolation from shit in their training data*.  Is a NEW book in that training data? No. No it f—-ing is not. Nor mind you would any even *hyper intelligent LLM* be able to particularly well recall ANY book in question in good (and accurate) detail. And nor mind you even fully remember all details of a book you told it to read / summarize due to context window limits, and, on most LLM models, *memory compression*. Etc If you want to do that anyways and have an LLM (badly) summarize a book for you, just feed one through it yourself. People are not however PAYING NYT subs (and/or browsing the review section for interesting things to read), for this.
reddit · AI Jobs · 1774981218.0 · ♥ 8
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_odjq3vi","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"rdc_odx27g5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_odhy562","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_odjf8hh","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_odhq5wn","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
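To check the coding table above against the raw model output, you can parse the JSON array and look up the record by comment id. A minimal sketch, assuming the response is always a well-formed JSON array of flat objects keyed by `id` (the variable names and the shortened `raw` string here are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of per-comment codes.
raw = (
    '[{"id":"rdc_odjq3vi","responsibility":"user","reasoning":"deontological",'
    '"policy":"liability","emotion":"outrage"},'
    '{"id":"rdc_odx27g5","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"}]'
)

# Index the code records by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Fetch the record for the comment displayed on this page.
record = codes["rdc_odjq3vi"]
print(record["emotion"])  # outrage
```

The printed dimensions should match the coding table row for row; a mismatch would indicate the table and the raw response were rendered from different records.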