Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
…yeah using LLMs to write *book reviews* is hilariously bad. Hope that person (fired) had everything they published lately retracted. And check everyone else for AI use too.
Regardless of whatever your thoughts even are on LLMs the entire point of a book review is *that you read the book*. And can *usefully* and critically comment on that in a critical and somewhat novel fashion.
Having an LLM either just hallucinate an article out of thin air, or force feed that entire book into an LLM (GLHF with memory and context windows), are both going to produce utter garbage, and is massively disrespectful (and potentially harmful) to authors, period.
Like whatever your thoughts even are on LLMs this is one of the last things you should be using them for period.
LLMs do fuzzy stochastic *interpolation* (and extrapolation) on shit in their training data. Let me repeat that, they do *fuzzy stochastic interpolation/extrapolation from shit in their training data*.
Is a NEW book in that training data? No. No it f—-ing is not. Nor mind you would any even *hyper intelligent LLM* be able to particularly well recall ANY book in question in good (and accurate) detail. And nor mind you even fully remember all details of a book you told it to read / summarize due to context window limits, and, on most LLM models, *memory compression*. Etc
If you want to do that anyways and have an LLM (badly) summarize a book for you, just feed one through it yourself.
People are not however PAYING NYT subs (and/or browsing the review section for interesting things to read), for this.
| Field | Value |
|---|---|
| Source | reddit |
| Thread | AI Jobs |
| Posted (Unix timestamp) | 1774981218 |
| Likes | ♥ 8 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_odjq3vi","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_odx27g5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_odhy562","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_odjf8hh","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_odhq5wn","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
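The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of parsing and validating such a response, assuming Python; the `ALLOWED` value sets below are illustrative assumptions inferred from the records shown here, not an exhaustive codebook:

```python
import json

# Allowed values per coding dimension (illustrative assumption:
# inferred from the sample records above, not the full codebook).
ALLOWED = {
    "responsibility": {"user", "none", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"liability", "none"},
    "emotion": {"outrage", "indifference", "approval"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        if "id" not in rec:
            raise ValueError("record is missing a comment id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

raw = ('[{"id":"rdc_odjq3vi","responsibility":"user",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"outrage"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # outrage
```

Validating against a fixed value set at parse time catches the common failure mode of the model inventing an off-codebook label, so bad records fail loudly instead of silently entering the coded dataset.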