Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect it (a minimal lookup sketch follows the list).
- ytc_UgyBY6kzi…: Why hire a shitty engineer, when you can have AI slop? Seriously, be a good engi…
- ytc_Ugwj3hkac…: I could see using AI maybe to scan shit even to suggest matches but to think tha…
- ytr_UgxrM6fpG…: True. He even had "AI" in his bio. No wrong deeds were done, people are just pro…
- ytc_Ugy2C-Guo…: I’m not worried about AI or robots I can’t even get my spellcheck to spell words…
- ytc_UgxxqWG1w…: Point should be. Don't need self driving cars. If you phucks don't wanna drive u…
- ytc_Ugz-F2tZY…: Sorry bud but it's not ai it's a chat bot real ai will never just let you know i…
- rdc_mzku0xr: Ai can evaluate psychological conditions which could be used to find police unfi…
- rdc_c33u5gh: I pretty much agree with you, but if people are able to decide for themselves th…
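For scripted access, a minimal lookup sketch, assuming the coded records are exported as a JSON array shaped like the "Raw LLM Response" shown further down (the export filename here is an assumption, not part of the tool):

```python
import json

# Assumed export path; the tool's actual storage location is not documented here.
EXPORT_PATH = "coded_comments.json"

def lookup(comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if it is absent."""
    with open(EXPORT_PATH, encoding="utf-8") as f:
        records = json.load(f)  # a list of {"id": ..., "responsibility": ..., ...}
    index = {rec["id"]: rec for rec in records}
    return index.get(comment_id)

# Example with an ID taken from the raw response below:
# lookup("ytc_UgyD5tP27ZkcIRq1v7l4AaABAg")
```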
Comment
Interestingly, from a CS point of view, the LLM being able to spit out paragraphs verbatim tells me that, on the technical side, they overfitted their model: it just memorised the text and never formed connections to the articles' abstract ideas during training.
OpenAI's "prompt hacking" defense is a bit of PR spin. While it's true the NYT had to use specific prompts to "induce" the recall, the fact that the data is stored in a way that can be recalled verbatim is, by definition, a failure of the model to fully abstract the information.
Which means the model isn't piecing together bits and pieces based on the prompt; it has simply learned that this exact prompt should output this exact paragraph for maximum reward.
That also tells me they didn't just go to the NYT website and parse each article once; they went through many, many different sources (web crawlers, archive sites, articles shared on social media) and did not filter out duplicates. So the model was exposed to the same article over and over and started memorising.
It also means that OpenAI did not have access to much training data but threw a lot of money at hardware to make the LLM huge. So instead of breaking down the input and memorising abstract ideas, it had too many layers and started memorising outright (like a student with a photographic memory who doesn't study and understand the concept of multiplication but just memorises pages and pages of multiplication tables).
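The verbatim-recall claim is easy to make measurable. A minimal sketch, assuming you already have a source paragraph and a model continuation as plain strings (no API call is made here): the longest run of consecutive words shared between the two texts is a crude but standard memorisation signal; paraphrase gives short runs, verbatim recall gives runs of dozens of words.

```python
def longest_shared_run(source: str, output: str) -> int:
    """Length, in words, of the longest run of consecutive words common to
    both texts (longest common substring over word tokens)."""
    a, b = source.split(), output.split()
    prev = [0] * (len(b) + 1)  # DP row for the previous word of `a`
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the matching run
                best = max(best, cur[j])
        prev = cur
    return best

# Hypothetical usage: `article` holds the NYT paragraph, `completion` holds
# what the model produced when prompted with the paragraph's opening sentence.
# print(longest_shared_run(article, completion))
```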
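The duplicate-exposure point corresponds to a standard preprocessing step. A minimal sketch of the simplest version, exact-duplicate filtering by normalised hash; production pipelines usually add near-duplicate detection (e.g. MinHash) on top, which is beyond this sketch:

```python
import hashlib
import re

def _normalise(text: str) -> str:
    # Lowercase and collapse whitespace so trivially re-encoded copies of an
    # article (crawl vs. archive vs. social-media share) hash identically.
    return re.sub(r"\s+", " ", text.lower()).strip()

def dedupe(docs: list[str]) -> list[str]:
    """Keep only the first copy of each exact (normalised) duplicate."""
    seen: set[str] = set()
    unique: list[str] = []
    for doc in docs:
        digest = hashlib.sha256(_normalise(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
```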
youtube · AI Responsibility · 2026-04-11T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
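The four coding dimensions form a small closed vocabulary. A sketch of that schema as Python enums, using only the values visible on this page; the full codebook may define more values than appear here:

```python
from enum import Enum

class Responsibility(str, Enum):
    NONE = "none"
    COMPANY = "company"
    DEVELOPER = "developer"
    GOVERNMENT = "government"
    AI_ITSELF = "ai_itself"

class Reasoning(str, Enum):
    DEONTOLOGICAL = "deontological"
    CONSEQUENTIALIST = "consequentialist"
    VIRTUE = "virtue"
    MIXED = "mixed"

class Policy(str, Enum):
    NONE = "none"
    REGULATE = "regulate"
    LIABILITY = "liability"

class Emotion(str, Enum):
    OUTRAGE = "outrage"
    APPROVAL = "approval"
    INDIFFERENCE = "indifference"
    FEAR = "fear"
    RESIGNATION = "resignation"
```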
Raw LLM Response
[
{"id":"ytc_Ugw679e2QgZFrF-dWYd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxfndZJWYaFOu1rs2N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxfuZBgppz2VIiTc4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyD5tP27ZkcIRq1v7l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwUGxckbTE7FQlhBr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxU6H6Z9-DofUGXpkR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwh13APZyjDXASgMf54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZD8m1wTIqd_qIsF54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyjyJR7HPuHuIodk5x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzwN2Jy13y1qDwIkJ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
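Because the model returns a bare JSON array, a validation pass is worth running before records reach the coding table. A minimal sketch, assuming the enum classes from the sketch above live in a hypothetical `coding_schema` module:

```python
import json
from coding_schema import Responsibility, Reasoning, Policy, Emotion  # hypothetical module

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, dropping records that are missing keys
    or that use values outside the observed vocabulary."""
    valid = []
    for rec in json.loads(raw):
        if REQUIRED_KEYS - rec.keys():
            continue  # incomplete record
        try:
            Responsibility(rec["responsibility"])
            Reasoning(rec["reasoning"])
            Policy(rec["policy"])
            Emotion(rec["emotion"])
        except ValueError:
            continue  # out-of-vocabulary value
        valid.append(rec)
    return valid
```

All ten records above stay within the observed vocabulary, so this pass keeps the full batch; its value shows up when the model drifts from the requested format.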