Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interestingly, from a comp-sci point of view, the LLM being able to spit out paragraphs verbatim tells me that, on the technical side, they overfitted their model: it just memorised the articles and never formed connections to their abstract ideas during training. OpenAI's "prompt hacking" defense is a bit of PR spin. While it's true the NYT had to use specific prompts to "induce" the recall, the fact that the data is stored in a way that can be recalled verbatim is, by definition, a failure of the model to fully abstract the information. That means the model isn't "piecing bits and pieces together" based on the prompt, but simply figured out that "this prompt" should output this exact paragraph for maximum reward.

That also tells me they didn't just go to the NYT website and parse each article once; they went through many, many, MANY different sources, like web crawlers, archive sites, and articles shared on social media, and did not filter out duplicates. So the model was repeatedly exposed to the same article over and over and started memorising. It also means that OpenAI did not have access to much training data, but threw a lot of money at hardware to make the LLM huge. So instead of breaking down the input and memorising abstract ideas, it had too many layers and started memorising instead (like a student with photographic memory who won't study and understand the concept of "multiplication" but just memorises pages and pages of multiplication tables).
youtube AI Responsibility 2026-04-11T22:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugw679e2QgZFrF-dWYd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxfndZJWYaFOu1rs2N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxfuZBgppz2VIiTc4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyD5tP27ZkcIRq1v7l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUGxckbTE7FQlhBr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxU6H6Z9-DofUGXpkR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwh13APZyjDXASgMf54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZD8m1wTIqd_qIsF54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyjyJR7HPuHuIodk5x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzwN2Jy13y1qDwIkJ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
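The raw response above is a JSON array of per-comment codes, one object per comment ID, with four coded dimensions. A minimal sketch of how such a batch response might be parsed and validated (the allowed value sets below are inferred from the codes visible in this output, not from an exhaustive codebook, and `parse_codes` is a hypothetical helper, not part of the actual pipeline):

```python
import json

# Allowed values per dimension, inferred from the codes observed in this
# output; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "approval", "indifference", "fear", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw batch response, keeping only well-formed entries.

    An entry is kept if it is a dict with an "id" field and every coded
    dimension holds one of the allowed values.
    """
    valid = []
    for entry in json.loads(raw):
        if not isinstance(entry, dict) or "id" not in entry:
            continue
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(entry)
    return valid

# Usage: one valid entry and one with an out-of-codebook value.
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "company",
     "reasoning": "consequentialist", "policy": "none",
     "emotion": "indifference"},
    {"id": "ytc_example2", "responsibility": "martians",
     "reasoning": "mixed", "policy": "none", "emotion": "fear"},
])
print([e["id"] for e in parse_codes(raw)])  # ['ytc_example1']
```

Dropping malformed entries rather than raising keeps a single bad code from discarding the whole batch; which behaviour is right depends on how the downstream coding table is filled in.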