Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Sooo we’re not gonna talk about how dismissive he is towards Sofia as opposed to…
ytc_UgyFEKBT_…
This is not about artificial intelligence! It is about the vageries of the human…
ytc_Ugz2Wyk8H…
@MrMichiel1983if you think existential risk from a.i. is 200 years away you're …
ytr_UgwHqeUcF…
I would even argue that the kinds of people that continue to use AI art (in spit…
ytc_UgyH8jOqs…
Except that, to make a good picture with AI, you steal the style of other artist…
ytr_UgxBQjTdh…
The birds, the animals, and even the fish in the sea dont need Ai smartphone to …
ytc_UgxUuPgT9…
Nah, don't worry about this. ChatGPT doesn't even know what your name is in a co…
ytc_UgyzrtMc4…
I think making or building weapons is itself a dangerous thing and including AI…
ytc_UgyBgsq51…
Comment
It can't recall the whole times article from memory. AI models don't have that level of memory recall. OpenAI's ChatGPT is ~1T parameters (estimated), across ALL PARTS OF THE INTERNET. That means an individual times article makes up a very small (if any) part of the whole parameter space. It's like compressing the meaning of the whole internet to a few terabytes and trying to recall a specific article. It will try to write a times-style article, but will eventually diverge the same way a human cannot remember an entire times article without training to do that specific task. Even still, ChatGPT has built-in copyright safeguards, and to try to bypass that is attempted jailbreaking, which goes against OpenAI's ToS. Generative AI is just that: it generates content. It learns from the internet to understand how human language works, then is trained to create coherent messages based on a prompt given to it. This is akin to a human baby learning to talk, but also remembering certain phrases from their environment verbatim.
youtube
AI Responsibility
2026-04-11T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
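Each coded comment is scored on four closed-set dimensions. The sketch below shows a minimal validity check for one coding row; the allowed values are inferred from the examples visible on this page, so the actual codebook may define additional categories.

```python
# Allowed values per dimension, inferred from the coded examples on this page;
# the real codebook may contain more categories than appear here.
SCHEMA = {
    "responsibility": {"none", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "approval"},
}

def is_valid(code: dict) -> bool:
    """Return True if every dimension in a coding row uses an allowed value."""
    return all(code.get(dim) in allowed for dim, allowed in SCHEMA.items())

print(is_valid({"responsibility": "none", "reasoning": "unclear",
                "policy": "none", "emotion": "indifference"}))  # True
```

A check like this catches off-schema values (model hallucinations or typos) before rows are written into the results table.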
Raw LLM Response
```json
[
  {"id":"ytc_UgwAqpAqIRbdOAfNvpJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzukaGNMhrCBSOnZH14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy4nf5-kXagdaZTYXB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxuB2kOqqKk2ftCwAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzvS-BxchBX1RdHqrF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAMQKkOdp-RdCxUWd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgziYRNo3eYs-i2W-Pp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwD_MhGljw1Q34faql4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7wMNOowhCEfDNmPl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyxdxlpubZhWhdys_V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
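The raw response is plain JSON, so looking up a comment's codes by ID (as the page's search box does) is a small parsing step. A minimal sketch; the function name is illustrative, not part of the tool, and the two rows are copied from the response above:

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw_response = '''[
  {"id":"ytc_UgwAqpAqIRbdOAfNvpJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyxdxlpubZhWhdys_V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index each coding row by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["ytc_UgyxdxlpubZhWhdys_V4AaABAg"]["policy"])  # regulate
```

Indexing by ID also makes it easy to join the model's codes back onto the original comment text for spot-checking.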