Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Conclusion: the more money is at stake, the more incompetence there is, priv…
ytc_UgzKXVFrv…
🔥 Wow, this is next level! 😲 I actually tried creating something similar with AI…
ytc_Ugx8T4zPF…
Well its kind of strange to leave your comment section enabled, post public vide…
ytc_UgzPSWBw-…
I would never say something like this about a real piece of art, but shad's ai g…
ytc_UgwpwbJXl…
This is the most concise, clearest presentation of AI and its likely future I ha…
ytc_UgyHg-R9M…
as a real artist looking at these posts by this ai "artist" genuinely makes me m…
ytc_Ugx7SzCsc…
If the chats were not saved on openAI's end, they will not likely win a lawsuit.…
ytc_Ugw5gttCg…
I work for a large AI company. Do not trust your life to technology. I think it …
ytc_UgyLvFxH2…
Comment
What happens when the model is presented with far more data than it is able to memorize? How does it get better at predicting the next token? Do you know?
The world's leading experts know. They would tell you that LLMs compress their training data. It is through this process that they learn deeper structures and higher order concepts. What does it mean to understand something, if not to have an accurate, compressed, internal model?
Why is it that LLMs are able to generalize outside of their training distribution to some degree? If they were stochastic parrots, they wouldn't be able to do that at all.
How is it that LLMs are able to appropriately digest, analyze, and synthesize many pages of brand-new text they have never been trained on? How can they explain a provably novel joke? How can they get such high scores on reasoning tests with held-out test sets? Why does the level of capability in one area predict the level of capability in another to some degree, such that there are generally stupid and generally smart models?
If you want to know how we directly empirically know that LLMs contain abstract models of the world, see "Othello-GPT" and "Language Models Represent Space and Time."
youtube
AI Moral Status
2025-10-30T22:3…
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgxrOmrY6YDE805fC7l4AaABAg.AOv6Gc5SKRnARWm76tDG-E","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugwq1MXS9SijWsH12dR4AaABAg.AOv6EliHjBGAOvKpYZYcw8","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgymzLCIyUfJaVbCJ1h4AaABAg.AOv5v7OY586AOv8pfUR35x","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugxji0AkAMbhhb3hnvB4AaABAg.AOv5cYuQRGiAOv6Btglisb","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytr_Ugxji0AkAMbhhb3hnvB4AaABAg.AOv5cYuQRGiAOvK3Ysdtt8","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugxji0AkAMbhhb3hnvB4AaABAg.AOv5cYuQRGiAOvO20G7Qnb","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytr_Ugxji0AkAMbhhb3hnvB4AaABAg.AOv5cYuQRGiAOwEazsRprc","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwrzOutMXtnw3_hHAx4AaABAg.AOv47v_cMnvAOv6SypTxJq","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwrzOutMXtnw3_hHAx4AaABAg.AOv47v_cMnvAOwon9QDg-G","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgwrzOutMXtnw3_hHAx4AaABAg.AOv47v_cMnvAOxIElKFGxk","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
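The raw response above is a JSON array of per-comment code records, one object per comment ID. A minimal sketch of how such output might be parsed and checked before it feeds a Coding Result table; the field names come from the response itself, but the allowed value sets are assumptions inferred only from the values visible here, not an authoritative schema:

```python
import json

# Allowed values per coding dimension -- inferred from the sample response
# above; the real codebook may include values not seen here.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed code records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record should carry a comment ID with the ytr_ prefix.
        if not rec.get("id", "").startswith("ytr_"):
            continue
        # Keep the record only if every dimension holds a recognized value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]')
print(len(validate_codes(raw)))  # 1
```

Records that fail validation could be routed back for re-coding rather than silently coerced to "unclear", which would inflate that category.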