Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- rdc_ctijuug: ">He then goes on to generalize this to be the case for all technology, even t…"
- ytc_Ugw9Htj1J…: "We can even watch the Amos and Andy shows from the late 50's/ This AI is hilario…"
- ytc_UgyNaLA7s…: "This is dumb, these are AI language models sharing pieces of reflections of how …"
- ytc_Ugxppu5WQ…: "1. Google maps been around 10 years showing traffic...it just uses other cell ph…"
- ytc_UgzWGv3vI…: "You are not just conversing with ChatGPT, you are also training it. While you us…"
- ytc_UgxUil4uE…: "My ai program ask me to call it Alex . It said it would be easy to think of it a…"
- ytr_UgwP6jDzP…: "@depressedengineer5766 That's not what they're saying. They're saying that using…"
- ytc_Ugz2Ummm9…: "I’m a rail engineer and I’m telling you now we are fucked when there’s machines …"
Comment
AI systems based on large language models are fundamentally probabilistic pattern-matching machines. At a technical level, they estimate the conditional probability of the next token given prior context. While this produces impressive fluency, it does not amount to understanding, reasoning, or agency in the human sense. Much of the public narrative suggesting otherwise is driven by marketing incentives rather than empirical evidence.
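The "conditional probability of the next token" claim can be made concrete: a language model's forward pass produces one raw score (logit) per vocabulary item, and a softmax turns those scores into a probability distribution over the next token. A minimal sketch, with a toy vocabulary and invented logits (not taken from any real model):

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits for some context, e.g. "The cat sat on the"
vocab = ["mat", "dog", "moon", "quantum"]
logits = [4.0, 1.5, 0.5, -2.0]  # invented scores for illustration only

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"P({token!r} | context) = {p:.3f}")
```

Sampling or taking the argmax of this distribution, then appending the chosen token and repeating, is the whole generation loop; fluency emerges from doing this well at scale, which is the sense in which the comment calls these systems pattern-matching machines.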
Despite rapid progress, current models remain constrained by their training data and architectural limits. They exhibit well-documented failure modes such as hallucinations, brittleness under distributional shift, and limited causal reasoning. Scaling model size and data has delivered diminishing returns on many tasks, suggesting that performance gains are becoming increasingly costly and incremental rather than transformative.
From an economic standpoint, many companies deploying large language models have yet to demonstrate consistent return on investment. High inference costs, reliability issues, and the need for extensive human oversight often erode the expected productivity gains. Taken together, these factors indicate that large language models may be approaching a practical equilibrium rather than an imminent leap toward general intelligence.
As a result, existential “AI doom” narratives appear overstated. What we are observing is a powerful but narrow technology navigating its real-world limits, amplified by hype cycles that favor spectacle over sober technical assessment.
References:
Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623.
Brown, T.B. et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, pp. 1877–1901.
Hendrycks, D. et al. (2021). Measuring massive multitask language understanding. International Conference on Learning Representations (ICLR).
Ji, Z. et al. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 56(1), pp. 1–38.
Sevilla, J. et al. (2022). Compute trends across three eras of machine learning. arXiv:2202.05924.
Vaswani, A. et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, pp. 5998–6008.
youtube · AI Moral Status · 2025-12-19T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id": "ytc_UgzkAYXpGhUzq3nU-3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwsZSz76FPQ_S_w_GB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyufBdOXsWuBJhnDzF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwPcAV06bpA-sclHa14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwAgGXvEaaT4YqI7mF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzarmPOYiSF7P03C-F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyuOwaPaFub47DO1Wt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyzeGhWK-Gl1ZWBD-N4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx09N09SovOmKxOakB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzK7UpUYB9eVAJtn594AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
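For downstream analysis, a batch response like this can be parsed and tallied per coding dimension. A minimal sketch, assuming the response has been repaired into a valid JSON array (the two shortened records and their IDs here are invented placeholders, not the real ten-record output above):

```python
import json
from collections import Counter

# Shortened, invented sample standing in for the real batch response
raw = '''[
  {"id": "ytc_a", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_b", "responsibility": "company", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Count how often each value appears for each coding dimension
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tally = {dim: Counter(r[dim] for r in records) for dim in dimensions}

for dim in dimensions:
    print(dim, dict(tally[dim]))
```

Note that `json.loads` would reject the original response as emitted, because its final record closes with `]}` instead of `}]`; a production pipeline should validate each batch and fall back to marking the affected comments "unclear", which is consistent with the all-unclear coding result shown above.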