Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI systems based on large language models are fundamentally probabilistic pattern-matching machines. At a technical level, they estimate the conditional probability of the next token given the prior context. While this produces impressive fluency, it does not amount to understanding, reasoning, or agency in the human sense. Much of the public narrative suggesting otherwise is driven by marketing incentives rather than empirical evidence.

Despite rapid progress, current models remain constrained by their training data and architectural limits. They exhibit well-documented failure modes such as hallucinations, brittleness under distributional shift, and limited causal reasoning. Scaling model size and data has delivered diminishing returns on many tasks, suggesting that performance gains are becoming increasingly costly and incremental rather than transformative.

From an economic standpoint, many companies deploying large language models have yet to demonstrate consistent return on investment. High inference costs, reliability issues, and the need for extensive human oversight often erode the expected productivity gains.

Taken together, these factors indicate that large language models may be approaching a practical equilibrium rather than an imminent leap toward general intelligence. As a result, existential "AI doom" narratives appear overstated. What we are observing is a powerful but narrow technology navigating its real-world limits, amplified by hype cycles that favor spectacle over sober technical assessment.

References:

Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623.

Brown, T.B. et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, pp. 1877–1901.

Hendrycks, D. et al. (2021). Measuring massive multitask language understanding. International Conference on Learning Representations (ICLR).

Ji, Z. et al. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), pp. 1–38.

Sevilla, J., Heim, L., Ho, A., Besiroglu, T., Hobbhahn, M. and Villalobos, P. (2022). Compute trends across three eras of machine learning. arXiv:2202.05924.

Vaswani, A. et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, pp. 5998–6008.
Source: youtube | AI Moral Status | 2025-12-19T00:2…
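The comment's central technical claim, that a language model only estimates p(next token | context), can be made concrete. The sketch below is illustrative only and not part of the coded data; it assumes the Hugging Face transformers and torch packages are installed and uses the small gpt2 checkpoint as an arbitrary example model.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "Large language models estimate the probability of the next"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Conditional distribution over the whole vocabulary for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The five most likely continuations and their probabilities.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>12}  p = {prob.item():.3f}")

Everything the model "says" is sampled from distributions like this one, which is the sense in which the comment calls it pattern matching rather than reasoning.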
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgzkAYXpGhUzq3nU-3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgwsZSz76FPQ_S_w_GB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_UgyufBdOXsWuBJhnDzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_UgwPcAV06bpA-sclHa14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},{"id":"ytc_UgwAgGXvEaaT4YqI7mF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgzarmPOYiSF7P03C-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgyuOwaPaFub47DO1Wt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgyzeGhWK-Gl1ZWBD-N4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_Ugx09N09SovOmKxOakB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},{"id":"ytc_UgzK7UpUYB9eVAJtn594AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"]}