Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What happens when the model is presented with far more data than it is able to memorize? How does it get better at predicting the next token? Do you know? The world's leading experts know. They would tell you that LLMs compress their training data. It is through this process that they learn deeper structures and higher order concepts. What does it mean to understand something, if not to have an accurate, compressed, internal model? Why is it that LLMs are able to generalize outside of their training distribution to some degree? If they were stochastic parrots, they wouldn't be able to do that at all. How is it that LLMs are able to appropriately digest, analyze, and synthesize many pages of brand-new text they have never been trained on? How can they explain a provably novel joke? How can they get such high scores on reasoning tests with held-out test sets? Why does the level of capability in one area predict the level of capability in another to some degree, such that there are generally stupid and generally smart models? If you want to know how we directly empirically know that LLMs contain abstract models of the world, see "Othello-GPT" and "Language Models Represent Space and Time."
youtube · AI Moral Status · 2025-10-30T22:3… · ♥ 7
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytr_UgxrOmrY6YDE805fC7l4AaABAg.AOv6Gc5SKRnARWm76tDG-E","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugwq1MXS9SijWsH12dR4AaABAg.AOv6EliHjBGAOvKpYZYcw8","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgymzLCIyUfJaVbCJ1h4AaABAg.AOv5v7OY586AOv8pfUR35x","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytr_Ugxji0AkAMbhhb3hnvB4AaABAg.AOv5cYuQRGiAOv6Btglisb","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytr_Ugxji0AkAMbhhb3hnvB4AaABAg.AOv5cYuQRGiAOvK3Ysdtt8","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugxji0AkAMbhhb3hnvB4AaABAg.AOv5cYuQRGiAOvO20G7Qnb","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"}, {"id":"ytr_Ugxji0AkAMbhhb3hnvB4AaABAg.AOv5cYuQRGiAOwEazsRprc","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgwrzOutMXtnw3_hHAx4AaABAg.AOv47v_cMnvAOv6SypTxJq","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytr_UgwrzOutMXtnw3_hHAx4AaABAg.AOv47v_cMnvAOwon9QDg-G","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytr_UgwrzOutMXtnw3_hHAx4AaABAg.AOv47v_cMnvAOxIElKFGxk","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"} ]