Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Chinese Room thought experiment is effective because it sneaks in a hidden assumption and exploits it. Obviously the room actually understands. If the book were an actual book, or a set of lookup tables, it would be larger than Earth. In order for the book to actually be able to carry on an arbitrary conversation in Chinese (or any language), it has to be a neural network. Whether it is conscious is a completely separate question that has nothing to do with the concept of "understanding." The symbol grounding problem has been solved. If you take every concept and relate it to every other concept with very large vectors, it turns out you can encode the actual meanings of words. The world's leading experts have no problem saying that LLMs reason and understand, because they have the technical background. They know how LLMs are trained, and how compression of the training data results in independently rediscovering higher-level patterns and abstractions that allow them to generalize out of distribution. To directly answer your question: Yes, it is an empirical fact that text models are world models. See "Othello-GPT" and "Language Models Represent Space and Time" for early text-only examples.
youtube AI Moral Status 2025-10-30T22:2… ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugwn1UHxRPEWCR7_9eF4AaABAg.AOv7pWZq5QkAOvkJTayCuD","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugwn1UHxRPEWCR7_9eF4AaABAg.AOv7pWZq5QkAOvoHZb3ZVa","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgydX9KsXkvPOd_CNVt4AaABAg.AOv7fyOrVw-AOv_fy9Rf7H","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgxgMNFsGxAFmiEM5tx4AaABAg.AOv7exHvnT1AOv_I3KavVr","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugz1e5I1tkoZ41iRjf14AaABAg.AOv7Zlb2_pgAOvD4vEejDo","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwXGNnOa3vEPzKtm814AaABAg.AOv7I-ncmw0AOv7SIfa5Dd","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgwXGNnOa3vEPzKtm814AaABAg.AOv7I-ncmw0AOvAvbbVBCm","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgwXGNnOa3vEPzKtm814AaABAg.AOv7I-ncmw0AOxS6sKZjIe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgytcW8C913YvoOJuI94AaABAg.AOv78JclSN4AOw7tnJIe-p","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxKoz5zpyRS9uGUv9l4AaABAg.AOv71aZLtrDAOvJ9nbl53N","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
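The raw response above is a JSON array of per-comment codes, one object per comment with the four coding dimensions (responsibility, reasoning, policy, emotion) plus an id. A minimal sketch of how such a response could be parsed and tallied per dimension; the `SAMPLE` payload and the `tally` helper are illustrative, not part of the actual coding pipeline:

```python
import json
from collections import Counter

# The four coding dimensions seen in the results above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of code objects)
    and count how often each value appears per dimension."""
    rows = json.loads(raw)
    return {dim: Counter(row[dim] for row in rows) for dim in DIMENSIONS}

# Hypothetical two-row sample mirroring the response format above.
SAMPLE = """[
  {"id":"ytr_example1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_example2","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

counts = tally(SAMPLE)
print(counts["emotion"])  # Counter({'indifference': 1, 'fear': 1})
```

Counting per dimension rather than per row makes it easy to spot when a batch collapses into a single default code (e.g. mostly "unclear"/"indifference"), which is a common failure mode worth checking in LLM-coded data.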