Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Or you're being too narrow with your definition of intelligence... Intelligence is merely the ability to form meaningful associations between separate bits of information. It requires, as prerequisites, the ability to store and recall those bits of information as well. Neural networks do all of those things. As a firmware engineer of 18 years who has studied AI (in general and neural networks specifically) I'd argue it is intelligent... which doesn't mean it is infallible.

Modern LLM's are taught information, which is distilled down into associations within the neural network. This is very similar to how the human brain stores and accesses information (on a high level, I have a friend who is a neuroscientist researcher so I know that the underlying mechanisms are very different, of course). It's not so much a difference in kind, it's a difference in scale and resolution.

However, something few people know is that LLM's like ChatGPT are made of MULTIPLE neural networks, and the weakest point right now is the text encoder... or tokenizer... the part that "understands" your prompt. When prompting LLM's with more... high fidelity data the results are MUCH better... SHOCKINGLY better. This is how other neural networks can be better at cancer diagnoses than human oncologists... because the "prompt" isn't human language, it's an image (a CT scan, or whatever type of scan it is they use for finding tumors).

Human language is ambiguous, ambiguity leads to misunderstandings. There 101 ways to ask the same question, for example someone doing some remodeling might ask "What is the size of a door" and get a completely wrong answer while someone else asks "What are the common dimensions for a residential door frame in the United States" and get the correct answer every single time.
youtube AI Moral Status 2025-10-31T05:4… ♥ 9
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugyu6z4Pp0svDkQdioV4AaABAg.AOvWlkghdIeAOwHPKKoVXh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzuZRURQSeeS-QzHsR4AaABAg.AOvWYTnzRcKAOwAgj_NNJj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgxT7RhFToA3B5KS5el4AaABAg.AOvVT1lAWuUAOvX28fpa8B","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugxm7-V2cw080X9sQZx4AaABAg.AOvVHzWnHuTAOwJ3QLWO2U","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyCfMdD9BZ9eMYKsqd4AaABAg.AOvUflNyaVbAOvlKIM-c07","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyCfMdD9BZ9eMYKsqd4AaABAg.AOvUflNyaVbAOwEBayQVOR","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgyCfMdD9BZ9eMYKsqd4AaABAg.AOvUflNyaVbAOwFlVGWy1q","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugw-S1nEvQFHU322zGt4AaABAg.AOvSWEDCLLeAOw5yAMjmCo","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugw-S1nEvQFHU322zGt4AaABAg.AOvSWEDCLLeAOw9OWiySM3","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugw-S1nEvQFHU322zGt4AaABAg.AOvSWEDCLLeAOwB90dnOe1","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
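The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a response could be parsed into a lookup table keyed by comment id (the `parse_codes` helper and the shortened `ytr_a`/`ytr_b` ids are illustrative assumptions, not part of the actual pipeline):

```python
import json

# Hypothetical sample shaped like the raw response above,
# with shortened placeholder ids for readability.
raw = '''[
  {"id": "ytr_a", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_b", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]'''

# The four coding dimensions recorded in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Map each comment id to its coded dimensions.

    Missing dimensions fall back to "unclear", mirroring the
    default used in the coding table.
    """
    records = json.loads(raw_response)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = parse_codes(raw)
print(codes["ytr_a"]["emotion"])  # approval
```

Keying by id makes it easy to join a code record back to its source comment, which is how a row like the Dimension/Value table above would be populated.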