Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a firmware engineer of 18 years who has studied AI (in general and neural networks specifically), I'd argue it is intelligent... which doesn't mean it is infallible. Modern LLMs are taught information, which is distilled down into associations within the neural network. This is very similar to how the human brain stores and accesses information. It's not a difference in kind; it's a difference in scale and resolution. However, something few people know is that LLMs like ChatGPT are made of MULTIPLE neural networks, and the weakest point right now is the text encoder, or tokenizer... the part that "understands" your prompt. When prompting LLMs with more... high-fidelity data, the results are MUCH better... SHOCKINGLY better. This is how other neural networks can be better at cancer diagnoses than human oncologists... because the "prompt" isn't human language, it's an image (a CT scan, or whatever type of scan it is they use for finding tumors). Human language is ambiguous, and ambiguity leads to misunderstandings. There are 101 ways to ask the same question; for example, someone doing some remodeling might ask "What is the size of a door" and get a completely wrong answer, while someone else asks "What are common dimensions for residential door frames in the United States" and gets the correct answer every single time.
youtube AI Moral Status 2025-10-31T05:4… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwi2gFHLIt1_avnR6R4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyAGCwcTLJCr9-3OcN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzgCXXvnF0P3RvL5tZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwY5TgvXnR4RuWpW7R4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgytuTwLw3PhBCkJ4jp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxB4afufiBBdaK8Y1Z4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxRBz5uOmIxDeSKKgt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxqsReF4-rEzfznCdB4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzOBLcNhvJRT5oFUiR4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw20fzZVJjlgE_hx-p4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
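The raw response is a JSON array with one coding record per comment, each carrying the comment id and the four coded dimensions. A minimal sketch of pulling one record out of the response programmatically (the ids and field names come from the response above; the two-record sample and the lookup code are illustrative, not part of any coding pipeline):

```python
import json

# Abbreviated two-record sample of the raw LLM response shown above.
raw = """[
  {"id": "ytc_Ugwi2gFHLIt1_avnR6R4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwY5TgvXnR4RuWpW7R4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the records by comment id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Retrieve the coding for the comment inspected above by its id.
coded = records["ytc_Ugwi2gFHLIt1_avnR6R4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
```

Looking records up by id rather than by array position keeps the check robust if the model returns the comments in a different order than they were sent.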