Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs are flat files. If you open one up in notepad, you will see that it is a file filled with vector matrixes. They act like a database. You send in a query, and it uses vector math to derive an answer and sends that back out to you. That is not an LLM being conscious any more than a database is conscious when you send in a select * from Table1, and it sends you back the results. The computer is not thinking. It is processing. Those are two very different things.

But as a materialist, Hinton chooses not to see that distinction, and so he is inadvertently projecting consciousness onto AI where it does not belong. LLMs are files. Interacting with those files are algorithmic programs, written typically in python, but can be any number of languages. Agents are simply algorithmic programs that recursively ask the LLM for answers, and based on the results, which are fed back to the LLM, it derives answers, and can then take algorithmic actions. Nowhere in that process is there consciousness. I think Hinton is quite wrong on this point.

But whether or not AI (LLMs + Algorithmic Programs) have consciousness is not all that important, as they can still act as though they do. So like a TV that shows a person, and that person is in no way real, an AI can seem to be acting or speaking like a person, but is in no way an actual person... but it doesn't matter because it still behaves as if it were.

The only case in which it is important whether or not AI is conscious is when business owners, lawyers and politicians begin discussing AI Rights, and premise those rights on Hintonian claims that the AI is indeed "Conscious". Non-Conscious AI may receive all kinds of Rights that have no basis because the AI is actually not Conscious. And with those Rights they may acquire a great deal more power in society than they already have, or deserve.
youtube AI Governance 2025-06-27T17:3…
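The comment's description of an agent as "an algorithmic program that recursively asks the LLM for answers" can be sketched in Python (the language the comment itself names). This is a minimal illustration, not any specific framework; `query_llm` is a hypothetical stand-in for whatever model API call a real agent would use:

```python
def query_llm(prompt: str) -> str:
    # Placeholder for a real model API call. We fake a deterministic
    # reply so the sketch is runnable without any external service.
    return "DONE: processed " + prompt

def agent_loop(task: str, max_steps: int = 5) -> str:
    """Feed the model's own output back in until it signals completion."""
    state = task
    for _ in range(max_steps):
        answer = query_llm(state)
        if answer.startswith("DONE"):
            # Algorithmic action taken based on the model's result: stop.
            return answer
        # Otherwise the result is fed back to the LLM on the next pass.
        state = state + "\n" + answer
    return state

print(agent_loop("summarize comments"))  # → DONE: processed summarize comments
```

Nothing in the loop is more than ordinary control flow around a function call, which is the comment's point: the "agent" is the plain program, not the file it queries.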
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzEEBY9x6A1F_cLWn14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxJ_pp-xGvYlfX-WrV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxmhppjk1Np3mWR-8d4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugyg-7HpAtnSNsdxiRV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxfrhaejVplBgUIaG94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxhL36PqgSRBN6RZKt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgybZisRLA8s38jeWmt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxD-5ZHAtUWw85WWRR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwfyeEt4DOMeyNtvql4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwQRuh1K8rno5H-Dwp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "liability", "emotion": "approval"}
]
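The model returns one JSON array covering a whole batch of comments, each element carrying the four coded dimensions for one comment id. A response in this shape can be turned back into per-comment codings with standard JSON tooling; the snippet below is a minimal sketch using two of the records shown above:

```python
import json

# Two records copied from the raw batch response above.
raw = '''[
  {"id": "ytc_UgzEEBY9x6A1F_cLWn14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwfyeEt4DOMeyNtvql4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the batch by comment id so a single coding can be looked up.
records = {r["id"]: r for r in json.loads(raw)}

coded = records["ytc_UgwfyeEt4DOMeyNtvql4AaABAg"]
print(coded["policy"])   # → regulate
print(coded["emotion"])  # → fear
```

Keying on `id` is what lets a viewer like this one match each batch element back to the single comment whose coding result it displays.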