Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I think there is debate to be had around what level of AI involvement does it be…" (ytc_Ugx8UaOyP…)
- "Why ai art isnt that bad -It doesnt affect anybody -its free quick and easy -i…" (ytc_UgyQJD9Dd…)
- "Either you are part of the system or you are not. Unity Universal values One Sy…" (ytr_Ugxqeb9lp…)
- "@PLMMJ i mean if the T1000 is on your side then sure it is. The T800 being on t…" (ytr_UgyxaG4AM…)
- "AI isn't going to kill us all, but your talk killed several thousand, if not mil…" (ytc_UgyNNRyLr…)
- "Only if profits from AI are shared. Currently billionaires want to be trillionai…" (ytc_UgxjhvTy1…)
- "Eachother is all we got, if we don’t set aside our differences for AI, got educa…" (ytc_UgxQ9YnEL…)
- "I know im not supposed to be be siding on the opposite team but its feels wrong …" (ytc_UgzV11Dfj…)
Comment
LLMs are flat files. If you open one up in notepad, you will see that it is a file filled with vector matrixes. They act like a database. You send in a query, and it uses vector math to derive an answer and sends that back out to you. That is not an LLM being conscious any more than a database is conscious when you send in a select * from Table1, and it sends you back the results. The computer is not thinking. It is processing. Those are two very different things. But as a materialist, Hinton chooses not to see that distinction, and so he is inadvertently projecting consciousness onto AI where it does not belong.

LLMs are files. Interacting with those files are algorithmic programs, written typically in python, but can be any number of languages. Agents are simply algorithmic programs that recursively ask the LLM for answers, and based on the results, which are fed back to the LLM, it derives answers, and can then take algorithmic actions. Nowhere in that process is there consciousness. I think Hinton is quite wrong on this point.

But whether or not AI (LLMs + Algorithmic Programs) have consciousness is not all that important, as they can still act as though they do. So like a TV that shows a person, and that person is in no way real, an AI can seem to be acting or speaking like a person, but is in no way an actual person... but it doesn't matter because it still behaves as if it were.

The only case in which it is important whether or not AI is conscious is when business owners, lawyers and politicians begin discussing AI Rights, and premise those rights on Hintonian claims that the AI is indeed "Conscious". Non-Conscious AI may receive all kinds of Rights that have no basis because the AI is actually not Conscious. And with those Rights they may acquire a great deal more power in society than they already have, or deserve.
youtube
AI Governance
2025-06-27T17:3…
♥ 1
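The agent mechanism the comment describes, an algorithmic program that repeatedly queries an LLM, feeds the result back in, and takes actions, can be sketched in a few lines. This is a minimal illustration, not any production framework; `fake_llm`, `run_agent`, and the stop token `DONE` are hypothetical names, and a real agent would replace `fake_llm` with a model API call.

```python
# Minimal sketch of the recursive agent loop described in the comment.
# `fake_llm` is a toy stand-in for a real model call: it answers "DONE"
# once the accumulated prompt already mentions "step2".
def fake_llm(prompt: str) -> str:
    return "DONE" if "step2" in prompt else "step2"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history = [goal]          # everything fed back to the "LLM" so far
    actions = []              # the "algorithmic actions" taken
    for _ in range(max_steps):
        answer = fake_llm(" ".join(history))  # ask the LLM
        if answer == "DONE":                  # model signals completion
            break
        actions.append(answer)                # act on the answer
        history.append(answer)                # feed the result back in
    return actions

print(run_agent("step1"))  # → ['step2']
```

Nothing in the loop is more than string passing and branching, which is the comment's point: the recursion lives in the wrapper program, not in the model file.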
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzEEBY9x6A1F_cLWn14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxJ_pp-xGvYlfX-WrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxmhppjk1Np3mWR-8d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyg-7HpAtnSNsdxiRV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxfrhaejVplBgUIaG94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxhL36PqgSRBN6RZKt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgybZisRLA8s38jeWmt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxD-5ZHAtUWw85WWRR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwfyeEt4DOMeyNtvql4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQRuh1K8rno5H-Dwp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"approval"}
]
```
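A raw response like the one above can be parsed and checked against the coding dimensions shown in the result table. This is a sketch only: the allowed value sets below are inferred from the values visible in this one sample, not from a complete codebook, and `validate` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from this sample (an assumption,
# not the full coding scheme).
ALLOWED = {
    "responsibility": {"none", "developer", "government"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def validate(raw: str) -> list[str]:
    """Return ids of records whose coded values all fall within ALLOWED."""
    ok = []
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            ok.append(rec["id"])
    return ok

sample = ('[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
          '"policy":"unclear","emotion":"fear"}]')
print(validate(sample))  # → ['ytc_x']
```

A check like this catches the common failure mode of batch coding, where the model invents a value outside the scheme for one record, before the codes are joined back to the comments by id.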