Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
These monkeys think everyone else is ignorant and doesn't understand how AI work…
ytr_Ugzbc4Yyr…
The ai isn’t lying when it’s apologizing it’s just a highly probable output. Tha…
ytc_Ugx-Byhds…
If we take the "What Is It Like to Be a Bat?" definition of consciousness, then …
ytc_UgyZMUEYP…
It seems deepfake not because Biden was not blinking but because he was speaking…
ytc_UgwXH-Mx7…
PS: We have NOT solved the problem of immortality. AI tro s not smarter than hum…
ytr_UgyG8nxxx…
I don't think the ai itself is racist lmao it's just operating with and creating…
ytc_Ugx_e4qmX…
If autonomous is the future, the question doesn't become how to protect trucking…
ytc_UgyAElhuk…
If AI ruled the world it will do a better job then the people in power at the mo…
ytc_Ugz8xDVGB…
Comment
Geoffrey Hinton may have impressive credentials, but in my view, some of his recent statements come across as exaggerated and unconvincing. It seems to me that he may overstate the capabilities of large language models (LLMs), perhaps amplifying the perceived impact of a technology he helped develop.
He has repeatedly suggested that LLMs could eventually reach some form of consciousness. He has also criticized Noam Chomsky’s work in linguistics, despite not being a linguist himself. In addition, he often makes dramatic and even apocalyptic predictions about the future of AI.
LLMs are undoubtedly a groundbreaking technology. However, at their core, they are highly sophisticated text-prediction systems trained on vast amounts of data. They do not possess agency, intentions, or intrinsic purpose. If AI were ever to achieve true sentience, it is unlikely—at least in my view—that it would emerge from the current LLM-based architecture.
youtube
AI Moral Status
2026-02-28T17:3…
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugwoy5uzPALVwaCEnCN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgySlXL1kgdhyTVK3C94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwSkjS6d2rIQ_gR8wl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxikyas9WNEjKWfEdR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx00JLOWSZdXt3MXaJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymJ54IUr4vzWdK-ER4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOkO-nWpObXU0DvxZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7te4rd36a4NAyAuZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwnBVXWUj6J3jjwqy54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyTSdP4agTAWhzF0UN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
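The raw LLM response above is a JSON array in which each element codes one comment along four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked is shown below; the allowed value sets are assumptions inferred only from the labels visible in this sample, and the actual codebook may define additional categories.

```python
import json
from collections import Counter

# Assumed value sets, inferred from the sample output above.
# The real codebook may contain more categories than are shown here.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "user", "distributed"},
    "reasoning": {"unclear", "deontological", "virtue", "consequentialist"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "outrage", "mixed", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject any row whose
    label falls outside the assumed value sets."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical two-row response, shaped like the sample above.
raw = """[
  {"id": "ytc_a", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_b", "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"}
]"""

rows = validate_codings(raw)
# Tally one dimension across the batch, e.g. for a summary view.
print(Counter(r["responsibility"] for r in rows))
```

A tally like the `Counter` call is one way a dashboard such as this one could aggregate per-dimension counts across a batch of coded comments.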