Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "Maybe we need to create a new economic system that leverages AI + robotics. If h…" (ytc_UgyqPmP_L…)
- "AI (actually Virtual Intelligence) doesn't understand the information its sythes…" (ytc_Ugy2KruYD…)
- "*did* you read it though? the book seems to mostly be a series of prompt-driven …" (ytc_UgxAzGbKN…)
- "Could either of them get job in AI?? NEVER. - Neither of these two guy are qual…" (ytc_UgxoL4rYd…)
- "Amazing video! Love the ending... impactful. You really align a lot with how i s…" (ytc_UgzX0Q_aI…)
- "I think hoping the world will stop deep fakes is just an unrealistic pipe dream.…" (ytc_UgyGK9gsa…)
- "We need to be taught how to have an enriching leisure life. Promote the learning…" (ytc_Ugxlj9VQq…)
- "Those vaccines were funded by western citizens tax money. They are making money …" (rdc_grru6aj)
Comment
Searle's assumption that computers cannot 'truly understand' is poorly grounded. If we (a) believe that humans can truly understand things, and (b) do not believe this understanding comes from an external soul, and (c) believe that humans are composed entirely of a special arrangement of atoms, then we must conclude that it is *possible* for things which have no understanding (atoms) to be arranged in a way which does have understanding. This follows directly from the premises.
Having established it *is* possible for dead things to be conscious when arranged correctly, there is no _a priori_ reason to believe that we have the only possible arrangement. Thus it is at least *conceivable* that computers could be made to have understanding, and Searle is wrong to dismiss the possibility out of hand. My guess is he didn't dismiss it out of hand, but has at least a couple chapters, if not books, devoted to why he thinks this is impossible.
Having not read Searle's books, though, I am nevertheless willing to bet that he's making the same mistake a lot of people make when thinking about Strong AI. I call it: mistaking the substrate for the substance. What I mean is, human intelligence is built on a substrate of neuronal activity, but the substance of our consciousness is different from neuronal activity. Put simply: *we do not act like neurons*. We act like people, despite the fact that neurons are the basis for our brains. Therefore we would not expect a computer intelligence to act like a computer. We would expect that it would have emotion; it could get tired, bored, or distracted; it would forget things; it would confabulate stories in place of memories, etc. It would do all these things and more because those are features of consciousness, not features of neurons or computers. The substance of consciousness is nothing like the substrate of whatever is implementing the intelligence. I don't really have space here to argue why I believe those are necessary features of consciousness - heck, I don't really have space for the argument I am making (which is wasted on being placed in a poor venue), but I do believe that a Strong AI can be a "person," and that science fiction has done us a disservice in how it portrays Strong AI.
Source: youtube | Posted: 2016-08-08T23:0… | ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugjz3POZobkRVXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgjG9IN2vlvBR3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugi5aXzaXffIlHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw-FuAey90GKenR0Tp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzzDN5OW-oFjla9uhl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxge66eMTb_AZyJUc54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyLP2Io6x-05q2cq394AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzymjPO5meCiNJiOwF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxrBsrgNRywvdft_bR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyglFTNl4Wz-5n-nfd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
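The raw response is a JSON array of per-comment codes. A minimal sketch of how such a response might be parsed and validated follows; the dimension vocabularies below are inferred from the records shown on this page, and the actual codebook may define other values.

```python
import json

# Allowed values per coding dimension (inferred from the records above;
# hypothetical -- the real codebook may permit more categories).
DIMENSIONS = {
    "responsibility": {"none", "company", "developer", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "approval", "mixed", "outrage", "unclear"},
}

def parse_llm_response(raw: str) -> list:
    """Parse raw LLM output into coded records, checking every field."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# One record in the same shape as the response above.
raw = (
    '[{"id":"ytc_Ugjz3POZobkRVXgCoAEC","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)
records = parse_llm_response(raw)
```

Because LLM output is not guaranteed to be well-formed JSON (note the stray terminator the model can emit), `json.loads` failures are worth catching so the prompt can be retried or the text repaired before coding results are stored.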