Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Searle's assumption that computers cannot 'truly understand' is poorly grounded. If we (a) believe that humans can truly understand things, and (b) do not believe this understanding comes from an external soul, and (c) believe that humans are composed entirely of a special arrangement of atoms, then we must conclude that it is *possible* for things which have no understanding (atoms) to be arranged in a way which does have understanding. This follows directly from the premises. Having established that it *is* possible for dead things to be conscious when arranged correctly, there is no _a priori_ reason to believe that we have the only possible arrangement. Thus it is at least *conceivable* that computers could be made to have understanding, and Searle is wrong to dismiss the possibility out of hand.

My guess is he didn't dismiss it out of hand, but has at least a couple of chapters, if not books, devoted to why he thinks this is impossible. Having not read Searle's books, though, I am nevertheless willing to bet that he's making the same mistake a lot of people make when thinking about Strong AI. I call it: mistaking the substrate for the substance. What I mean is, human intelligence is built on a substrate of neuronal activity, but the substance of our consciousness is different from neuronal activity. Put simply: *we do not act like neurons*. We act like people, despite the fact that neurons are the basis for our brains.

Therefore we would not expect a computer intelligence to act like a computer. We would expect that it would have emotion; it could get tired, bored, or distracted; it would forget things; it would confabulate stories in place of memories, etc. It would do all these things and more because those are features of consciousness, not features of neurons or computers. The substance of consciousness is nothing like the substrate of whatever is implementing the intelligence.
I don't really have space here to argue why I believe those are necessary features of consciousness - heck, I don't really have space for the argument I am making (which is wasted on such a poor venue) - but I do believe that a Strong AI can be a "person," and that science fiction has done us a disservice in how it portrays Strong AI.
youtube 2016-08-08T23:0… ♥ 7
Coding Result
| Dimension      | Value   |
|----------------|---------|
| Responsibility | unclear |
| Reasoning      | unclear |
| Policy         | unclear |
| Emotion        | unclear |

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugjz3POZobkRVXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgjG9IN2vlvBR3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugi5aXzaXffIlHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw-FuAey90GKenR0Tp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzzDN5OW-oFjla9uhl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugxge66eMTb_AZyJUc54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyLP2Io6x-05q2cq394AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzymjPO5meCiNJiOwF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxrBsrgNRywvdft_bR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyglFTNl4Wz-5n-nfd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
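A minimal sketch of how a raw response in this shape could be turned into per-comment coding dimensions like the table above. This assumes the response is a JSON array of objects, each carrying an `id` plus one key per dimension; the function name `parse_coding_response` is illustrative, not part of the tool shown here.

```python
import json


def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response (a JSON array of objects, each
    with an 'id' plus one key per coding dimension) into a mapping
    from comment id to its coded dimension values."""
    records = json.loads(raw)
    return {
        rec["id"]: {key: val for key, val in rec.items() if key != "id"}
        for rec in records
    }


# Example with one record from the response above (truncated for brevity):
raw = (
    '[{"id":"ytc_Ugjz3POZobkRVXgCoAEC","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)
coded = parse_coding_response(raw)
# coded["ytc_Ugjz3POZobkRVXgCoAEC"]["emotion"] → "indifference"
```

Note that `json.loads` would reject the malformed trailing `)` that sometimes appears in raw model output, so a production version would likely need to repair or re-request such responses before parsing.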