Thursday, January 19, 2006

Does Searle Beg the Question?

Ray Kurzweil says yes. His response, available at the URL above, essentially amounts to that criticism.

What is it about Searle's argument that can seem convincing? It convinces because he places a human -- someone who, for all that it contributes to the question, might as well be ourselves -- at the centre of his example.

If you, personally, could not be regarded as "understanding Chinese", the argument runs, then nobody and nothing could. The example is posed in such a way that the primary agent within the computer -- you -- effectively fails to comprehend the meaning of the exchanges carried out. As a real computer is driven not by an agent but by physical interactions alone, it seems no more likely to be a conscious system than the hypothetical Chinese room.
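The mechanics of the room can be sketched in a few lines of code -- a minimal, invented illustration (not from Searle or Kurzweil), in which the "rule book" is just a lookup table and the agent does nothing but match symbols. The phrases and rules here are hypothetical, chosen only to show that fluent-looking output requires no representation of meaning anywhere in the system.

```python
# A toy sketch of the Chinese Room: the rule book is a lookup table,
# and the "agent" applies it purely by matching symbol strings.
# All phrases and rules here are invented for illustration only.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room_reply(symbols: str) -> str:
    """Return the scripted output for an input symbol string.

    The lookup compares symbols character-for-character; nothing in
    this function represents the *meaning* of the exchange.
    """
    # Unrecognised input gets a canned fallback: "Sorry, I don't understand."
    return RULE_BOOK.get(symbols, "对不起，我不明白。")

print(room_reply("你好吗？"))  # a fluent-looking reply, with zero comprehension
```

Whether the point survives scaling up from a two-entry table to a system that could pass for a speaker is, of course, exactly what the argument is about.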

Kurzweil effectively argues from a position which I myself adopt -- that the brain is demonstrably a machine, and that it is absurd to argue that a brain cannot understand language.

However, it seems obvious that in the Chinese room example, the human agent genuinely fails to understand Chinese. Is the same thing true for the overall system? Is it a problem of design rather than necessity? What, then, is it that makes the Chinese room brain an uncomprehending system, but the human brain so wonderful?

Kurzweil offers no direct answers, other than to suggest that the problem is not a logical one (i.e. some contradiction in the very concept of a conscious computer) but an in-the-world one -- i.e. a problem with particular computers.

His argument is, I think, interesting. It seems to me to be useful in refuting the supposed logic of Searle's Chinese room.



Anonymous Anonymous said...

Olie NcLean here,

The first version of the Chinese Room that Kurzweil picked is a particularly bad example - there's a computer in the room performing any of a whole set of operations on the sentences. This immediately means that the "systems response" is appropriate - although the English guy in the room might not understand any Chinese, there's something in the room that does, and so the room understands Chinese.

The variant on page 2 is again a bad example - there are any number of calculations happening, so there is room for some sort of active intelligence to slip in on the side.

I accept that there can be machine sapience. It might even be possible to test for it in some way. But the Turing Test can be fooled, and Kurzweil's arguments are diversionary.

1/19/2006 04:50:00 PM  
Anonymous Anonymous said...

Interestingly enough, although Kurzweil attacks Searle's arguments (and adds in some ad hominem to boot), he seems to agree with the implication of the Chinese Room example - that the ability of the room to answer questions is no proof of conscious understanding (which is what would imply consciousness, no?).

So what Kurzweil has done seems to be this: (1) attack the feasibility of the room to answer Chinese questions; (2) say that if the overall system can answer Chinese questions, it can "understand Chinese"*

* doesn't understanding imply a conscious understanding, ie, Consciousness?

(3) attack Searle for not accepting that we can't test entities for consciousness - e.g. the snail; (4) attack Searle for being anti-machine.

Searle is anti-machine, and deserves an intellectual flogging for it, but that doesn't make Kurzweil's first two moves any more rational.

1/19/2006 05:26:00 PM  
Blogger Mitchell said...

"the brain is demonstrably a machine"

This is the root of the problem - an atomistic physical ontology which by construction does not contain perceptual qualities, "gestalts", intentional states, and probably many other ontological aspects of consciousness. The controversy over Searle's argument is actually a minor symptom of an incredibly profound problem, namely the extent to which human beings still have only the most superficial grasp on the nature of reality. The one reason for optimism is that we've discovered entanglement, a strictly physical phenomenon which shows that naive pursuit of atomism and reductionism runs into trouble. That at least puts us beyond the 19th century, when you had atomistic materialism, German idealism, and no hope of communication between the two. But we still have a very long way to go.

1/20/2006 11:32:00 AM  
Blogger MelbournePhilosopher said...

All good comments, I think.

The mystery, as I hope I alluded to and which Mitchell definitely has picked up on, is how machines make minds.

My example for this is the brain. However, I also accept that our science doesn't have the ontological tools to describe this process.

It's easy to "prove" that minds don't reduce to our current physics, because, as he says, our current physics contains no description of consciousness.

I do not, by claiming that the brain is a machine, attempt to claim that the mind reduces to our current understanding of physics. Rather, I enter it as physical proof against both sides of the materialist / idealist debate.

Materialism must be ontologically expanded before it can account for consciousness; idealism must recognise the clearly physical instantiation of a mind.


1/20/2006 12:33:00 PM  
Blogger Clark Goble said...

I think you misunderstand Searle's point. I can't see him denying that the brain is a machine. (See his Rediscovery of the Mind, for instance, where his other example is the difference between a gas engine and a simulation of an engine.)

I think Searle's point is simply that people who think the brain is just a machine like a lawn mower are missing something fundamental - that there is something about the parts that has been neglected.

Now Searle's problem is that he can't really offer an explanation of what this is, yet he rejects both radical emergence and variations of vitalism or panpsychism. So I think one is justified in thinking Searle's approach is problematic.

But I do think he is often misunderstood.

1/20/2006 02:15:00 PM  
