Machines Like Us

Machines Like Us interviews: John Searle

Sunday, 15 March 2009

JS (continued): Now let me add of course that a machine might be doing computational processes in the Turing sense of manipulating zeroes and ones and might also be thinking for some other reason, but that is not, I take it, what the people who think this question is important are asking. What they want to know is: Is computation, as defined by Alan Turing as the manipulation of symbols according to an algorithm, by itself sufficient for or constitutive of thinking? And the answer is no. The reason can be stated in one sentence: The syntax of the implemented computer program is not by itself constitutive of nor sufficient for thinking.

I proved this point many years ago with the so-called Chinese Room Argument. The man in the Chinese room manipulates the Chinese symbols, he gives the right answers in Chinese to the questions posed in Chinese, but he understands nothing, because the only processes by which he is manipulating the symbols are computational processes. He has no understanding of either the questions or the answers; he is just a computer engaged in computational operations.

Paradoxically, the problem is not that the computer is too much of a machine to be able to think, but rather that it is not enough of a machine. Human brains are actual machines, and like other machines their operations are described in terms of energy transfers. The actual computer you buy in a store is also a machine in that sense, but -- and this is the crucial point -- computation does not name a machine process. It names an abstract mathematical process that we have found ways to implement in machines; unlike thinking, computation is not a machine process.

So, back to the original question: Can a computer think? The answer is yes, because humans are computers and they can think. But to the more important question: Is computation thinking? The answer is no.

MLU: Let me see if I have this straight. You do not seem to be saying that there is any sort of "immaterial soul" or anything beyond our understanding in human consciousness. You seem to accept that brains are physical, that what they do is physical and could be described as computation, and that we could build machines that think, at least in principle. Where you differ from some people who think computers can think seems to be in this idea that thinking is not computation. I think our readers will find a thought experiment helpful here. It may be a bit extreme, but I think it should illustrate your views.

Suppose we could somehow "scan" the brain of a human in as much detail as we wanted -- right down to the molecular level or maybe even the atomic level or beyond, if needed. Let's ignore the uncertainty principle as it is a complication we don't need (unless you think it is relevant). We feed the information that we get from scanning the brain into a computer. We program the computer to construct a model of the brain and simulate its workings -- maybe interacting with the outside world, or maybe interacting with a virtual reality in the same sort of way that we program computers to simulate weather or other physical processes. Would you expect the simulation to act like a human, and would you expect it to be conscious and deserve anything like human rights?

JS: The question you are asking me is essentially this: Would a computer simulation of the brain processes that are sufficient for consciousness itself be sufficient for consciousness? We assume that the simulation is done to any degree of exactitude you like; it could be down to the level of neurons or down to the level of sub-atomic particles, and it doesn't matter for the answer. The analogies you provide are sufficient to answer the question.