John Searle is Slusser Professor of Philosophy at the University of California, Berkeley, and has made notable contributions to the philosophy of language and the philosophy of mind. He was awarded the Jean Nicod Prize in 2000 and the National Humanities Medal in 2004. Professor Searle is well-known for his criticism of the idea that artificial intelligence research will lead to conscious machines, and in particular for his famous Chinese Room Argument.
Interview conducted by Paul Almond.
MLU: Professor Searle, thank you for joining us. I'll get straight to the issue that Machines Like Us readers will be interested in: can a computer think?
JS: It all depends on what you mean by “computer” and by “think.” I take it that by “thinking” you mean conscious thought processes of the sort I am now undergoing as I answer this question, and that by “computer” you mean anything that computes. (I will later give a more precise characterization of “computes.”) So construed, all normal human beings are thinking computers. Humans can, for example, add one plus one to get two, and for that reason all human beings are computers; all normal human beings can think; so there are a lot of computers that can think, and any normal human being is a thinking computer.
People who ask this question, “can computers think?”, generally don’t mean it in that sense. One of the questions they are trying to ask could be put this way: Could a man-made machine -- in the sense in which our ordinary commercial computers are man-made machines -- could such a machine, having no biological components, think? Here again I think the answer is that there is no obstacle whatever in principle to building a thinking machine, because human beings are thinking machines. If by “machine” we mean any physical system capable of performing certain functions, then all human beings are machines, and their brains are sub-machines within the larger machines, and brains can certainly think. So some machines can think, namely human and many animal brains, and for that reason the larger machines -- humans and many animals -- can think.
But once again this is not the only question people are really asking. I think the question they are really trying to ask is this: Is computation by itself sufficient for thinking? If you had a machine that had the right inputs and outputs and computational processes in between, would that be sufficient for thinking? And now we get to the question: What is meant by “computational processes”? If we interpret this in the sense made clear by Alan Turing and his successors, where computation is defined as formal operations performed over binary symbols (usually thought of as zeroes and ones, but any symbols will do), then computation so defined would not by itself be sufficient for thinking. Just having syntactically characterized objects such as zeroes and ones, together with a set of formal rules for manipulating them (the program), is not by itself sufficient for thinking, because thinking involves more than just manipulating symbols: it involves semantic content. The syntax of the implemented computer program is not by itself constitutive of, nor is it by itself sufficient to guarantee the presence of, actual semantic content. Human thought processes have actual semantic content.
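[Editor's note: the point about purely syntactic symbol manipulation can be made concrete with a toy sketch. The rulebook and function below are invented for illustration and do not come from the interview; they show a program that transforms strings of zeroes and ones by their form alone, with no access to any meaning an observer might attach to them.]

```python
# Formal rules: input pattern -> output pattern, matched purely by the
# shape of the symbols. The specific entries here are arbitrary.
RULEBOOK = {
    "0110": "1001",
    "1010": "0101",
}

def run_program(symbols: str) -> str:
    """Apply the formal rules to a string of 0s and 1s.

    The function inspects only the syntactic form of `symbols`;
    whatever semantic content the strings may carry for an outside
    observer plays no role in the computation.
    """
    return RULEBOOK.get(symbols, "0000")  # fixed default for unmatched input

print(run_program("0110"))  # -> 1001
```

On Searle's view, nothing in such rule-following, however elaborate, amounts to the program understanding what the symbols are about.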