MLU (continued): You have mentioned the Chinese Room Argument and I would like to look at that in a bit more detail. For any of our readers who do not know of this (and some of you will), this is a philosophical argument developed by Professor Searle to show that computation is not sufficient to produce a mind. The idea is that you have a room into which you can feed sentences in Chinese. Replies come out of the room on cards with Chinese characters on them. The room seems to be able to have an intelligent conversation with you. For example, you can ask it, in Chinese, what its views on politics are, and it can tell you. Inside the room, however, there is just a man and a filing system. The man receives the Chinese characters fed into the room and follows a set of complex rules for manipulating a huge filing system of cards bearing Chinese characters, moving the cards about and ultimately sending cards out of the room as answers to the person outside. The man is "driving" the entire process, but he does not speak Chinese. He may not even know that he is having a conversation in Chinese.
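[For readers who think in code, here is a minimal sketch of the room as a pure symbol-shuffling procedure. The rulebook entries and the function name are our own illustration, not part of Searle's argument, and a genuine room would need far richer rules than a lookup table; but the structure is the same: symbols in, symbols out, and no step that requires knowing what any symbol means.]

```python
# A toy sketch of the rule-following setup described above. The "rulebook"
# here is a plain lookup table with hypothetical entries; the operator only
# matches shapes against it and never interprets them.

RULEBOOK = {
    # uninterpreted input shapes -> the output shapes the rules prescribe
    "你对政治有什么看法？": "我认为政治需要更多的诚实。",  # a politics question and reply
    "你好吗？": "我很好，谢谢。",
}

def operate_room(incoming_card: str) -> str:
    """Follow the rules mechanically: match the input card's shape and copy
    out the listed reply. No step requires understanding Chinese."""
    return RULEBOOK.get(incoming_card, "对不起，我不明白。")

print(operate_room("你好吗？"))  # looks like conversation; no understanding inside
```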
Your argument is that the Chinese room may display all the outward signs of understanding Chinese, but there is no understanding there: the man does not know what is going on. The man is like a computer, following a program without any real understanding. The room is merely mimicking an understanding of Chinese. We might even imagine a room like this running the brain simulation we just discussed: it might require a vast filing system and take centuries to answer a single question, but that should not matter to the main point. Your point is that there is clearly no understanding in a room like this. The man does not understand what he is doing, and following a program does not by itself produce understanding or a mind.
Some of our readers will already be thinking of an obvious reply, so I will make it for them. Our instinct may be to look for understanding in the man because he is the most obviously intelligent thing in there, but his role here is just that of a simple machine component. The room and the man together form a system, and that system has an understanding that neither the room by itself nor the man by himself possesses. The understanding does not have to be in the man. He should not be expected to know what is going on any more than a neuron in my brain should be expected to know what is going on. What would you say to that?
JS: This question really contains two separate questions, one about behaviorism, and one about entire systems. I will take these in order.
It is possible in principle to build a machine that behaves exactly like a human being but has no consciousness or intentionality. No mental life at all. Indeed, in a small way we are already making fragments of such machines, with things like telephone answering machines and various sorts of computerized information-processing systems. I could, if I thought it worthwhile, program my computer to shout out "I think, therefore I am", or to produce whatever other Cartesian behavior would suggest the presence of consciousness, even when there is none. So the possibility in principle of a zombie that behaves just like a human being seems to me something that cannot be ruled out a priori. It is no doubt difficult and perhaps impossible in practice, but in theory it is easy to imagine such a machine.
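[Searle's answering-machine point can be made literal in a few lines. This is our own toy illustration, not anything from the interview: a program whose only "Cartesian" credential is a canned string.]

```python
# A toy illustration of behavior without mentality: whatever you say to it,
# this "zombie" responds with the standard Cartesian assurance. The behavior
# suggests consciousness; the implementation plainly contains none.

def cartesian_zombie(utterance: str) -> str:
    # No perception, no inference, no inner life: just a fixed reply.
    return "I think, therefore I am."

print(cartesian_zombie("Are you conscious?"))
```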