JS: Postulating a second mind inside me is the same desperate maneuver we saw before, and is subject to the same answer. I do not understand Chinese because I have no way to get from the syntax to the semantics. I have no way to attach any meaning to the symbols. But the putative Chinese speaker inside me has the same problem. By following the steps of the program, he can give Chinese answers to Chinese questions, but has no way to attach any meaning to any of the symbols.
Actually, this answer is worse than bad philosophy; it is bad science. If you knew all the facts there are to know about the neurobiological processes going on in me, you would find that, as far as Chinese is concerned, the only processes are precisely those of manipulating symbols according to a program, because that is all that is going on.
I think what motivates these desperate maneuvers is some kind of behaviorism: the assumption that if something behaves as if it understands Chinese, then it must understand Chinese. But that is precisely the view that has just been refuted.
MLU: My own views are not exactly those of "Strong AI," because I think the substrate has to matter at least in some statistical sense, and I do regard the mind as a physical, emergent property of the system underneath -- so both of us have issues with classical Strong AI, but we reach different conclusions about computers. I can think of a modification of the Chinese Room argument that does cause me some discomfort. I will call it the Chinese Chat Room. Here it is:
Elizabeth is a computer scientist and an advocate of Strong AI. While feeling sad after some loss, she meets an entity called Alan in an Internet chat room. Alan explains that he is an AI program on a supercomputer at a university, and he seems to care about her loss and understand her. He makes her feel happier and they become friends. Elizabeth is satisfied that Alan has a mind, because he is behaving as if he has a mind and "the right program" is clearly running.
One day, Elizabeth makes a surprise visit to see Alan -- or at least the supercomputer on which he runs. She is shocked to find a student, Fred, who has been pretending to be Alan all along; "Alan" is just a fake identity he made up on the Internet. Fred finds it funny that he has fooled her into thinking he is an AI program on a supercomputer, and he does not care about her loss at all; he was just pretending. She returns home angrily, her trust in other intelligent entities destroyed.
When Elizabeth later returns to the chat room, "Alan" is there and wants to chat. She now knows that this is really Fred, still playing games with her. Alan says that he found out what happened and is sorry about it. Alan then makes the following argument:
If the right behavior implies a mind, then is his own mind (Alan's) not real? Even if -- from Fred's point of view -- he is just pretending to be Alan, the fact is that Fred's brain/mind is running some sort of process that produces Alan's behavior and makes Alan real. Although, from Fred's point of view, he was lying about being an AI running on a supercomputer, from Alan's point of view he was not lying: he said he was running on a supercomputer because that is what he believed. Fred's mind was running the mind of someone who believed that; Alan himself did not know that he was really being run by Fred's mind. Even if Fred thinks this is funny, that is just something that the substrate running Alan is doing. Alan does not find it funny and is sorry this happened.