I think computers could develop a background sense of the world if they learned rather than just being programmed. The question of how many of the possible cognitive systems that could exist would care about humans applies to humans as well -- every day we run the risk that the people around us could harm us, but at least we know that other humans were made in much the same way as we were and have had the same kinds of learning experiences. We have a lot of experience in dealing with other people, and we do not need to worry about other humans having intelligence far above our own -- yet. We cannot be so sure about machines. We should take this issue seriously. An AI should be treated with caution, not because it is inherently aggressive or would automatically seek to destroy us, but because it is different.
Each one of us has an evolutionary heritage leading up to our birth, and a heritage of experiences that shaped us afterwards, and together these make us see things a certain way. We should probably try to ensure that the factors determining how an advanced AI system views us -- the way it works, the learning experiences it has -- are as similar as possible to those that determine how we view other people. We need to be cautious about AI and respect what we are dealing with. Oh, and forget Isaac Asimov’s Three Laws of Robotics: they are a non-starter.
MLU: Why do you think so?
PA: When I say “forget the Three Laws of Robotics,” what I really mean is forget the idea that we can somehow program ethics. The three laws may sound simple, but they rest on complicated, abstract ideas such as “robot,” “injure,” “human being,” “action,” “inaction,” and “harm.” Whether you write the laws in English or in some abstract, mathematical form makes no difference: you still have to pin these concepts down, and I don’t think that is practical. I am not just saying that it would be hard to specify them without leaving dangerous loopholes: I do not think we could get even remotely close to these laws. We would not know how to set up a machine with anything like the understanding it would need.
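To see the problem concretely, imagine the most naive attempt to write the First Law as code. The sketch below is purely illustrative -- the predicate names are hypothetical placeholders, not anything anyone has built -- and the point is that each one stands in for a concept nobody knows how to implement.

```python
# Illustrative sketch only: a naive attempt to encode Asimov's First Law.
# Each predicate below names an abstract concept ("human", "injure", "harm")
# that the law takes for granted but that we do not know how to define in code.

def is_human(entity) -> bool:
    # What counts as a human being? The law assumes this concept;
    # the programmer would have to supply it.
    raise NotImplementedError("no practical definition of 'human being'")

def would_injure(action, entity) -> bool:
    # "Injure" spans physical, psychological, and long-term harm.
    # Again the concept is assumed, not given.
    raise NotImplementedError("no practical definition of 'injure'")

def allows_harm_by_inaction(entity) -> bool:
    # Inaction is harder still: it requires predicting the consequences
    # of everything the machine could have done but did not do.
    raise NotImplementedError("no practical definition of 'harm by inaction'")

def first_law_permits(action, affected_entities) -> bool:
    """A robot may not injure a human being or, through inaction,
    allow a human being to come to harm."""
    for entity in affected_entities:
        if is_human(entity) and would_injure(action, entity):
            return False
        if is_human(entity) and allows_harm_by_inaction(entity):
            return False
    return True
```

The control flow is trivial; every hard problem has simply been pushed into the undefined predicates, which is exactly where the loopholes would live.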
This is not true of Asimov’s three laws alone: I don’t think it is practical to try to build an understanding of any sophisticated concepts into a machine. This is why I, and others, think we need to use emergent processes in which machines start off simple and learn by themselves through experience. An obvious reply is to ask why we cannot let a machine learn about the world until it has formed enough abstractions, and then program the three laws into it, making use of the understanding that is already present. That won’t work either, because by that time the machine will be so complicated that we won’t know how to change it to put the three laws in. We won’t know, for example, how its concept of “human” is represented.
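To illustrate that last point, here is a minimal, hypothetical sketch: a tiny network trained on made-up data to tell “human present” from “not present.” Nothing in it comes from a real system; it only shows that whatever the machine learns ends up smeared across thousands of numbers, with no single place where the concept of “human” sits and nowhere obvious to wire a law in afterwards.

```python
# Illustrative sketch only: a tiny learned classifier on synthetic data,
# to show that a learned concept has no single locatable representation.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these 64-dimensional vectors are sensor readings of scenes,
# labelled 1 if a "human" is present and 0 otherwise (entirely synthetic).
X = rng.normal(size=(500, 64))
true_direction = rng.normal(size=64)
y = (X @ true_direction > 0).astype(float)

# One hidden layer, trained by plain gradient descent on squared error.
W1 = rng.normal(scale=0.1, size=(64, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))
for _ in range(200):
    h = np.tanh(X @ W1)                 # hidden activations
    err = h @ W2 - y[:, None]           # prediction error
    grad_W2 = h.T @ err / len(X)
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
    W1 -= 0.5 * grad_W1
    W2 -= 0.5 * grad_W2

# The classifier now "knows" something about the human/non-human distinction,
# but that knowledge is spread across 64*32 + 32 = 2,080 weights. There is
# no single weight we could edit to add "and never harm a human" afterwards.
print(W1.shape, W2.shape)   # (64, 32) (32, 1)
```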