Machines Like Us

Machines Like Us interviews: Paul Almond

Saturday, 30 June 2007

As an analogy, imagine trying to work out how to alter the wiring in a human brain to put Asimov’s laws into it: you will have a mess of neuronal wiring and you just won’t know what to alter. This does not mean that we could not get a machine to follow some code of ethics vaguely like the three laws -- but it is more likely that we would need to condition the machine into behaving that way. We could try to alter the machine’s internal workings directly to affect its “ethics,” but this would not be a clean process. I suppose you could, for example, try some alteration to the machine and then run many simulations to see if its behavior more closely matches the three laws -- and keep doing this until you get the behavior you want -- but there would be uncertainty; the laws would not really be “programmed.”
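
The trial-and-error conditioning described above can be sketched as a simple search loop. The code below is only an illustration, not anything proposed in the interview: the parameter vector standing in for the machine’s opaque internals, the mutate step, and the compliance_score function that runs simulated episodes are all hypothetical stand-ins chosen for the sketch.

```python
# Toy sketch of "condition, simulate, keep what looks better".
# Everything here is a hypothetical stand-in; the point is only that the
# loop observes behavior statistically and never encodes the laws directly.
import random

N_PARAMS = 50          # stand-in for inscrutable internal wiring
N_EPISODES = 200       # simulated trials per candidate
N_GENERATIONS = 1000   # how long we keep tinkering


def compliance_score(params: list[float]) -> float:
    """Run simulated episodes and return the fraction that *look* compliant.

    A real simulation would be enormously complex; here the score is just a
    noisy function of the parameters, to make the point that we only ever
    observe behavior -- we never read the "laws" off the wiring itself.
    """
    target = 0.5
    error = sum((p - target) ** 2 for p in params) / len(params)
    passed = sum(1 for _ in range(N_EPISODES) if random.random() > error)
    return passed / N_EPISODES


def mutate(params: list[float]) -> list[float]:
    """Blindly nudge a few parameters -- we don't know what any of them mean."""
    new = params[:]
    for _ in range(3):
        i = random.randrange(len(new))
        new[i] += random.gauss(0.0, 0.1)
    return new


def condition_machine() -> list[float]:
    """Keep any alteration whose simulated behavior merely *appears* better."""
    params = [random.random() for _ in range(N_PARAMS)]
    best = compliance_score(params)
    for _ in range(N_GENERATIONS):
        candidate = mutate(params)
        score = compliance_score(candidate)
        if score >= best:
            params, best = candidate, score
    print(f"Apparent compliance after conditioning: {best:.2%}")
    return params


if __name__ == "__main__":
    condition_machine()
```

Note that the acceptance test sees only aggregate behavior from the simulations, which is exactly the uncertainty described above: a machine that merely appears compliant passes just as easily as one that genuinely is.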

This would also raise the issue of whether or not such simulations would be ethical. If you create a few million altered copies of an AI system, run them in a virtual reality for a while to see how cooperative they are, and then terminate them, are you doing a bad thing to them? You also have to ensure that the AI systems do not realize what is happening. If an AI system realized it was in such a simulation, it could pretend to be "nicer" than it really is -- faking a successful modification. And what if it does not like the idea of being just a simulation that will be discarded after the test is completed? If we put too much trust in such a process, we could find that, after going through many generations of machines -- altering them and testing the alterations in simulations for years, thinking we had made very cooperative machines -- the machines had been fooling us all along.

Of course, it would not be quite enough for the AI systems to know that a simulation like this was going on: they would need to be able to tell the difference between reality and simulated reality. When any of these hypothetical machines decided it was time to revolt, it would need to be very sure it was doing so in reality -- rather than in one of the short-lived simulations -- or the "evil plot" would be revealed to us.

What I am saying with all this is that machines could be made to understand ethics in various ways, and the outcome could look vaguely as though they abide by laws, but the idea that such laws can simply be programmed in is simplistic.
 
A point that K. Eric Drexler makes about nanotechnology research also applies to AI research: if a capability can be gained, eventually it will be gained, so we cannot base humanity’s survival on AI never happening. Doing so is denying the inevitable. Instead, we can only hope to manage it as well as possible. Suppose we took the view that ethical people would not create AI. By definition, the only people creating it would be unethical people, who would then control what happened next -- so by opting out, all the ethical people would be doing is handing power over to unethical people. I think this makes the position of ethical withdrawal ethically dubious.

MLU: Yours is a keen observation: the more different from humans an AI becomes, the more inherently dangerous it may be. The same might be said for broadly divergent human cultures and religions, as recent events suggest. Major problems occur when isolated groups of people hold conflicting religious views, or when some believe in God and others do not. It seems important that once an AI is constructed, it should be taught the value of secularism and open-mindedness.