Machines Like Us

Can machines be programmed to make moral judgments?

Thursday, 29 November 2012
by Mano Singham

Gary Marcus predicts that in a few decades we may all not only have the option of traveling in driverless cars but may even be obligated to do so.

Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.

Of course, that raises the interesting question of who is responsible and liable if there is an accident.

But an even more difficult problem will have to be addressed: automated cars bring with them the need to program ethical decision-making into machines.

Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

Moral and ethical decision-making is already fraught with seemingly insurmountable hurdles even for humans. Recall the experiments with runaway trolleys and the like, in which people are confronted with choices where each option is hard to justify rationally. How much harder will it be to write computer programs to automate such decisions?
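To see how quickly such a program runs into trouble, here is a minimal sketch in Python of the kind of comparison the car's software would have to make in Marcus's bridge scenario. The function names, probabilities, and harm weights are all hypothetical, invented purely for illustration; the point is that even a toy version forces moral choices into the code.

```python
# Toy illustration of the swerve-or-continue dilemma described above.
# Every number and weight here is invented: a real system would have to
# estimate probabilities from noisy sensor data within milliseconds,
# and someone would have to decide in advance how lives are counted.

def expected_harm(crash_probability: float, people_at_risk: int) -> float:
    """Crude utilitarian score: chance of a crash times people exposed to it."""
    return crash_probability * people_at_risk

def choose_action(p_swerve_crash: float, occupants: int,
                  p_continue_crash: float, bus_riders: int) -> str:
    """Pick whichever action minimizes expected harm.

    The 'minimize expected harm' rule is itself a moral commitment
    baked into the code; a different programmer might weigh the
    owner's life more heavily, or forbid swerving altogether.
    """
    if expected_harm(p_swerve_crash, occupants) < expected_harm(p_continue_crash, bus_riders):
        return "swerve"
    return "continue"

# Hypothetical figures for the bridge scenario: swerving endangers the
# single occupant, continuing endangers the forty children on the bus.
print(choose_action(p_swerve_crash=0.5, occupants=1,
                    p_continue_crash=0.3, bus_riders=40))  # prints "swerve"
```

Change a single number, or decide that the occupant's life counts double, and the answer flips; that is exactly the kind of judgment someone would have to settle in advance, in code.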

As Marcus points out, Isaac Asimov took a shot at drafting a set of rules for robots, but any set of rules, however good it may look on paper, has the potential to turn into a disaster if applied unthinkingly in every situation.