
Machines Like Us

Artificial intelligence and moral dilemmas

Saturday, 30 June 2012

Don Brandes writes, "It is estimated that by 2020 a $1,000 dollar computer will have the processing power to match the human brain. By 2030 the average personal computer will have the processing power of a thousand human brains." ["Moral Dilemmas of Artificial Intelligence"] This projected processing power raises the question of whether computers will ever achieve sentience. Richard Barry states, "It is an enormous question that touches religion, politics and law, but little consideration is given to [the] dawn of a new intelligent species and to the rights an autonomous sentient being could [be entitled to]. For a start, it would have to convince us that it was truly sentient: intelligent and able to feel (although it is debateable whether its feelings would mirror our own)." ["Sentience: The next moral dilemma," ZD Net UK, 24 January 2001]

Not everyone believes that computers will become sentient. In an earlier post about the history of artificial intelligence, I cited a bit-tech article which noted that Professor Noel Sharkey believes "the greatest danger posed by AI is its lack of sentience rather than the presence of it. As warfare, policing and healthcare become increasingly automated and computer-powered, their lack of emotion and empathy could create significant problems." ["The story of artificial intelligence," 19 March 2012] The point is that, whether or not computers achieve sentience, moral dilemmas will arise over how we apply artificial intelligence in the years ahead.

A recent article in The Economist asserts, "As robots grow more autonomous, society needs to develop rules to manage them." ["Morals and the machine," 2 June 2012] How to give "thinking machines" a moral grounding has been a concern since the inception of artificial intelligence. The article opens with a well-known AI computer -- HAL:

"In the classic science-fiction film '2001', the ship’s computer, HAL, faces a dilemma. His instructions require him both to fulfil the ship's mission (investigating an artefact near Jupiter) and to keep the mission's true purpose secret from the ship's crew. To resolve the contradiction, he tries to kill the crew. As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was."
