If and when human-level AI is created, I see no reason to suppose it will romp ahead of us and get rapidly smarter. I think that's based on a false understanding of intelligence, or at least of learning. And the dire warnings about being usurped and enslaved by machines are nothing but sensationalist nonsense. Intelligence is nothing to be feared; the smarter we humans become, the more accepting and sensitive we are towards other people, races and species. Conquering and enslaving is what stupid people do.
MLU: Gloves off, Steve: What's wrong with ALife research today? What can (or should) be done to make faster, more effective progress in the field?
SG: Gloves off? Ah, yes, I was being much too reserved, I can tell... ;-)
Artificial Life, as a science, is pretty much moribund. Several generations of received wisdom and grant-chasing bandwagons have made it too fragmented and stultified. Maybe biology will mop up the remains, now that chemical synthesis can do what we used to have to simulate with computers. Despite a few valiant attempts, I don't think the field sufficiently embraced the concept that I have always used as a mantra: there is no such thing as half an organism. Life is a property of organisation -- a systems-level concept -- so trying to reduce the problem into its component parts without reassembling them into complete systems misses the point. The whole is always greater than the sum of its parts. I think we need less reductionist science and more practical engineering attempts to create complete artificial organisms, both virtual and physical.
As for Artificial Intelligence, it would help enormously if we admitted to ourselves that we don't have a clue how to do it. For a start, it would be useful if we made a stronger distinction between "hard" and "soft" AI. Soft AI seeks to automate tasks that humans use intelligence to do, which is laudable but doesn't actually require the machines to be intelligent (for instance, I need intelligence to do arithmetic; a pocket calculator can do arithmetic too, but you wouldn't call it intelligent). This is very misleading when it's confused, as it so often is in both the public and the academic mind, with the attempt to create genuinely intelligent artifacts. At the moment I don't believe we know how to do the latter beyond a trivial level.
The answer lies not in computer science but in neuroscience, since the brain is the only example of a fully working intelligent machine that we have. But we don't know how that works either. I predict that the solutions to the problems of AI will come from computational neuroscience, but we need some changes to the prevailing paradigm before that is likely to happen. People who study the brain need to stop burying their heads in the sand about observations that ought to invalidate their models. I don't have space to give examples, but it's easy to make everyday observations about the brain that completely fly in the face of most existing theories. Again, I think there's an urgent need to take a holistic approach. Too many people work on memory, associative learning, action selection, visual perception or some other subcomponent for years, without realizing that their part of the story makes no sense in relation to the whole. Neuroscience in general tends to get bogged down in the details, and I think that computational neuroscience ought to be to neuroscience what Artificial Life was to biology: an attempt to abstract the principles from the detail, without losing sight of any awkward truths.
What a strange coincidence -- this is exactly what I'm trying to do myself! :-)