PA: I simply hope that my own work will create systems as smart as I can make them. AI research in general will ultimately create things that are more intelligent than humans. Some people say that you cannot validly say this -- because AI systems would be a “different” form of intelligence and we cannot make such comparisons. This is not true. We should be able to imagine some computer so intelligent that, even if it is “different,” its own view of the world could easily encompass an understanding of human behavior rich enough that it could predict -- and maybe mimic -- humans as well as any human does.
If you show an AI system lots of videos of human beings and place it in a situation where it has some motive to impersonate humans, then, if it is intelligent enough, it should be able to do so, regardless of whether it possesses “human psychology” itself. If it can’t do it, then we should be able to construct a more intelligent machine that can.
Once a machine can flawlessly pretend to be like us, it would be impossible to claim that it is merely “different” -- rather than as intelligent as, or more intelligent than, we are. This would hold true especially if the machine is able to debate the matter with us -- while simultaneously performing other tasks, such as statistically modeling what we may do in 3 million different situations, and writing 3,000 novels to be sold to humans.
We, however, may use the same technology to increase our own thinking abilities, and for a really advanced civilization it may not be meaningful to distinguish between “individuals” in that society and its technology. For example, in some future civilization it may be impossible to distinguish between imagining something and programming a simulation: what we think of as “programming” may become a special case of the society’s thought process. Once technology is advanced enough to make very fast computers, I cannot see any limit to it.
Read more Machines Like Us interviews here.