MLU: An autonomous system is, by definition, under its own control, which carries its own risks. Tell us a little about designing Novamente's psychology of empathy.
BG: It's based on simulation. A Novamente system builds a little internal simulation of each agent it interacts with. So it empathizes with you because it has a little subsystem inside itself that tries to BE you. This is, in broad strokes, how human empathy works too. But I think a machine can ultimately be more empathic than a human, because it won't make as many stupid or emotionally-biased cognitive errors in assessing what it's like to be someone else. A machine, being smarter and less biased, can actually better put itself in someone else's shoes, and thus be more empathic. But it has to be wired to want to put itself in others' shoes -- which Novamente is.
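The simulation-based approach described above can be sketched in a few lines of Python. The class names (`AgentModel`, `EmpathicAgent`) and the frequency-based "prediction" are illustrative assumptions for the sake of a runnable example, not Novamente's actual mechanism -- a real system would run a much richer internal simulation.

```python
class AgentModel:
    """Internal simulation of another agent: records observed actions
    and predicts the agent's likely next action by frequency.
    (A toy stand-in for a genuine cognitive simulation.)"""

    def __init__(self, name):
        self.name = name
        self.observed_actions = []

    def observe(self, action):
        self.observed_actions.append(action)

    def predict_next_action(self):
        # Naive heuristic: assume the other agent tends to repeat
        # its most frequent past behavior.
        if not self.observed_actions:
            return None
        return max(set(self.observed_actions), key=self.observed_actions.count)


class EmpathicAgent:
    """Maintains one internal AgentModel per agent it interacts with,
    so it can 'put itself in their shoes' before acting."""

    def __init__(self):
        self.models = {}

    def interact(self, other_name, action):
        # Update (or create) the internal simulation of that agent.
        self.models.setdefault(other_name, AgentModel(other_name)).observe(action)

    def empathize(self, other_name):
        model = self.models.get(other_name)
        return model.predict_next_action() if model else None
```

For instance, after observing "bob" greet twice and complain once, `empathize("bob")` would return `"greet"` -- the system's best guess at what it would do if it were bob.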
MLU: How does Novamente differ from other AI research projects, and why do you think yours will succeed when others fail?
BG: Basically, we got the AGI design right, and I don't know of anyone else who has. The design is based on a sound, coherent philosophy of mind; it's computationally scalable; and it's well engineered by a great team of programmers. And the methodology of teaching the system by embodying it in virtual worlds makes an awful lot of sense.
MLU: Novamente is a commercial as well as a research venture. How are you paying the bills? What are your future business goals?
BG: From 2001 through 2006 we paid the bills by doing an unholy variety of software consulting gigs in domains like data mining, bioinformatics, natural language processing, computational finance, and so on.
In early 2007 we shifted gears and decided to focus single-mindedly on the virtual agents domain -- for two reasons. One, it is more harmonious with our long-term AGI goals than the other business areas in which we were doing consulting. Two, purely from a business perspective, it's common sense that focusing on a narrower vertical market niche is a better way for a small software firm to make money.
MLU: You are also Director of Research for the Singularity Institute. Why are you associated with the Institute, and what are its research program goals?
BG: Novamente is narrowly focused on creating a thinking machine and rolling it out in a series of exciting products in virtual worlds.
The Novamente team cares about ethics and thinks about it, but we're still first and foremost concerned with making the AGI.
One of the things I think is critical about SIAI is that it focuses a lot of attention on the broader ethical issues -- on how to maximize the odds that AGIs, once they're created, are positive and beneficial forces. This is a complex issue with many aspects, including scientific and sociopolitical ones.