BG: As human beings carrying out our lives in the everyday world, we use a certain vocabulary for describing and thinking about ourselves. Many of the concepts involved in this vocabulary are what philosophers call "folk psychology" -- i.e., concepts without any rigorous grounding in reality. Examples are free will, consciousness and self. The everyday interpretations of these terms are just full of contradictions and confusions. These concepts, in their standard forms, are not at all useful to the AGI designer -- in fact they're damaging and distracting. These concepts can be refined into useful concepts, but it takes a lot of work. In the Novamente design there is something called "attentional focus," which is related to consciousness; there is agentive causal inference, which is related to what humans do when they ascribe will to themselves or others; and there is a notion of a psychosocial self as a pattern a system recognizes in its own behavior. But these rigorous concepts we use in Novamente theory are very different from the cruder, less coherent concepts used in everyday discourse.
On a more personal level, I think humans have a lot of deep-seated illusions about their own lives that are rooted in taking these folk psychology concepts too seriously. The notion of free will is one of the most absurd and dangerous ones. The idea that there is some "me" in my head somehow "deciding" stuff is quite absurd, yet it's how all of us feel intuitively sometimes -- due to what combination of innate neural wiring and cultural conditioning, no one is quite certain. These sorts of cognitive illusions are something I strive to overcome in my own life, just as I strive to overcome the basic errors of probabilistic reasoning that have been identified by psychologists working in the area of heuristics and biases.
The human brain is a wonderful machine, but it has a lot of problems -- it often gets probabilities badly wrong even when it has adequate information, and it often uses a badly false model of itself, including largely bogus concepts like "free will." This is one of the reasons I don't think AGI researchers should strive to precisely emulate the human brain. Believe me, we can do better! Human brain emulation is important and interesting, because there is a lot to learn from the brain, and because a lot of us humans would like to see ourselves emulated for personal and aesthetic reasons. But I believe we can make AGIs with much more intelligence than humans, and much greater ethicality and reliability as well.
MLU: As I understand it, your approach to AI centers around writing algorithms that will eventually control embodied agents in rich virtual worlds such as Second Life, where they will be constrained by physical laws and can interact with real people in a wide variety of situations. You hope to begin with primitive, infant-like agents -- limited but flexible autonomous exploratory systems -- that will learn over time and grow to achieve human-level intelligence, and more. Please tell us more about this aspect of Novamente. How far along are you with this project?
BG: I often say there are four key aspects to creating a human-level AGI:
- Cognitive architecture (the overall design of an AGI system: what parts does it have, how do they connect to each other)
- Knowledge representation (how does the system internally store declarative, procedural and episodic knowledge; and how does it create its own representations for knowledge of these sorts in new domains it encounters)
- Learning (how does it learn new knowledge of the types mentioned above; and how does it learn how to learn, and so on)
- Teaching methodology (how is it coupled with other systems so as to enable it to gain new knowledge about itself, the world and others)
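To make the relationship between these four aspects concrete, here is a deliberately tiny sketch in Python. This is not Novamente's actual design -- every class and function name below is invented for illustration -- but it shows how an architecture ties together the three kinds of knowledge stores, a learning loop that fills them, and an external teacher that drives the learning:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeStore:
    """Knowledge representation: the three kinds of knowledge named above."""
    declarative: dict = field(default_factory=dict)   # facts: name -> value
    procedural: dict = field(default_factory=dict)    # skills: name -> callable
    episodic: list = field(default_factory=list)      # ordered log of experiences

class ToyAgent:
    """Cognitive architecture: one knowledge store plus a learning interface."""
    def __init__(self):
        self.knowledge = KnowledgeStore()

    # Learning: route each new piece of knowledge to the right store,
    # recording the event episodically as well.
    def learn_fact(self, name, value):
        self.knowledge.declarative[name] = value
        self.knowledge.episodic.append(("learned_fact", name))

    def learn_skill(self, name, fn):
        self.knowledge.procedural[name] = fn
        self.knowledge.episodic.append(("learned_skill", name))

    def act(self, skill, *args):
        self.knowledge.episodic.append(("acted", skill))
        return self.knowledge.procedural[skill](*args)

# Teaching methodology: an external teacher coupled to the agent,
# feeding it knowledge it could not derive on its own.
def teach(agent):
    agent.learn_fact("sky_color", "blue")
    agent.learn_skill("double", lambda x: 2 * x)

agent = ToyAgent()
teach(agent)
print(agent.act("double", 21))   # -> 42
```

The point of the sketch is only the division of labor: the store is the representation, the agent's methods are the architecture and learning, and `teach` stands in for the human-in-the-loop teaching methodology.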