MLU: I certainly hope so. Wouldn't it be interesting if a sentient machine were constructed in such a manner that its consciousness actually did cause behavior? If that were accomplished, how might its behavior differ from ours, and what advantage (if any) might it have?
SP: That would depend entirely on how you chose to build the machine, I suppose. You could, if you really wanted to, construct something in which the visual-type 3-D electromagnetic field evoked by an orange was hooked up to cause the firing of a machine gun or something -- the ultimate clockwork orange! But why would you want to?
When it comes to advantages, I'm a great believer in the efficiency of evolution. We humans have evolved so that the only time we're conscious of every little movement making up a bodily action is when we're learning how to perform that action. At that stage, the action proceeds very slowly and jerkily. But as soon as we learn how to do it, we stop being conscious of the details and the action flows much more quickly and smoothly. So in that sense, you could say that consciousness is actually disadvantageous. Except that it does in some way seem to be necessary for learning and the handling of novelty.
MLU: I'm sure you are aware of the work of Giulio Tononi and his integrated information theory of consciousness (not to be confused with an electromagnetic field theory). Essentially, he proposes that the amount of consciousness an entity has corresponds to the amount of integrated information it contains, and that this information is innately bound together within one's mind (i.e., the color of an orange can't be separated from its shape). The more information-processing "horsepower" an entity has, the more "conscious" it becomes.
Similarly, Brian Pollard has built imaging equipment capable of constructing 3-D movies of changes in the brain as it slips into unconsciousness under anesthesia.
Both cases support the notion that consciousness is not an all-or-nothing state, but rather like a "dimmer switch," which might be dialed up or down -- and even measured.
Although the researchers in these examples do not approach the mystery of consciousness from the same angle you do -- electromagnetic fields -- what is your reaction to their work and the conclusions they reach?
SP: Whew, that's a very multi-part question. OK, one part at a time.
First of all, with genuinely the greatest of respect to Tononi, who did some truly kick-ass experimental work in Edelman's institute, the "integrated information" theory seems to me nothing more than a bit of good old, mom-'n'-apple-pie hand-waving. Who could disagree with the idea that the more information processing a brain is capable of, the more consciousness it's capable of generating? But the question is, where does this get us? It doesn't tell us anything about how the brain generates consciousness, what kind of information processing produces consciousness and what kind doesn't, let alone what consciousness actually is. I mean, even the ancient PC I'm typing this response on is capable of enormous amounts of integrated information processing, but nobody thinks it's conscious. As for the specific suggestion that the colour of an orange can't be separated from its shape -- well of course it can. Damage the colour area of the visual cortex and the orange still has shape but no longer has any colour.