Since the dawn of the personal computer, engineers have been striving to enhance their machines’ ability to interact with the world around them.
The first task was to replicate the sights and sounds around us. What started as cartoonish 8-color images and clicking noises has been refined over the years to the point where we can now type in any address in the United States and get an interactive, 360-degree view of that location using Google Maps. 3-D printers let us conjure tangible objects out of plastic in minutes at the touch of a button. 3-D TVs can trick my brain into thinking a sword is being hurled at me. It’s all pretty incredible.
But teaching a computer to analyze and understand the real world — a.k.a. Artificial Intelligence — has proven much more difficult. A solid 10 years after I first tried dictating a social studies paper to my computer, I still groan when an automated customer service line asks me, in Robot-speak, to please state my problem. I know “she” is going to screw it up.