Ever since the computer scientist John McCarthy coined the term artificial intelligence in 1955, the field has gone through cycles of boundless optimism and sobering disillusion.
Yet until recently, the supercomputer was the presumed home of machine intelligence — in science fiction (HAL, in Stanley Kubrick’s “2001: A Space Odyssey”) and in reality (Watson, IBM’s “Jeopardy!” champ).
But three forces have upended that assumption in the last few years: the surge in data of all kinds, rapid progress in software that finds patterns and insights in that data, and advances in the technology of data processing, storage and communication.
Now, computing intelligence can be dispersed globally, marshaled and aggregated as necessary, from far-flung data centers in the digital cloud.
Google led the way, showing the power of data-driven artificial intelligence delivered over the cloud, not only in search, but also in such tasks as language translation and computer vision.
Artificial intelligence run through the cloud is now the dominant approach used by researchers at technology companies, universities and government labs.
“We’re seeing a rebirth of artificial intelligence driven by the cloud, huge amounts of data and the learning algorithms of software,” said Larry Smarr, founding director of the California Institute for Telecommunications and Information Technology.
The emerging global network, Smarr said, will be the equivalent of a “planetary computer.”
What might that mean, in terms of its practical effect on everyday life?
Smarr points to the recent movie “Her” as a fairly accurate glimpse of what will be possible in the not-too-distant future. The protagonist, Theodore Twombly (played by Joaquin Phoenix), has clever software on his smartphone that seems to know all about him.
It has read his email, his text messages and the books, magazines and everything else he has read. It has seen all the movies he has seen. It knows his buying habits and preferences. It retrieves information and answers at his whim. It communicates with him by talking conversationally (in the voice of Scarlett Johansson).
“That’s where we’re headed,” Smarr said. “That kind of hyper-personalized assistance is going to be common in 10 years. It will appear to be on your smartphone or Google Glass, but it will actually be in the cloud.”
Some predict that we are headed much further. Ray Kurzweil, an inventor, scientist and futurist, joined Google in 2012 to work on an artificial-intelligence project known internally as Google Brain.
Kurzweil has embraced a concept called “the singularity,” the point at which computing intelligence surpasses human intelligence — not just in isolated pursuits like chess or “Jeopardy!,” but across the board, leaving human intelligence in the dust.
Kurzweil wrote a 2005 book on the subject, “The Singularity Is Near,” and welcomes the prospect, asserting that supersmart digital intelligence will enrich human lives.
Others are skeptical, both that it will happen and that it will be a good thing if it does.
But in any case, the singularity is some ways off. Kurzweil puts it at 2045.
Jeff Dean, a research fellow at Google, focuses on accelerating the progress of artificial intelligence in such tasks as computer vision and understanding the meaning of words.
Until a few years ago, for example, Google image searches were executed mainly by identifying the text labels affixed to pictures.
Today, many images are identified by software analyzing the patterns of digital pixels in a picture or video. And, Dean said, the technology can pick out a leopard in a picture and know it is not a lion or a cheetah, recognizing the distinctive pixel patterns of various big cats.
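To make that concrete, here is a minimal sketch of the kind of pixel-pattern classification Dean describes. It is not Google’s system, which is proprietary; it uses the open-source PyTorch library, a small network pretrained on the public ImageNet photo collection, and a hypothetical file name, leopard.jpg:

    # A toy illustration of pixel-pattern image classification: a network
    # pretrained on ImageNet scores an image against 1,000 categories,
    # several of which are big cats.
    import torch
    from PIL import Image
    from torchvision import models, transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                             std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(pretrained=True)  # weights learned from labeled photos
    model.eval()

    image = preprocess(Image.open("leopard.jpg")).unsqueeze(0)  # hypothetical file
    with torch.no_grad():
        scores = model(image)[0]

    # In the standard ImageNet-1k mapping, index 288 is leopard, 291 lion, 293 cheetah.
    for idx in (288, 291, 293):
        print(idx, float(scores[idx]))
    print("top class index:", int(scores.argmax()))

The network is never given rules about spots or manes; it has only seen labeled example photos, which is why the big-cat distinctions Dean mentions emerge from the pixel patterns themselves.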
Mobilizing the firepower of Google’s large cloud data centers, Dean said, enables his team to “bring a lot of computation to bear on these kinds of problems.”
Words and meaning
Understanding not just words, but also their context and meaning is another big challenge. Current search technology does a good job of responding helpfully to a few words, either typed or spoken.
So, Dean noted, when planning a trip to Italy, search engines do well with “train from Rome to Florence” or “hotel in Florence.”
The ideal, Dean explained, would be to tell Google that you want to plan a two-week vacation in Italy and have the smart technology start working on the trip.
The options it offers would be based on its ability to understand both Italy and the traveler, who, say, has volunteered personal information: maybe you have two young children, want to stay in the Tuscan countryside and like to hike.
His team’s advanced artificial-intelligence research, known as deep learning, is “loosely inspired by knowledge of how the brain works,” Dean said.
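As a rough illustration of what “loosely inspired by the brain” means in code, here is a deliberately tiny network, nothing like the scale Dean’s team works at, written in Python with the NumPy library. Two layers of simple units repeatedly adjust their connection weights until the network learns a pattern, here the XOR function, from examples:

    # A deliberately tiny neural network: two layers of simple units whose
    # connection weights are nudged, pass after pass, until the outputs
    # match the XOR pattern. Deep-learning systems use many more layers
    # and millions of weights, but the principle is the same.
    import numpy as np

    rng = np.random.default_rng(0)
    # Inputs with a constant 1 appended so each hidden unit gets a bias weight.
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR of the first two columns

    W1 = rng.normal(size=(3, 8))  # input-to-hidden weights
    W2 = rng.normal(size=(8, 1))  # hidden-to-output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        hidden = sigmoid(X @ W1)   # forward pass
        out = sigmoid(hidden @ W2)
        err = out - y              # how wrong the network still is
        # backward pass: adjust each weight against its share of the error
        grad_out = err * out * (1 - out)
        grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ grad_out
        W1 -= 0.5 * X.T @ grad_hidden

    print(out.round(3))  # approaches [[0], [1], [1], [0]]

Scaled up by many orders of magnitude, and trained on pixels or words instead of toy bits, this same adjust-by-error loop is the basic mechanism behind the deep-learning systems described here.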
But there are things the human brain does that silicon-based computing still only aspires to.
The brain, Dean noted, is amazingly flexible and efficient, firing up and shutting down memory systems, so that the part of your brain that holds information on English literature or taking out the trash shuts down when you look at a picture of a leopard.
“We don’t have a great handle on how to build those kinds of dynamically evolving memory systems,” Dean said. “Google and others are working on that, but it’s really nascent.”