TROY, N.Y. — Among the handiest villains in science fiction are Computers That Know Too Much. Think of the dream-weaving despots of “The Matrix” or murderous HAL in “2001: A Space Odyssey.”
But in reality, even the most super supercomputer lacks the reasoning capacity of a child engrossed in a Dr. Seuss book. Computers can’t read the way we do. They can’t learn or reason like us. Narrowing that cognitive gap between humans and machines — creating a computer that can read and learn at a sophisticated level — is a big goal of artificial-intelligence researchers.
The Pentagon’s Defense Advanced Research Projects Agency, or DARPA, granted a contract worth at least $400,000 last fall to two Rensselaer Polytechnic Institute (RPI) professors who are trying to build a machine that can learn by reading.
The academics hope to create a machine that can read sections of textbooks and answer questions based on the material. Down the road, professor Selmer Bringsjord believes such artificial intelligence, or AI, machines might be able to read military plans or manuals and adjust on the fly in the heat of battle.
“We have such a complex military now, it’s so high-tech, we need AI to help us,” said Bringsjord, director of RPI’s Artificial Intelligence and Reasoning lab. “There’s no going back.”
AI is ingrained in our lives, from programs used by banks in evaluating potential borrowers’ credit ratings to software that suggests corrected spellings for unrecognized words to programs that mine databases seeking nonobvious relationships.
But reading is difficult for machines. Sentences must be converted into formal logic equations or other computer-friendly formats. Computers can do this on a modest scale. What has proved more elusive, however, is software that can make heads or tails of the verbal thicket contained in sentences like this one.
“Natural language is very ambiguous,” said Boris Katz of Massachusetts Institute of Technology’s Artificial Intelligence Laboratory. “If you go beyond sentences like ‘John loves Mary,’ to something like a paragraph from The Wall Street Journal … there are some pretty complex phenomena in language that are pretty hard to represent.”
Bringsjord and fellow RPI professor Konstantine Arkoudas want to create algorithms — step-by-step computational procedures — that allow their “Poised-for-Learning” machine to convert sentences into formal logic. The next step would be an additional set of algorithms that let the machine use the information it takes in to figure things out. To reason, in other words.
For example, if the machine reads up on the planets, it should be poised to answer the question “What is the largest planet?” even if the text never explicitly states that Jupiter is.
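The idea can be illustrated with a toy sketch (this is not the RPI system; the facts, names, and representation here are invented for the example): sentences from a text are reduced to simple assertions, and the answer to “What is the largest planet?” is derived by comparing stored values rather than looked up as a literal sentence.

```python
# Toy sketch (illustrative only, not the Poised-for-Learning machine):
# facts extracted from a text are stored as (subject, relation, value)
# triples, and a question is answered by reasoning over them.

# Assertions a reading program might extract from an astronomy text.
facts = [
    ("Earth", "diameter_km", 12742),
    ("Mars", "diameter_km", 6779),
    ("Jupiter", "diameter_km", 139820),
    ("Saturn", "diameter_km", 116460),
]

def largest(relation):
    """Answer 'What is the largest X?' by comparing stored values."""
    candidates = [(subj, val) for subj, rel, val in facts if rel == relation]
    return max(candidates, key=lambda pair: pair[1])[0]

print(largest("diameter_km"))  # -> Jupiter
```

No sentence above states that Jupiter is the largest planet; the answer falls out of comparing the assertions, which is the kind of inference the researchers want to automate at far greater scale.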
The DARPA grant is for a year, with options to extend it into a three-year $1.2 million contract. Bringsjord hopes to have the Poised-for-Learning machine reading basic texts, such as algebra and astronomy, in three years.
DARPA spokeswoman Jan Walker said the grant is not tied to any particular Pentagon program but part of the agency’s larger interest in cognitive systems. Ronald Brachman, director of DARPA’s Information Processing Technology Office, has talked openly about the military’s “computer-permeated future.”
“In order to succeed, we’ll need systems that can remember where they’ve been and what they’ve seen and improve themselves over time,” Brachman said at a conference last year.
The connection between soldier and computer — battlefield laptops control backpack-size aerial surveillance drones and other computers let combat troops see the location of friendly units on digital displays — is expected to tighten in coming years.
A downside to heavy reliance on technology is that machines that might be asked to help make battle-related decisions can’t adjust to quickly changing conditions in the field, Bringsjord said.
Bringsjord envisions AI robots of the future taking in information in real time, by either reading or listening to spoken instructions. He said that once a machine has vacuumed up all the relevant cultural, historical and geographical data about an area, an officer could say, “Here’s the current situation in Fallujah. Go scout it out.”
It’s not that far-fetched. Machines can be considered cognitive, depending on your definition of the word.
In Austin, Texas, Cycorp has been building a “knowledge base” called Cyc (pronounced “psych”) with the goal of becoming a repository of human knowledge that can make intelligent decisions.
Cycorp’s vice president of research, Michael Witbrock, said Cyc can reason based on the 2.5 million assertions in its system, such as inferring what sort of salary you’re likely to have based on your job.
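That style of inference — deriving a fact never asserted directly by chaining simple rules over a knowledge base — can be sketched in a few lines (a hedged illustration; Cyc's actual representation and rule engine are far richer, and the entities and rules here are invented):

```python
# Hedged sketch of rule-based inference over assertions (illustrative,
# not Cyc's representation): derive a fact that was never asserted.
assertions = {("Alice", "occupation"): "surgeon"}

# If an entity's premise attribute has the premise value, conclude
# the conclusion attribute has the conclusion value.
rules = [
    # (premise_attr, premise_value, conclusion_attr, conclusion_value)
    ("occupation", "surgeon", "income_bracket", "high"),
    ("occupation", "student", "income_bracket", "low"),
]

def infer(entity, attr):
    """Return a stored value, or one derivable through a rule."""
    if (entity, attr) in assertions:
        return assertions[(entity, attr)]
    for p_attr, p_val, c_attr, c_val in rules:
        if c_attr == attr and infer(entity, p_attr) == p_val:
            return c_val
    return None

print(infer("Alice", "income_bracket"))  # -> high
```

The knowledge base never says anything about Alice's income; the conclusion is inferred from her occupation, which is the kind of reasoning Witbrock describes, multiplied across millions of assertions.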
Machines exist that understand spoken words, recognize faces and make inferences based on experience, says Carnegie Mellon University computer-science professor Tom Mitchell.
But Mitchell, past president of the American Association for Artificial Intelligence, offers a big caveat: Even though researchers have made progress in different areas of cognition, there is still a big mystery about how the pieces go together.
In other words, worries about an all-knowing computer might be premature. Katz thinks a computer that can reason at the level of even a toddler is far off. “I’m still looking for that common sense these 3-year-olds have,” Katz said. “And we don’t have it yet.”