Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So (III)

In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: “Is deep learning approaching a wall?” He wrote, “As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear.”
If the reach of deep learning is limited, too much money and too many fine minds may now be devoted to it, said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence. “We run the risk of missing other important concepts and paths to advancing A.I.,” he said.
Amid the debate, some research groups, start-ups and computer scientists are showing more interest in approaches to artificial intelligence that address some of deep learning’s weaknesses. Among them is the Allen Institute, a nonprofit lab in Seattle, which announced in February that it would invest $125 million over the next three years largely in research to teach machines to generate common-sense knowledge — an initiative called Project Alexandria.
While that program and other efforts vary, their common goal is a broader and more flexible intelligence than deep learning alone can provide. They are also typically far less data hungry, often using deep learning as one ingredient among others in their recipe.
“We’re not anti-deep learning,” said Yejin Choi, a researcher at the Allen Institute and a computer scientist at the University of Washington. “We’re trying to raise the sights of A.I., not criticize tools.”
Those other, non-deep learning tools are often old techniques employed in new ways. At Kyndi, a Silicon Valley start-up, computer scientists are writing code in Prolog, a programming language that dates to the 1970s. It was designed for the reasoning and knowledge representation side of A.I., which processes facts and concepts, and tries to complete tasks that are not always well defined. Deep learning comes from the statistical side of A.I. known as machine learning.
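The kind of symbolic reasoning Prolog was built for can be sketched in a few lines: facts and rules are stated explicitly, and new conclusions follow by inference rather than by statistical pattern matching. The toy example below (written in Python, not Prolog, and entirely hypothetical — it is not Kyndi’s system) shows the flavor: from two stated facts and one rule, the program derives a fact that was never written down.

```python
# A toy illustration of symbolic inference in the Prolog tradition:
# explicit facts plus a rule yield a conclusion not stated directly.
# All names here are made up for illustration.

# Facts, as (subject, predicate, object) triples.
facts = {
    ("alloy_x", "tested_in", "laboratory"),
    ("laboratory", "is_a", "controlled_setting"),
}

def infer(facts):
    """Apply one rule: if X was tested in Y, and Y is a kind of Z,
    then conclude X was demonstrated in a Z."""
    derived = set(facts)
    for (x, p1, y) in facts:
        for (y2, p2, z) in facts:
            if p1 == "tested_in" and p2 == "is_a" and y == y2:
                derived.add((x, "demonstrated_in", z))
    return derived

# The conclusion appears even though no fact stated it outright.
print(("alloy_x", "demonstrated_in", "controlled_setting") in infer(facts))
# → True
```

The point of the sketch is the division of labor the article describes: the knowledge-representation side of A.I. works with facts and concepts stated in this explicit, rule-governed form, whereas deep learning extracts patterns statistically from large amounts of data.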
Benjamin Grosof, an A.I. researcher for three decades, joined Kyndi in May as its chief scientist. Mr. Grosof said he was impressed by Kyndi’s work on “new ways of bringing together the two branches of A.I.”
Kyndi has been able to use very little training data to automate the generation of facts, concepts and inferences, said Ryan Welsh, the start-up’s chief executive.
The Kyndi system, he said, can train on 10 to 30 scientific documents of 10 to 50 pages each. Once trained, Kyndi’s software can identify concepts and not just words.
In work for three large government agencies that it declined to disclose, Kyndi has been asking its system to answer this typical question: Has a technology been “demonstrated in a laboratory setting”? The Kyndi program, Mr. Welsh said, can accurately infer the answer, even when that phrase does not appear in a document.