The Most Powerful A.I. Systems Still Do Not Understand, Have No Common Sense, and Cannot Explain Their Decisions

(p. B1) David Ferrucci, who led the team that built IBM’s famed Watson computer, was elated when it beat the best-ever human “Jeopardy!” players in 2011, in a televised triumph for artificial intelligence.

But Dr. Ferrucci understood Watson’s limitations. The system could mine oceans of text, identify word patterns and predict likely answers at lightning speed. Yet the technology had no semblance of understanding, no human-style common sense, no path of reasoning to explain why it reached a decision.

Eleven years later, despite enormous advances, the most powerful A.I. systems still have those limitations.

. . .

(p. B7) The big, so-called deep learning programs have conquered tasks like image and speech recognition, and new versions can even pen speeches, write computer programs and have conversations.

They are also deeply flawed. They can generate biased or toxic screeds against women, minorities and others, or stumble on questions that any child could answer. (“Which is heavier, a toaster or a pencil? A pencil is heavier.”)

“The depth of the pattern matching is exceptional, but that’s what it is,” said Kristian Hammond, an A.I. researcher at Northwestern University. “It’s not reasoning.”

Elemental Cognition is trying to address that gap.

. . .

Eventually, Dr. Ferrucci and his team made progress with the technology. In the past few years, they have presented some of their hybrid techniques at conferences, and they now have demonstration projects and a couple of initial customers.

. . .

The Elemental Cognition technology is largely an automated system. But that system must be trained. For example, the rules and options for a global airline ticket are spelled out in many pages of documents, which are scanned.

Dr. Ferrucci and his team use machine learning algorithms to convert them into suggested statements in a form a computer can interpret. Those statements can be facts, concepts, rules or relationships: Qantas is an airline, for example. When a person says “go to” a city, that means add a flight to that city. If a traveler adds four more destinations, that adds a certain amount to the cost of the ticket.

In training the round-the-world ticket assistant, an airline expert reviews the computer-generated statements as a final check. The process eliminates most of the need for hand-coding knowledge into a computer, a crippling handicap of the old expert systems.
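The appeal of explicit facts and rules, as opposed to opaque pattern matching, is that the system can show its work. The following is only a toy sketch of that idea in Python; the class, the fare numbers, and the surcharge rule are all hypothetical illustrations, not Elemental Cognition's actual representation.

```python
# Toy sketch of an interpretable fact/rule representation for a
# round-the-world ticket assistant. All names and numbers here are
# invented for illustration, not taken from the article.

from dataclasses import dataclass, field

@dataclass
class Itinerary:
    airline: str                              # fact: "Qantas is an airline"
    destinations: list = field(default_factory=list)
    base_fare: float = 3000.0                 # hypothetical base price
    included_stops: int = 5                   # stops covered by the base fare
    surcharge_per_extra_stop: float = 250.0   # hypothetical rule parameter

    def go_to(self, city: str) -> None:
        # Rule: when a traveler says "go to" a city, add a flight there.
        self.destinations.append(city)

    def cost(self) -> float:
        # Rule: destinations beyond the included allowance add to the fare.
        extra = max(0, len(self.destinations) - self.included_stops)
        return self.base_fare + extra * self.surcharge_per_extra_stop

    def explain_cost(self) -> str:
        # Because the rules are explicit, the system can explain its answer,
        # unlike a pure pattern-matching model.
        extra = max(0, len(self.destinations) - self.included_stops)
        return (f"{len(self.destinations)} stops: base fare "
                f"{self.base_fare:.0f} + {extra} extra stop(s) "
                f"x {self.surcharge_per_extra_stop:.0f}")

trip = Itinerary(airline="Qantas")
for city in ["Sydney", "Singapore", "London", "New York", "Tokyo", "Honolulu"]:
    trip.go_to(city)
print(trip.explain_cost())
```

The point of the sketch is the `explain_cost` method: every dollar of the answer traces back to a named rule, which is the transparency property Dr. Ferrucci argues pure machine-learning systems lack.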

Dr. Ferrucci concedes that advanced machine learning — the dominant path pursued by the big tech companies and well-funded research centers — may one day overcome its shortcomings. But he is skeptical from an engineering perspective. Those systems, he said, are not made with the goals of transparency and generating rational decisions that can be explained.

“The big question is how do we design the A.I. that we want,” Dr. Ferrucci said. “To do that, I think we need to step out of the machine-learning box.”

For the full story, see:

Steve Lohr. “You Can Lead A.I. to Answers, but Can You Make It Think?” The New York Times (Monday, August 29, 2022): B1 & B7.

(Note: ellipses added.)

(Note: the online version of the story was updated Sept. 8, 2022, and has the title “One Man’s Dream of Fusing A.I. With Common Sense.”)
