A.I. Lacks Common Sense: “A Broad and Often Unspoken Understanding of How the World Works”

(p. A15) Journalists like to punctuate stories about the risks of artificial intelligence—particularly long-term, humanity-threatening risks—with images of the Terminator. The idea is that unchecked robots will rise up and kill us all.

. . .

Melanie Mitchell, a computer scientist at Portland State University, is in the too-soon-to-worry camp. “My own opinion is that too much attention has been given to the risks from superintelligent AI,” she writes in “Artificial Intelligence,” “and far too little to deep learning’s lack of reliability and transparency and its vulnerability to attacks.”

. . .

Object-recognition software, for instance, can track pedestrians, detect tumors and sort photo libraries. But it doesn’t understand the content the way we do. Its obtuseness becomes sharply apparent in so-called adversarial attacks, in which only minimal changes to an image (or a sound or text file) can fool an AI into misidentifying it. Such attacks even transfer to the real world. A stop sign with a few innocuous stickers becomes a speed-limit sign.
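The mechanism behind many such attacks is simple: nudge each input value a tiny amount in the direction that most increases the classifier’s error, so the total change is imperceptible but the prediction shifts. Below is a minimal NumPy sketch of that fast-gradient-sign idea applied to a toy linear classifier; the weights and dimensions are illustrative, not from any real vision system, but the same principle drives attacks on deep networks.

```python
import numpy as np

# Toy linear "classifier": score > 0 means class A, score <= 0 means class B.
# (Illustrative weights only; real attacks target deep networks, but the
# principle -- perturb the input along the loss gradient -- is identical.)
rng = np.random.default_rng(0)
w = rng.normal(size=100)      # classifier weights
x = rng.normal(size=100)      # an "image" flattened to a vector

score = w @ x                 # original classification score

# Fast-gradient-sign step: for a linear model, the gradient of the score
# with respect to the input is just w, so moving each input value a tiny
# amount eps against sign(w) lowers the score by exactly eps * sum(|w|),
# while no single value changes by more than eps.
eps = 0.25
x_adv = x - eps * np.sign(w)
adv_score = w @ x_adv

print(f"original score: {score:.2f}, adversarial score: {adv_score:.2f}")
```

Because the per-value change is capped at `eps`, the perturbed input looks essentially unchanged to a human, yet the score drops by `eps` times the L1 norm of the weights, which in high dimensions is often enough to flip the label.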

The researchers who first elucidated such vulnerabilities in neural networks—machine-learning programs inspired by the brain’s wiring—called them an “intriguing property.” Ms. Mitchell writes, “Calling this an ‘intriguing property’ of neural networks is a little like calling a hole in the hull of a fancy cruise liner a ‘thought-provoking facet’ of the ship.”

Ultimately, these systems lack common sense, a broad and often unspoken understanding of how the world works. Common sense, in turn, might require embodied experience in the world, plus the ability to abstract from it and form analogies. Much of Ms. Mitchell’s academic work concerns helping AI form analogies. It hasn’t progressed far. (No fault of hers.)

For the full review, see:

Matthew Hutson. “BOOKSHELF; Learn Like a Machine.” The Wall Street Journal (Wednesday, November 20, 2019): A15.

(Note: ellipses added.)

(Note: the online version of the review has the date November 19, 2019, and has the title “BOOKSHELF; ‘Human Compatible’ and ‘Artificial Intelligence’ Review: Learn Like a Machine.”)

The book under review is:

Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus and Giroux, 2019.
