AI Algorithms Use Massive Data to Do “Narrow Tasks”

(p. B2) A funny thing happens among engineers and researchers who build artificial intelligence once they attain a deep level of expertise in their field. Some of them—especially those who understand what actual, biological intelligences are capable of—conclude that there’s nothing “intelligent” about AI at all.

. . .

. . . the muddle that the term AI creates fuels a tech-industry drive to claim that every system involving the least bit of machine learning qualifies as AI, and is therefore potentially revolutionary. Calling these piles of complicated math with narrow and limited utility “intelligent” also contributes to wild claims that our “AI” will soon reach human-level intelligence. These claims can spur big rounds of investment and mislead the public and policy makers who must decide how to prepare national economies for new innovations.

. . .

The tendency for CEOs and researchers alike to say that their system “understands” a given input—whether it’s gigabytes of text, images or audio—or that it can “think” about those inputs, or that it has any intention at all, is an example of what Drew McDermott, a computer scientist at Yale, once called “wishful mnemonics.” That he coined this phrase in 1976 makes it no less applicable to the present day.

“I think AI is somewhat of a misnomer,” says Daron Acemoglu, an economist at the Massachusetts Institute of Technology whose research on AI’s economic impacts requires a precise definition of the term. What we now call AI doesn’t fulfill the early dreams of the field’s founders—either to create a system that can reason as a person does, or to create tools that can augment our abilities. “Instead, it uses massive amounts of data to turn very, very narrow tasks into prediction problems,” he says.

When AI researchers say that their algorithms are good at “narrow” tasks, what they mean is that, with enough data, it’s possible to “train” their algorithms to, say, identify a cat. But unlike a human toddler, these algorithms tend not to be very adaptable. For example, if they haven’t seen cats in unusual circumstances—say, swimming—they might not be able to identify them in that context. And training an algorithm to identify cats generally doesn’t also increase its ability to identify any other kind of animal or object. Identifying dogs means more or less starting from scratch.
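To make the “narrow task” point concrete, here is a minimal sketch of what “training an algorithm to identify a cat” amounts to when the task is cast as a prediction problem. It is not from the commentary: the data are synthetic feature vectors standing in for real images, the numbers are invented, and scikit-learn’s logistic regression is just one convenient stand-in for a classifier.

```python
# Illustrative sketch only: synthetic "images" as 64-dimensional feature vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented training data: "cat" features cluster around +0.5, "not cat" around -0.5.
cats = rng.normal(loc=0.5, scale=1.0, size=(n, 64))
not_cats = rng.normal(loc=-0.5, scale=1.0, size=(n, 64))
X = np.vstack([cats, not_cats])
y = np.array([1] * n + [0] * n)  # 1 = cat, 0 = not cat

# "Training" means fitting a prediction rule for this one narrow question.
cat_classifier = LogisticRegression(max_iter=1000).fit(X, y)

# On data like its training data, the rule works well.
familiar_cats = rng.normal(loc=0.5, scale=1.0, size=(5, 64))
print(cat_classifier.predict(familiar_cats))   # mostly 1 ("cat")

# "Swimming cats": cat features shifted outside the training distribution.
# The same rule now labels them "not cat".
swimming_cats = cats[:5] - 1.5
print(cat_classifier.predict(swimming_cats))   # mostly 0 ("not cat")

# And the model can only ever answer "cat or not cat". Identifying dogs would
# mean new labels and a new training run, e.g. (hypothetically):
# dog_classifier = LogisticRegression(max_iter=1000).fit(X_dogs, y_dogs)
```

Nothing in the cat classifier carries over to dogs; the second (commented-out) fit starts from scratch, which is the sense in which such systems are narrow.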

For the full commentary, see:

Christopher Mims. “AI’s Big Chill.” The Wall Street Journal (Sat., July 31, 2021): B2.

(Note: ellipses added.)

(Note: the online version of the commentary has the date July 30, 2021, and has the title “Artificial Intelligence’s Big Chill.” Clicking that title in the WSJ’s internal search results leads to an article page with a different title: “Why Artificial Intelligence Isn’t Intelligent.”)
