(p. B5) Yann LeCun helped give birth to today’s artificial-intelligence boom. But he thinks many experts are exaggerating its power and peril, and he wants people to know it.
. . .
On social media, in speeches and at debates, the college professor and Meta Platforms AI guru has sparred with the boosters and Cassandras who talk up generative AI’s superhuman potential, from Elon Musk to two of LeCun’s fellow pioneers, who share with him the unofficial title of “godfather” of the field. They include Geoffrey Hinton, a friend of nearly 40 years who on Tuesday was awarded a Nobel Prize in physics, and who has warned repeatedly about AI’s existential threats.
. . .
LeCun thinks AI is a powerful tool.
. . .
At the same time, he is convinced that today’s AIs aren’t, in any meaningful sense, intelligent, and that many others in the field, especially at AI startups, are ready to extrapolate the technology’s recent progress in ways that he finds ridiculous.
If LeCun’s views are right, it spells trouble for some of today’s hottest startups, not to mention the tech giants pouring tens of billions of dollars into AI. Many of them are banking on the idea that today’s AIs based on large language models, like those from OpenAI, are on the near-term path to creating so-called artificial general intelligence, or AGI, that broadly exceeds human-level intelligence.
OpenAI’s Sam Altman last month said we could have AGI within “a few thousand days.” Elon Musk has said it could happen by 2026.
LeCun says such talk is likely premature. When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat,” he replied on X.
He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today’s “frontier” AIs, including those made by Meta itself.
Léon Bottou, who has known LeCun since 1986, says LeCun is “stubborn in a good way”—that is, willing to listen to others’ views, but single-minded in his pursuit of what he believes is the right approach to building artificial intelligence.
Alexander Rives, a former Ph.D. student of LeCun’s who has since founded an AI startup, says LeCun’s provocations are well thought out. “He has a history of really being able to see gaps in how the field is thinking about a problem, and pointing that out,” Rives says.
. . .
The large language models, or LLMs, used for ChatGPT and other bots might someday have only a small role in systems with common sense and humanlike abilities, built using an array of other techniques and algorithms.
Today’s models are really just predicting the next word in a text, he says. But they’re so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning, when in fact they’re merely regurgitating information they’ve already been trained on.
“We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true,” says LeCun. “You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”
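The next-word prediction LeCun describes can be made concrete with a toy sketch. The following is a minimal, hypothetical illustration (nothing like a real LLM, which uses neural networks over vast corpora): a bigram model that “predicts” the next word simply by counting which word most often followed the current one in its tiny training text. It shows how fluent-seeming continuation can emerge from pure statistical regurgitation.

```python
from collections import Counter, defaultdict

# Toy training text (assumed for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words followed it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent successor seen in training,
    # or None if the word never appeared as a predecessor.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → cat ("cat" followed "the" twice, more than any rival)
```

The model has no notion of cats, mats, or the physical world; it only echoes patterns in its training data, which is the gap LeCun points to between manipulating language and being smart.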
For the full commentary see:
(Note: ellipses added.)
(Note: the online version of the commentary was updated Oct. 11, 2024, and has the title “Keywords: This AI Pioneer Thinks AI Is Dumber Than a Cat.” The sentence starting with “Léon Bottou” appears in the online, but not the print, version. Where there are small differences between the versions, the passages quoted above follow the online version.)