A.I. can be a useful tool for searching and summarizing the current state of consensus knowledge. But I am highly dubious that it will ever be able to make the breakthrough leaps that some humans are sometimes able to make. And I am somewhat dubious that it will ever be able to make the resilient pivots that all of us must sometimes make in the face of new and unexpected challenges.
(p. B2) In a series of recent essays, [Melanie] Mitchell argued that a growing body of work shows it seems possible that models develop gigantic “bags of heuristics,” rather than creating more efficient mental models of situations and then reasoning through the tasks at hand. (“Heuristic” is a fancy word for a problem-solving shortcut.)
When Keyon Vafa, an AI researcher at Harvard University, first heard the “bag of heuristics” theory, “I feel like it unlocked something for me,” he says. “This is exactly the thing that we’re trying to describe.”
Vafa’s own research was an effort to see what kind of mental map an AI builds when it’s trained on millions of turn-by-turn directions of the kind you would see on Google Maps. Vafa and his colleagues used as source material Manhattan’s dense network of streets and avenues.
The map the researchers reconstructed from the model’s directions did not look anything like a street map of Manhattan. Close inspection revealed the AI had inferred all kinds of impossible maneuvers—routes that leapt over Central Park, or traveled diagonally for many blocks. Yet the model managed to give usable turn-by-turn directions between any two points in the borough with 99% accuracy.
Even though its topsy-turvy map would drive any motorist mad, the model had essentially learned separate rules for navigating in a multitude of situations, from every possible starting point, Vafa says.
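What that looks like in practice can be sketched in a few lines of code. The following is a minimal illustration, not Vafa’s actual experiment: a toy navigator on a small grid stands in for the trained model, and the grid stands in for Manhattan. The idea is the one the study suggests: record every move the navigator makes, treat those moves as the map it believes in, and compare that implied map against the true one.

# A minimal sketch in Python of the map-reconstruction idea. The 5x5 grid
# and toy_model are illustrative assumptions, not the study's setup: the
# toy navigator reaches every goal, yet some of its moves are diagonal
# "streets" that do not exist on the true grid.

import itertools

N = 5  # a 5x5 grid of intersections stands in for Manhattan
nodes = list(itertools.product(range(N), range(N)))

# The true map: only orthogonally adjacent intersections are connected.
true_edges = {
    (a, b)
    for a in nodes
    for b in nodes
    if abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
}

def toy_model(current, goal):
    """Stand-in for a trained sequence model: always steps toward the goal,
    cutting diagonally when it can, an 'impossible maneuver' on a grid."""
    dx = (goal[0] > current[0]) - (goal[0] < current[0])
    dy = (goal[1] > current[1]) - (goal[1] < current[1])
    return (current[0] + dx, current[1] + dy)

# Reconstruct the map the navigator implies by recording every move it makes.
implied_edges = set()
for start, goal in itertools.product(nodes, repeat=2):
    pos = start
    while pos != goal:  # each step strictly shrinks the distance, so this ends
        nxt = toy_model(pos, goal)
        implied_edges.add((pos, nxt))
        pos = nxt

impossible = implied_edges - true_edges
print(f"implied edges: {len(implied_edges)}, impossible: {len(impossible)}")

In the study the directions came from a model trained on millions of routes rather than from a hand-written rule, but the diagnostic is the same: a navigator can be nearly perfect end to end while the map its moves imply is full of streets that do not exist.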
The vast “brains” of AIs, paired with unprecedented processing power, allow them to learn how to solve problems in a messy way that would be impossible for a person.
. . .
. . ., today’s AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions for answering our prompts. Understanding that these systems are long lists of cobbled-together rules of thumb could go a long way to explaining why they struggle when they’re asked to do things even a little bit outside their training, says Vafa. When his team blocked just 1% of the virtual Manhattan’s roads, forcing the AI to navigate around detours, its performance plummeted.
This illustrates a big difference between today’s AIs and people, he adds. A person might not be able to recite turn-by-turn directions around New York City with 99% accuracy, but they’d be mentally flexible enough to avoid a bit of roadwork.
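The detour finding can be sketched the same way. In the toy below (again an illustration under assumed names and sizes, not the study’s code), a navigator is “trained” by memorizing a next-hop table for every destination on the intact grid, a literal bag of route heuristics. Blocking roughly 1% of the streets then breaks any memorized route that runs through a blocked street, while a planner that consults the updated map, the mentally flexible behavior described above, simply routes around the roadwork.

# A minimal sketch: memorized next-hop heuristics vs. replanning on the map.
import random
from collections import deque

random.seed(0)

N = 10  # a 10x10 grid of intersections
nodes = [(x, y) for x in range(N) for y in range(N)]

def build_adj(blocked=frozenset()):
    """Adjacency lists for the grid, minus any blocked (undirected) streets."""
    adj = {v: [] for v in nodes}
    for (x, y) in nodes:
        for nb in ((x + 1, y), (x, y + 1)):
            if nb in adj and frozenset([(x, y), nb]) not in blocked:
                adj[(x, y)].append(nb)
                adj[nb].append((x, y))
    return adj

def next_hop_table(adj, goal):
    """From each node, the first step of a shortest path to goal (via BFS)."""
    hop, seen, queue = {}, {goal}, deque([goal])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                hop[u] = v  # from u, stepping to v heads toward goal
                queue.append(u)
    return hop

def route_ok(nav, start, goal, adj):
    """A route counts only if every step uses a real street and reaches goal."""
    pos = start
    for _ in range(4 * N * N):
        if pos == goal:
            return True
        step = nav(pos, goal)
        if step is None or step not in adj[pos]:
            return False  # the memorized move no longer exists on the map
        pos = step
    return False

intact = build_adj()
memorized = {g: next_hop_table(intact, g) for g in nodes}  # "training"

# Block roughly 1% of the streets, mimicking the detour experiment.
streets = {frozenset((v, u)) for v in intact for u in intact[v]}
blocked = set(random.sample(list(streets), max(1, len(streets) // 100)))
detoured = build_adj(blocked=blocked)
replanned = {g: next_hop_table(detoured, g) for g in nodes}

pairs = [(s, g) for s in nodes for g in nodes if s != g]
for name, tables in [("memorized heuristics", memorized),
                     ("replanning on true map", replanned)]:
    nav = lambda pos, goal, t=tables: t[goal].get(pos)  # bind this table
    good = sum(route_ok(nav, s, g, detoured) for s, g in pairs)
    print(f"{name}: {good / len(pairs):.1%} of routes survive the detours")

The toy will not reproduce the magnitude of the reported drop, only the mechanism: memorized rules of thumb fail exactly where the world stops matching the training data, while a genuine map supports rerouting.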
For the full commentary see:
(Note: ellipses added.)
(Note: the online version of the commentary has the date April 25, 2025, and has the title “We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All.”)
A conference draft of the paper that Vafa co-authored on A.I.’s mental map of Manhattan is:
Vafa, Keyon, Justin Y. Chen, Jon Kleinberg, Sendhil Mullainathan, and Ashesh Rambachan. “Evaluating the World Model Implicit in a Generative Model.” In Advances in Neural Information Processing Systems 37 (NeurIPS 2024).