I attended an I.H.S. Symposium last week where one minor discovery was that a wide range of intellectuals, regardless of where they sit on the political spectrum, share a concern about the allegedly damaging labor market effects of A.I. As in much else, I am an outlier: I am not concerned about A.I.
But since so many are concerned, and believe that A.I. undermines my case for a better labor market under innovative dynamism, I will continue to occasionally highlight articles that present the evidence and arguments that reassure me.
(p. B1) “Humanity is close to building digital superintelligence,” Altman declared in an essay this week, and this will lead to “whole classes of jobs going away” as well as “a new social contract.” Both will be consequences of AI-powered chatbots taking over all our white-collar jobs, while AI-powered robots assume the physical ones.
Before you get nervous about all the times you were rude to Alexa, know this: A growing cohort of researchers who build, study and use modern AI aren’t buying all that talk.
The title of a fresh paper from Apple says it all: “The Illusion of Thinking.” In it, a half-dozen top researchers probed reasoning models—large language models that “think” about problems longer, across many steps—from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these are capable of reasoning anywhere close to the level their makers claim.
. . .
(p. B4) Apple’s researchers found “fundamental limitations” in the models. When taking on tasks beyond a certain level of complexity, these AIs suffered “complete accuracy collapse.” Similarly, engineers at Salesforce AI Research concluded that their results “underscore a significant gap between current LLM capabilities and real-world enterprise demands.”
Importantly, the problems these state-of-the-art AIs couldn’t handle are logic puzzles that even a precocious child could solve, with a little instruction. What’s more, when you give these AIs that same kind of instruction, they can’t follow it.
. . .
Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016, argued in an essay that Apple’s paper, along with related work, exposes flaws in today’s reasoning models, suggesting they’re not the dawn of human-level ability but rather a dead end. “Part of the reason the Apple study landed so strongly is that Apple did it,” he says. “And I think they did it at a moment in time when people have finally started to understand this for themselves.”
In areas other than coding and mathematics, the latest models aren’t getting better at the rate that they once did. And the newest reasoning models actually hallucinate more than their predecessors.
For the full commentary see:
(Note: ellipses added.)
(Note: the online version of the commentary has the date June 13, 2025, and has the title “Keywords: Why Superintelligent AI Isn’t Taking Over Anytime Soon.” In the original print and online versions, the word “more” appears in italics for emphasis.)
Sam Altman’s blog essay mentioned above is:
The Apple research article briefly summarized in a passage quoted above is: