Frank Knight on the Leader of the V-Formation of Ducks

I write this on Thurs., Feb. 19, 2026. Yesterday evening, I was reading a section of Milton and Rose Friedman’s Free to Choose on the Negative Income Tax, as part of revising a paper I have submitted to The Independent Review. As I was reading, I was surprised and numbly elated to serendipitously run across information that I had been seeking, off and on, literally for decades. Every so often I had occasion to tell a story that I was sure originated with Frank Knight. I wrote the script on Frank Knight for an audio series on Great Economists. (The current owners of the series refuse to pay me the royalties that I am owed, but that is another story.) So I thought I knew something about Knight, and I own many books and articles by him. Every once in a while I spent an hour or so looking for the quotation, always failing. I even emailed Ross Emmett, whom many view as the current leading expert on Knight. He said he knew nothing of the quote I sought.

Buddhists who are totally at peace do not carry around with them the annoyance of unanswered questions, so if they run across an answer, it means nothing to them. Maybe this helps explain what Pasteur meant when he said in an 1854 lecture that “chance favors only the prepared mind.” The prepared mind carries around unanswered questions, unresolved contradictions, flaws in the world that could use improving. Then that mind stays alert for answers to the questions, resolutions to the contradictions, fixes for the flaws. The mind that pulls us forward is not a mind at peace.

[As an addendum, my discovery of the quote in Milton and Rose Friedman’s most famous book, after many searches in much more obscure places, reminds me of what Gertrude Himmelfarb said in a lecture at the U. of Chicago when I was a graduate student many decades ago. She searched the dusty archives long and hard, but the material most useful for her book on Harriet Taylor’s influence on Mill’s On Liberty was hiding in plain sight in a volume written by F.A. Hayek on Mill’s correspondence with Taylor.]

Here, after decades of occasional search and constant alertness, is the testimony of Milton and Rose, two former students of Frank Knight, showing that my memory of the Frank Knight duck V-formation story was not a dream or hallucination:

Our great and revered teacher Frank H. Knight was fond of illustrating different forms of leadership with ducks that fly in a V with a leader in front. Every now and then, he would say, the ducks behind the leader would veer off in a different direction while the leader continued flying ahead. When the leader looked around and saw that no one was following, he would rush to get in front of the V again. That is one form of leadership—undoubtedly the most prevalent form in Washington.

The source of the Milton and Rose Friedman quote is:

Friedman, Milton, and Rose D. Friedman. Free to Choose: A Personal Statement. New York: Harcourt Brace Jovanovich, 1980.

The Himmelfarb book mentioned in my initial comments is:

Himmelfarb, Gertrude. On Liberty and Liberalism: The Case of John Stuart Mill. New York: Knopf, 1974.

The Hayek book mentioned in my initial comments is:

Hayek, F.A. John Stuart Mill and Harriet Taylor: Their Friendship and Subsequent Marriage. London: Routledge & Kegan Paul, 1951. [Some citations to the book have the word “Correspondence” substituted for “Friendship.”]


Entrepreneurs Make Leaps: A Critique of the Theory of the Adjacent Possible (TAP)

In my Openness book, I argue that the innovative entrepreneur is a key agent of the innovative dynamism that brings us the new goods and the process innovations through which we flourish. The Theory of the Adjacent Possible, devised by Stuart Kauffman, Roger Koppl, and collaborators, and popularized by Steven Johnson, aims to “deflate” the innovative entrepreneur, and argues that technological progress is an inevitable result of a stochastic process. I have written an extended critique of the TAP, and have posted the latest version to the SSRN working paper archive. In some ways the working paper, especially the last half, can be viewed as further elaboration and illustration of some of the points made in Openness.

The citation for, and link to, my working paper is:

Diamond, Arthur M. “Entrepreneurs Make Leaps: A Critique of the Theory of the Adjacent Possible.” (Written Jan. 26, 2026; Posted Feb. 18, 2026). Available at SSRN: https://ssrn.com/abstract=6166326

My book mentioned in my initial comments is:

Diamond, Arthur M., Jr. Openness to Creative Destruction: Sustaining Innovative Dynamism. New York: Oxford University Press, 2019.


Large Randomized Controlled Trial Finds Little Benefit in Free Money to Poor, Undermining Case for Universal Basic Income (UBI)

A variety of arguments have been made in support of a Universal Basic Income (UBI). I am most interested in the argument that says that technology will destroy the jobs of the worst off, and so for them to survive society would be justified in giving them a basic income. I do not believe that in a free society technological progress will on balance destroy the jobs of the worst off. If innovative entrepreneurs are free to innovate, especially in labor markets, they will find ways to employ the worst off.

Others have argued that giving a basic income to the worst off will make them better parents, measurable by better child outcomes in terms of language skills, behavior, and cognition. Several years ago these advocates set up a big, expensive randomized controlled trial to test their argument. The results? None of their hypotheses were supported. The passages quoted below are from a front-page New York Times article in which they express their surprise, and for some, their incredulity.

(p. A1) If the government wants poor children to thrive, it should give their parents money. That simple idea has propelled an avid movement to send low-income families regular payments with no strings attached.

Significant but indirect evidence has suggested that unconditional cash aid would help children flourish. But now a rigorous experiment, in a more direct test, found that years of monthly payments did nothing to boost children’s well-being, a result that defied researchers’ predictions and could weaken the case for income guarantees.

After four years of payments, children whose parents received $333 a month from the experiment fared no better than similar children without that help, the study found. They were no more likely to develop language skills, avoid behavioral problems or developmental delays, demonstrate executive function or exhibit brain activity associated with cognitive development.

“I was very surprised — we were all very surprised,” said Greg J. Duncan, an economist at the University of California, Irvine and one of six researchers who led the study, called Baby’s First Years. “The money did not (p. A15) make a difference.”

The findings could weaken the case for turning the child tax credit into an income guarantee, as the Democrats did briefly four years ago in a pandemic-era effort to fight child poverty.

. . .

Though an earlier paper showed promising activity on a related neurological measure in the high-cash infants, that trend did not endure. The new study detected “some evidence” of other differences in neurological activity between the two groups of children, but its significance was unclear.

While researchers publicized the earlier, more promising results, the follow-up study was released quietly and has received little attention. Several co-authors declined to comment on the results, saying that it was unclear why the payments had no effect and that the pattern could change as the children age.

For the full story see:

Jason DeParle. “Cash Stipends Did Not Benefit Needy Children.” The New York Times (Weds., July 30, 2025): A1 & A15.

(Note: ellipsis added.)

(Note: the online version of the story has the date July 28, 2025, and has the title “Study May Undercut Idea That Cash Payments to Poor Families Help Child Development.”)

The academic presentation of the research discussed above can be found in:

Noble, Kimberly, Greg Duncan, Katherine Magnuson, Lisa A. Gennetian, Hirokazu Yoshikawa, Nathan A. Fox, Sarah Halpern-Meekin, Sonya Troller-Renfree, Sangdo Han, Shannon Egan-Dailey, Timothy D. Nelson, Jennifer Mize Nelson, Sarah Black, Michael Georgieff, and Debra Karhson. “The Effect of a Monthly Unconditional Cash Transfer on Children’s Development at Four Years of Age: A Randomized Controlled Trial in the U.S.” National Bureau of Economic Research (NBER) Working Paper 33844, May 2025.

AI Cannot Know What People Think “At the Very Edge of Their Experience”

The passages quoted below mention “the advent of generative A.I.” From previous reading, I had the impression that “generative A.I.” meant A.I. that had reached human-level cognition. But when I looked up the meaning of the phrase, I found that it means A.I. that can generate new content. Then I smiled. I was at Wabash College as an undergraduate from 1971 to 1974 (I graduated in three years). Sometime during those years, Wabash acquired its first minicomputer, and I took a course in BASIC computer programming. I distinctly remember programming a template for a brief poem into which, at key locations, I inserted a random word variable. Where the random word variable occurred, the program randomly selected from one of a number of rhyming words. So each time the program was run, a new rhyming poem would be “generated.” That was new content, and sometimes it was even amusing. But it wasn’t any good, and it did not have deep meaning, and if what it generated was true, it was only by accident. So I guess “the advent of generative A.I.” goes back at least to the early 1970s when Art Diamond messed around with a DEC.
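The little program is easy to reconstruct in outline. Below is a minimal sketch, in Python rather than the original BASIC, of the kind of template-plus-random-word generator described; the template and the list of rhyming words are invented here for illustration, not recovered from the original program.

```python
import random

# Words that rhyme; one is drawn at random for each slot in the template.
RHYMES = ["moon", "June", "tune", "soon", "noon"]

# A fixed poem template with numbered slots where the random words go.
TEMPLATE = (
    "The cat sat watching the {0},\n"
    "and hummed a quiet {1},\n"
    "it hoped the mouse would come out {2}."
)

def generate_poem():
    # Draw three distinct rhyming words and fill the template's slots.
    words = random.sample(RHYMES, 3)
    return TEMPLATE.format(*words)

print(generate_poem())
```

Each run yields a different (doggerel) poem, which is exactly the sense in which the output is “generated” new content: novel combinations, but no understanding behind them.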

This is not the main point of the passages quoted below. The main point is that the frontiers of human thought are not on the internet, and so cannot be part of the training of A.I. So whatever A.I. can do, it can’t think at the human “edge.”

(p. B3) Dan Shipper, the founder of the media start-up Every, says he gets asked a lot whether he thinks robots will replace writers. He swears they won’t, at least not at his company.

. . .

Mr. Shipper argues that the advent of generative A.I. is merely the latest step in a centuries-long technological march that has brought writers closer to their own ideas. Along the way, most typesetters and scriveners have been erased. But the part of writing that most requires humans remains intact: a perspective and taste, and A.I. can help form both even though it doesn’t have either on its own, he said.

“One example of a thing that journalists do that language models cannot is come and have this conversation with me,” Mr. Shipper said. “You’re going out and talking to people every day at the very edge of their experience. That’s always changing. And language models just don’t have access to that, because it’s not on the internet.”

For the full story see:

Benjamin Mullin. “Will Writing Survive A.I.? A Start-Up Is Betting on It.” The New York Times (Mon., May 26, 2025): B3.

(Note: ellipsis added.)

(Note: the online version of the story has the date May 21, 2025, and has the title “Will Writing Survive A.I.? This Media Company Is Betting on It.”)

If AI Takes Some Jobs, New Human Jobs Will Be Created

In the passage quoted below, Atkinson makes a sound general case for optimism on the effects of AI on the labor market. I would add to that case that many are currently overestimating the potential cognitive effectiveness of AI. Humans have a vast reservoir of unarticulated common-sense knowledge that is not accessible to AI training. In addition, AI cannot innovate at the frontiers of knowledge, which have not yet been posted to the internet.

(p. A15) AI doomsayers frequently succumb to what economists call the “lump of labor” fallacy: the idea that there is a limited amount of work to be done, and if a job is eliminated, it’s gone for good. This fails to account for second-order effects, whereby the saving from increased productivity is recycled back into the economy in the form of higher wages, higher profits and reduced prices. This creates new demand that in turn creates new jobs. Some of these are entirely new occupations, such as “content creator assistant,” but others are existing jobs that are in higher demand now that people have more money to spend—for example, personal trainers.

Suppose an insurance firm uses AI to handle many of the customer-service functions that humans used to perform. Assume the technology allows the firm to do the same amount of work with 50% less labor. Some workers would lose their jobs, but lower labor costs would decrease insurance premiums. Customers would then be able to spend less money on insurance and more on other things, such as vacations, restaurants or gym memberships.

In other words, the savings don’t get stuffed under a mattress; they get spent, thereby creating more jobs.

For the full commentary, see:

Robert D. Atkinson. “No, AI Robots Won’t Take All Our Jobs.” The Wall Street Journal (Fri., June 6, 2025): A15.

(Note: the online version of the commentary has the date June 5, 2025, and has the same title as the print version.)

We Need to “Tolerate Heterodox Smart People” if We Want to Achieve Big Things

Peter Thiel is often quoted as having said many years ago that “We wanted flying cars, instead we got 140 characters” (as quoted in Lewis-Kraus 2024), a reference to the original limit to the length of a tweet on Twitter. The quotations below are all from the more recent Peter Thiel, who was having a conversation with NYT columnist Ross Douthat. He still believes that we are not boldly pursuing big goals, the only exception being A.I. Is the constraint that big goals are impossible to achieve, or do we lack people smart enough or motivated enough to pursue them, or do we regulate motivated smart people into discouraged despair?

(p. 9) One question we can frame is: Just how big a thing do I think A.I. is? And my stupid answer is: It’s more than a nothing burger, and it’s less than the total transformation of our society. My place holder is that it’s roughly on the scale of the internet in the late ’90s. I’m not sure it’s enough to really end the stagnation. It might be enough to create some great companies. And the internet added maybe a few percentage points to the G.D.P., maybe 1 percent to G.D.P. growth every year for 10, 15 years. It added some to productivity. So that’s roughly my place holder for A.I.

It’s the only thing we have. It’s a little bit unhealthy that it’s so unbalanced. This is the only thing we have. I’d like to have more multidimensional progress. I’d like us to be going to Mars. I’d like us to be having cures for dementia. If all we have is A.I., I will take it.

. . .

And so maybe the problems are unsolvable, which is the pessimistic view. Maybe there is no cure for dementia at all, and it’s a deeply unsolvable problem. There’s no cure for mortality. Maybe it’s an unsolvable problem.

Or maybe it’s these cultural things. So it’s not the individually smart person, but it’s how this fits into our society. Do we tolerate heterodox smart people? Maybe you need heterodox smart people to do crazy experiments.

. . .

I had a conversation with Elon a few weeks ago about this. He said we’re going to have a billion humanoid robots in the U.S. in 10 years. And I said: Well, if that’s true, you don’t need to worry about the budget deficits because we’re going to have so much growth, the growth will take care of this. And then — well, he’s still worried about the budget deficits. This doesn’t prove that he doesn’t believe in the billion robots, but it suggests that maybe he hasn’t thought it through or that he doesn’t think it’s going to be as transformative economically, or that there are big error bars around it. But yeah, there’s some way in which these things are not quite thought through.

For the full interview, see:

Douthat, Ross. “Are We Dreaming Big Enough?” The New York Times, Sunday Opinion Section (Sunday, June 29, 2025): 9.

(Note: ellipses added.)

(Note: the online version of the interview has the date June 26, 2025, and has the title “Peter Thiel and the Antichrist.”)

Peter Thiel’s yearning many years ago for flying cars was quoted more recently in:

Lewis-Kraus, Gideon. “Flight of Fancy.” The New Yorker, April 22, 2024, 28-39.

Lucian L. Leape Was Willing to Take the Ill-Will

In an earlier entry I presented Charlie Munger’s story where a hospital administrator had to be willing to absorb the ill-will if he was to take the actions necessary to fix a badly malfunctioning department of the hospital. Another person willing to absorb the ill-will in order to reform medicine was Lucian L. Leape, whose story is sketched in the passages quoted below.

(p. B21) Lucian L. Leape, a surgeon whose insights into medical mistakes in the 1990s gave rise to the field of patient safety, rankling much of the health care establishment in the process, died on Monday at his home in Lexington, Mass. He was 94.

. . .

In 1986, at age 56, Dr. Leape grew interested in health policy and spent a year at the RAND Corporation on a midcareer fellowship studying epidemiology, statistics and health policy.

Following his stint at RAND, he joined the team at Harvard conducting the Medical Practice Study. When Dr. Howard Hiatt, then the dean of the Harvard School of Public Health (now the Harvard T.H. Chan School of Public Health), offered Dr. Leape the opportunity to work on the study, “I accepted,” Dr. Leape wrote in his 2021 book, “Making Healthcare Safe: The Story of the Patient Safety Movement,” “not suspecting it would change my life.”

The most significant finding, Dr. Leape said in the 2015 interview, was that two-thirds of the injuries to patients were caused by errors that appeared to be preventable. “The implications were profound,” he said.

In 1994, Dr. Leape submitted a paper to The New England Journal of Medicine, laying out the extent to which preventable medical injury occurred and arguing for a shift of focus away from individuals and toward systems. But the paper was rejected. “I was told it didn’t meet their standards,” he recalled.

Dr. Leape sent the paper out again, this time to The Journal of the American Medical Association. Dr. George Lundberg, then the editor of JAMA, immediately recognized the importance of the topic, Dr. Leape said. “But he also knew it could offend many doctors. We didn’t talk about mistakes.”

Dr. Donald M. Berwick, president emeritus at the Institute for Healthcare Improvement in Boston and a longtime colleague of Dr. Leape’s, agreed. “To talk about error in medicine back then was considered rude,” he said in an interview in 2020. “Errors were what we call normalized. Bad things happen, and that’s just the way it is.”

“But then you had Lucian,” he added, “this quite different voice in the room saying, ‘No, this isn’t normal. And we can do something about it.’”

Dr. Leape’s paper, “Error in Medicine,” was the first major article on the topic in the general medical literature. The timing of publication, just before Christmas in 1994, Dr. Leape wrote in his 2021 book, was intentional. Dr. Lundberg knew it would receive little attention and therefore wouldn’t upset colleagues.

On Dec. 3, 1994, however, three weeks before the JAMA piece appeared, Betsy Lehman, a 39-year-old health care reporter for The Boston Globe, died after mistakenly receiving a fatal overdose of chemotherapy at the Dana-Farber Cancer Institute in Boston.

“Betsy’s death was a watershed event,” Dr. Leape said in a 2020 interview for a short documentary about Ms. Lehman.

The case drew national attention. An investigation into the death revealed that it wasn’t caused by one individual clinician, but by a series of errors involving multiple physicians and nurses who had misinterpreted a four-day regimen as a single dose, administering quadruple the prescribed amount.

The case made Dr. Leape’s point with tragic clarity: Ms. Lehman’s death, like so many others, resulted from a system that lacked sufficient safeguards to prevent the error.

. . .

Dr. Gawande said he believed it was the confidence Dr. Leape had acquired as a surgeon that girded him in the face of strong resistance from medical colleagues.

“He had enough arrogance to believe in himself and in what he was saying,” Dr. Gawande said. “He knew he was onto something important, and that he could bring the profession along, partly by goading the profession as much as anything.”

For the full obituary, see:

Katie Hafner. “Lucian L. Leape, 94, Who Put Patient Safety at Forefront, Is Dead.” The New York Times (Thursday, July 3, 2025): B21.

(Note: ellipses added.)

(Note: the online version of the obituary has the date July 1, 2025, and has the title “Lucian Leape, Whose Work Spurred Patient Safety in Medicine, Dies at 94.”)

Dr. Leape’s history of his efforts to increase healthcare safety can be found in:

Leape, Lucian L. Making Healthcare Safe: The Story of the Patient Safety Movement. Cham, Switzerland: Springer, 2021.

A.I. Only “Knows” What Has Been Published or Posted

A.I. “learns” by scouring language that has been published or posted. If outdated or never-true “facts” are posted on the web, A.I. may regurgitate them. It takes human eyes to check whether there really is a picnic table in a park.

(p. B1) Last week, I asked Google to help me plan my daughter’s birthday party by finding a park in Oakland, Calif., with picnic tables. The site generated a list of parks nearby, so I went to scout two of them out — only to find there were, in fact, no tables.

“I was just there,” I typed to Google. “I didn’t see wooden tables.”

Google acknowledged the mistake and produced another list, which again included one of the parks with no tables.

I repeated this experiment by asking Google to find an affordable carwash nearby. Google listed a service for $25, but when I arrived, a carwash cost $65.

I also asked Google to find a grocery store where I could buy an exotic pepper paste. Its list included a nearby Whole Foods, which didn’t carry the item.

For the full commentary see:

Brian X. Chen. “Underneath a New Way to Search, A Web of Wins and Imperfections.” The New York Times (Tues., June 3, 2025): B1 & B4.

(Note: the online version of the commentary has the date May 29, 2024, and has the title “Google Introduced a New Way to Use Search. Proceed With Caution.”)

How Did Ed Smylie and His Team Create the Kludge That Saved the Crew of Apollo 13?

Gary Klein in Seeing What Others Don’t analyzed cases of innovation, and sought their sources. One source he came up with was necessity. His compelling example was the firefighter Wag Dodge who, with maybe 60 seconds until he would be engulfed in flame, lit a match to the grass around him, and then lay down in the still-hot embers. The roaring fire bypassed the patch he pre-burned, and his life was saved. The story is well-told in Norman Maclean’s Young Men and Fire.

Pondering more cases of necessity might be useful to help us understand, and encourage, future innovation. One candidate might be the kludge that Ed Smylie and his engineers put together to save the Apollo 13 crew from suffocating after an explosion blew up their command module oxygen tank.

Necessity may be part of it, but cannot be the whole story. Humanity needed to fly for thousands of years, but it took Wilbur Wright to make it happen. (This point is made in Kevin Ashton’s fine and fun How to Fly a Horse.)

I have ordered the book co-authored by Lovell, and mentioned in a passage quoted below, in case it contains insight on how the Apollo 13 kludge was devised.

(p. B11) Ed Smylie, the NASA official who led a team of engineers that cobbled together an apparatus made of cardboard, plastic bags and duct tape that saved the Apollo 13 crew in 1970 after an explosion crippled the spacecraft as it sped toward the moon, died on April 21 [2025] in Crossville, Tenn. He was 95.

. . .

Soft-spoken, with an accent that revealed his Mississippi upbringing, Mr. Smylie was relaxing at home in Houston on the evening of April 13 when Mr. Lovell radioed mission control with his famous (and frequently misquoted) line: “Uh, Houston, we’ve had a problem.”

An oxygen tank had exploded, crippling the spacecraft’s command module.

Mr. Smylie, . . ., saw the news on television and called the crew systems office, according to the 1994 book “Lost Moon,” by Mr. Lovell and the journalist Jeffrey Kluger. The desk operator said the astronauts were retreating to the lunar excursion module, which was supposed to shuttle two crew members to the moon.

“I’m coming in,” Mr. Smylie said.

Mr. Smylie knew there was a problem with this plan: The lunar module was equipped to safely handle air flow for only two astronauts. Three humans would generate lethal levels of carbon dioxide.

To survive, the astronauts would somehow need to refresh the canisters of lithium hydroxide that would absorb the poisonous gases in the lunar excursion module. There were extra canisters in the command module, but they were square; the lunar module ones were round.

“You can’t put a square peg in a round hole, and that’s what we had,” Mr. Smylie said in the documentary “XIII” (2021).

He and about 60 other engineers had less than two days to invent a solution using materials already onboard the spacecraft.

. . .

In reality, the engineers printed a supply list of the equipment that was onboard. Their ingenious solution: an adapter made of two lithium hydroxide canisters from the command module, plastic bags used for garments, cardboard from the cover of the flight plan, a spacesuit hose and a roll of gray duct tape.

“If you’re a Southern boy, if it moves and it’s not supposed to, you use duct tape,” Mr. Smylie said in the documentary. “That’s where we were. We had duct tape, and we had to tape it in a way that we could hook the environmental control system hose to the command module canister.”

Mission control commanders provided step-by-step instructions to the astronauts for locating materials and building the adapter.

. . .

The adapter worked. The astronauts were able to breathe safely in the lunar module for two days as they awaited the appropriate trajectory to fly the hobbled command module home.

. . .

Mr. Smylie always played down his ingenuity and his role in saving the Apollo 13 crew.

“It was pretty straightforward, even though we got a lot of publicity for it and Nixon even mentioned our names,” he said in the oral history. “I said a mechanical engineering sophomore in college could have come up with it.”

For the full obituary, see:

Michael S. Rosenwald. “Ed Smylie Dies at 95; His Team of Engineers Saved Apollo 13 Crew.” The New York Times (Tuesday, May 20, 2025): B11.

(Note: ellipses, and bracketed year, added.)

(Note: the online version of the obituary was updated May 18, 2025, and has the title “Ed Smylie, Who Saved the Apollo 13 Crew With Duct Tape, Dies at 95.”)

Klein’s book that I praise in my introductory comments is:

Klein, Gary A. Seeing What Others Don’t: The Remarkable Ways We Gain Insights. Philadelphia, PA: PublicAffairs, 2013.

Maclean’s book that I praise in my introductory comments is:

Maclean, Norman. Young Men and Fire. New ed. Chicago: University of Chicago Press, 2017.

Ashton’s book that I praise in my introductory comments is:

Ashton, Kevin. How to Fly a Horse: The Secret History of Creation, Invention, and Discovery. New York: Doubleday, 2015.

The book co-authored by Lovell and mentioned above is:

Lovell, Jim, and Jeffrey Kluger. Lost Moon: The Perilous Voyage of Apollo 13. Boston, MA: Houghton Mifflin, 1994.

The Newest A.I. “Reasoning Models Actually Hallucinate More Than Their Predecessors”

I attended an I.H.S. Symposium last week where one of my minor discoveries was that a wide range of intellectuals, regardless of location on the political spectrum, share a concern about the allegedly damaging labor-market effects of A.I. As in much else, I am an outlier: I am not concerned about A.I.

But since so many are concerned, and believe A.I. undermines my case for a better labor market under innovative dynamism, I will continue to occasionally highlight articles that present the evidence and arguments that reassure me.

(p. B1) “Humanity is close to building digital superintelligence,” Altman declared in an essay this week, and this will lead to “whole classes of jobs going away” as well as “a new social contract.” Both will be consequences of AI-powered chatbots taking over all our white-collar jobs, while AI-powered robots assume the physical ones.

Before you get nervous about all the times you were rude to Alexa, know this: A growing cohort of researchers who build, study and use modern AI aren’t buying all that talk.

The title of a fresh paper from Apple says it all: “The Illusion of Thinking.” In it, a half-dozen top researchers probed reasoning models—large language models that “think” about problems longer, across many steps—from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these are capable of reasoning anywhere close to the level their makers claim.

. . .

(p. B4) Apple’s researchers found “fundamental limitations” in the models. When taking on tasks beyond a certain level of complexity, these AIs suffered “complete accuracy collapse.” Similarly, engineers at Salesforce AI Research concluded that their results “underscore a significant gap between current LLM capabilities and real-world enterprise demands.”

Importantly, the problems these state-of-the-art AIs couldn’t handle are logic puzzles that even a precocious child could solve, with a little instruction. What’s more, when you give these AIs that same kind of instruction, they can’t follow it.

. . .

Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016, argued in an essay that Apple’s paper, along with related work, exposes flaws in today’s reasoning models, suggesting they’re not the dawn of human-level ability but rather a dead end. “Part of the reason the Apple study landed so strongly is that Apple did it,” he says. “And I think they did it at a moment in time when people have finally started to understand this for themselves.”

In areas other than coding and mathematics, the latest models aren’t getting better at the rate that they once did. And the newest reasoning models actually hallucinate more than their predecessors.

For the full commentary see:

Christopher Mims. “Keywords: Apple Calls Today’s AI ‘The Illusion of Thinking’.” The Wall Street Journal (Sat., June 14, 2025): B1 & B4.

(Note: ellipses added.)

(Note: the online version of the commentary has the date June 13, 2025, and has the title “Keywords: Why Superintelligent AI Isn’t Taking Over Anytime Soon.” In the original print and online versions, the word “more” appears in italics for emphasis.)

Sam Altman’s blog essay mentioned above is:

Altman, Sam. “The Gentle Singularity.” Sam Altman’s blog, June 10, 2025, URL: https://blog.samaltman.com/the-gentle-singularity.

The Apple research article briefly summarized in a passage quoted above is:

Shojaee, Parshin, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar. “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models Via the Lens of Problem Complexity.” Apple Machine Learning Research, June 2025, URL: https://machinelearning.apple.com/research/illusion-of-thinking.

Trump Is a Change Agent Because He Can Take the Ill Will of the Stagnationists

In an earlier blog entry I pondered Charlie Munger’s sage analysis that agents of change must often be willing to take the “ill will” aimed at them from the stagnationists who benefit from stasis. The stagnationists may be corrupt, or incompetent, or simply lack the imagination or the energy to do something in a better way.

Agents of change are scarce because most of us care a lot about what other people think of us. We experience psychic stress if we are systematically stigmatized, or even just ignored. Donald Trump may be capable of making major changes because, through temperament or resolve, he has found a way to shut out the psychic stress; a way to take the ill will.

Kessler’s commentary, quoted below, was published early in the pandemic, on Feb. 10, 2020. At that point Kessler believed that Trump’s success with the economy would get Trump re-elected. But as the months of 2020 rolled on, the pandemic increasingly hurt Trump’s prospects. Hence it still matters a lot where the pandemic originated, and whether the vaccine was intentionally delayed a few weeks so that it would be released just after the election.

A key question is whether Trump still has the core “agenda of tax cuts, deregulation and originalist judges” that Kessler believed was the Trump core agenda in 2016.

I hope yes, but fear no. In 2025 are tariffs and industrial policy part of the “distraction” (aka “MacGuffin”) Kessler posits, or are they part of Trump’s core agenda?

(p. A15) Is he a disease or a cure? Like him or hate him, there’s tons of spilled ink trying to assess President Trump’s governing style. To me, the key to understanding Trumpism is remembering why he was elected.

What do I mean? Voters chose Donald Trump as an antidote to the growing inflammation caused by the (OK, deep breath . . .) prosperity-crushing, speech-inhibiting, nanny state-building, carbon-obsessing, patriarchy-bashing, implicit bias-accusing, tokey-wokey, globalist, swamp-creature governing class—all perfectly embodied by the Democrats’ 2016 nominee. On taking office, Mr. Trump proceeded to hire smart people and create a massive diversion (tweets, border walls, tariffs) as a smokescreen to let them implement an agenda of tax cuts, deregulation and originalist judges.

Those reforms have left the market free to do its magic and got the economy grooving like it’s 1999. The daily Trump hurricane—like the commotion over the Chiefs from Kansas—makes the media focus on the all-powerful wizard while ignoring the policy makers behind the curtain.

Alfred Hitchcock called this kind of distraction a “MacGuffin”—something that moves the plot along and provides motivation for the characters, but is itself unimportant, insignificant or irrelevant. It can be a kind of sleight of hand, a distraction, and Mr. Trump uses his own public persona as a MacGuffin in precisely that way. The mobs decked in “Resist” jewelry fall for it every time.

For example, Sen. Bernie Sanders used his remarks during the Senate impeachment trial to point out that the media had documented some 16,200 alleged lies by President Trump. The MacGuffin worked! Mr. Sanders and his peers are focused on the president’s words, while most voters see the real plot unfolding in America—millions of jobs and rising wages.

The president’s success comes from his ability to shrug off critics.  . . .  Rather than cower at the criticism he faces from the mobs, he probably smirks and thinks to himself, “Yeah, I don’t believe in that” and tweets away.

That’s the only reaction that can withstand today’s far left, which has become increasingly self-righteous.

For the full commentary see:

Andy Kessler. “President Donald J. MacGuffin.” The Wall Street Journal (Monday, February 10, 2020 [sic]): A17.

(Note: ellipses added.)

(Note: the online version of the commentary has the date February 9, 2020 [sic], and has the same title as the print version.)