A.I. Cannot Know What People Think “At the Very Edge of Their Experience”

The passages quoted below mention “the advent of generative A.I.” From previous reading, I had the impression that “generative A.I.” meant A.I. that had reached human-level cognition. But when I looked up the meaning of the phrase, I found that it means A.I. that can generate new content. Then I smiled. I was at Wabash College as an undergraduate from 1971 to 1974 (I graduated in three years). Sometime during those years, Wabash acquired its first minicomputer, and I took a course in BASIC computer programming. I distinctly remember programming a template for a brief poem in which, at key locations, I inserted a random word variable. Wherever the random word variable occurred, the program randomly selected one of a number of rhyming words. So each time the program was run, a new rhyming poem would be “generated.” That was new content, and sometimes it was even amusing. But it wasn’t any good, it did not have deep meaning, and if what it generated was true, it was true only by accident. So I guess “the advent of generative A.I.” goes back at least to the early 1970s, when Art Diamond messed around with a DEC.
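
Here is a rough sketch, in modern Python rather than 1970s BASIC, of how such a program works. The template and rhyme lists below are made-up stand-ins, not the ones from the Wabash program:

```python
import random

# Made-up stand-ins for the template and the rhyme lists; the real
# Wabash program had its own (and ran on a DEC minicomputer in BASIC).
RHYME_FAMILIES = [
    ["night", "light", "bright", "sight", "flight"],
    ["rain", "plain", "train", "again", "remain"],
]

TEMPLATE = (
    "I walked the campus late at {0},\n"
    "And wondered at the fading {1},\n"
    "Until at last there came in {2}\n"
    "A poem made of chance and {3}."
)

def generate_poem() -> str:
    # Pick one rhyme family, then fill each slot with a distinct word
    # from it, so the line endings rhyme with one another.
    family = random.choice(RHYME_FAMILIES)
    words = random.sample(family, 4)
    return TEMPLATE.format(*words)

if __name__ == "__main__":
    # Each run "generates" new content; as with the original,
    # most of it is not any good.
    print(generate_poem())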

This is not the main point of the passages quoted below. The main point is that the frontiers of human thought are not on the internet, and so cannot be part of the training of A.I. So whatever A.I. can do, it can’t think at the human “edge.”

(p. B3) Dan Shipper, the founder of the media start-up Every, says he gets asked a lot whether he thinks robots will replace writers. He swears they won’t, at least not at his company.

. . .

Mr. Shipper argues that the advent of generative A.I. is merely the latest step in a centuries-long technological march that has brought writers closer to their own ideas. Along the way, most typesetters and scriveners have been erased. But the part of writing that most requires humans remains intact: a perspective and taste, and A.I. can help form both even though it doesn’t have either on its own, he said.

“One example of a thing that journalists do that language models cannot is come and have this conversation with me,” Mr. Shipper said. “You’re going out and talking to people every day at the very edge of their experience. That’s always changing. And language models just don’t have access to that, because it’s not on the internet.”

For the full story, see:

Benjamin Mullin. “Will Writing Survive A.I.? A Start-Up Is Betting on It.” The New York Times (Mon., May 26, 2025): B3.

(Note: ellipsis added.)

(Note: the online version of the story has the date May 21, 2025, and has the title “Will Writing Survive A.I.? This Media Company Is Betting on It.”)

If AI Takes Some Jobs, New Human Jobs Will Be Created

In the passage quoted below, Atkinson makes a sound general case for optimism on the effects of AI on the labor market. I would add to that case that many are currently overestimating the potential cognitive effectiveness of AI. Humans have a vast reservoir of unarticulated common-sense knowledge that is not accessible to AI training. In addition, AI cannot innovate at the frontiers of knowledge, which have not yet been posted to the internet.

(p. A15) AI doomsayers frequently succumb to what economists call the “lump of labor” fallacy: the idea that there is a limited amount of work to be done, and if a job is eliminated, it’s gone for good. This fails to account for second-order effects, whereby the saving from increased productivity is recycled back into the economy in the form of higher wages, higher profits and reduced prices. This creates new demand that in turn creates new jobs. Some of these are entirely new occupations, such as “content creator assistant,” but others are existing jobs that are in higher demand now that people have more money to spend—for example, personal trainers.

Suppose an insurance firm uses AI to handle many of the customer-service functions that humans used to perform. Assume the technology allows the firm to do the same amount of work with 50% less labor. Some workers would lose their jobs, but lower labor costs would decrease insurance premiums. Customers would then be able to spend less money on insurance and more on other things, such as vacations, restaurants or gym memberships.

In other words, the savings don’t get stuffed under a mattress; they get spent, thereby creating more jobs.

For the full commentary, see:

Robert D. Atkinson. “No, AI Robots Won’t Take All Our Jobs.” The Wall Street Journal (Fri., June 6, 2025): A15.

(Note: the online version of the commentary has the date June 5, 2025, and has the same title as the print version.)
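
To make the second-order effects concrete, here is a minimal numeric sketch of Atkinson’s insurance example, using hypothetical figures of my own; only the 50% labor saving comes from the commentary quoted above:

```python
# Hypothetical figures to illustrate the second-order effects in
# Atkinson's insurance example; only the 50% labor saving is from
# the commentary quoted above.
premium = 1000.0        # yearly premium per customer, in dollars
labor_share = 0.30      # share of the premium that pays for customer service
labor_cut = 0.50        # AI lets the firm do the same work with 50% less labor

# First-order effect: some customer-service jobs are eliminated, and in a
# competitive market the labor saving shows up as a lower premium.
saving_per_customer = premium * labor_share * labor_cut
new_premium = premium - saving_per_customer

# Second-order effect: the saving is not stuffed under a mattress; it is
# spent on other things (vacations, restaurants, gyms), creating demand,
# and thus jobs, in those sectors.
customers = 1_000_000
freed_spending = saving_per_customer * customers

print(f"Old premium: ${premium:,.0f}, new premium: ${new_premium:,.0f}")
print(f"Spending freed for other goods and services: ${freed_spending:,.0f} per year")
```

The jobs lost in the call center are visible; the jobs supported by the redirected spending are dispersed and easy to overlook, which is what makes the lump-of-labor fallacy so tempting.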

We Need to “Tolerate Heterodox Smart People” if We Want to Achieve Big Things

Peter Thiel is often quoted as having said many years ago that “We wanted flying cars, instead we got 140 characters” (as quoted in Lewis-Kraus 2024), a reference to the original limit to the length of a tweet on Twitter. The quotations below are all from the more recent Peter Thiel, who was having a conversation with NYT columnist Ross Douthat. He still believes that we are not boldly pursuing big goals, the only exception being A.I. Is the constraint that big goals are impossible to achieve, or do we lack people smart enough or motivated enough to pursue them, or do we regulate motivated smart people into discouraged despair?

(p. 9) One question we can frame is: Just how big a thing do I think A.I. is? And my stupid answer is: It’s more than a nothing burger, and it’s less than the total transformation of our society. My place holder is that it’s roughly on the scale of the internet in the late ’90s. I’m not sure it’s enough to really end the stagnation. It might be enough to create some great companies. And the internet added maybe a few percentage points to the G.D.P., maybe 1 percent to G.D.P. growth every year for 10, 15 years. It added some to productivity. So that’s roughly my place holder for A.I.

It’s the only thing we have. It’s a little bit unhealthy that it’s so unbalanced. This is the only thing we have. I’d like to have more multidimensional progress. I’d like us to be going to Mars. I’d like us to be having cures for dementia. If all we have is A.I., I will take it.

. . .

And so maybe the problems are unsolvable, which is the pessimistic view. Maybe there is no cure for dementia at all, and it’s a deeply unsolvable problem. There’s no cure for mortality. Maybe it’s an unsolvable problem.

Or maybe it’s these cultural things. So it’s not the individually smart person, but it’s how this fits into our society. Do we tolerate heterodox smart people? Maybe you need heterodox smart people to do crazy experiments.

. . .

I had a conversation with Elon a few weeks ago about this. He said we’re going to have a billion humanoid robots in the U.S. in 10 years. And I said: Well, if that’s true, you don’t need to worry about the budget deficits because we’re going to have so much growth, the growth will take care of this. And then — well, he’s still worried about the budget deficits. This doesn’t prove that he doesn’t believe in the billion robots, but it suggests that maybe he hasn’t thought it through or that he doesn’t think it’s going to be as transformative economically, or that there are big error bars around it. But yeah, there’s some way in which these things are not quite thought through.

For the full interview, see:

Douthat, Ross. “Are We Dreaming Big Enough?” The New York Times, Sunday Opinion Section (Sunday, June 29, 2025): 9.

(Note: ellipses added.)

(Note: the online version of the interview has the date June 26, 2025, and has the title “Peter Thiel and the Antichrist.”)

Peter Thiel’s yearning many years ago for flying cars was quoted more recently in:

Lewis-Kraus, Gideon. “Flight of Fancy.” The New Yorker, April 22, 2024, 28-39.

Lucian L. Leape Was Willing to Take the Ill Will

In an earlier entry I presented Charlie Munger’s story where a hospital administrator had to be willing to absorb the ill will if he was to take the actions necessary to fix a badly malfunctioning department of the hospital. Another person willing to absorb the ill will in order to reform medicine was Lucian L. Leape, whose story is sketched in the passages quoted below.

(p. B21) Lucian L. Leape, a surgeon whose insights into medical mistakes in the 1990s gave rise to the field of patient safety, rankling much of the health care establishment in the process, died on Monday at his home in Lexington, Mass. He was 94.

. . .

In 1986, at age 56, Dr. Leape grew interested in health policy and spent a year at the RAND Corporation on a midcareer fellowship studying epidemiology, statistics and health policy.

Following his stint at RAND, he joined the team at Harvard conducting the Medical Practice Study. When Dr. Howard Hiatt, then the dean of the Harvard School of Public Health (now the Harvard T.H. Chan School of Public Health), offered Dr. Leape the opportunity to work on the study, “I accepted,” Dr. Leape wrote in his 2021 book, “Making Healthcare Safe: The Story of the Patient Safety Movement,” “not suspecting it would change my life.”

The most significant finding, Dr. Leape said in the 2015 interview, was that two-thirds of the injuries to patients were caused by errors that appeared to be preventable. “The implications were profound,” he said.

In 1994, Dr. Leape submitted a paper to The New England Journal of Medicine, laying out the extent to which preventable medical injury occurred and arguing for a shift of focus away from individuals and toward systems. But the paper was rejected. “I was told it didn’t meet their standards,” he recalled.

Dr. Leape sent the paper out again, this time to The Journal of the American Medical Association. Dr. George Lundberg, then the editor of JAMA, immediately recognized the importance of the topic, Dr. Leape said. “But he also knew it could offend many doctors. We didn’t talk about mistakes.”

Dr. Donald M. Berwick, president emeritus at the Institute for Healthcare Improvement in Boston and a longtime colleague of Dr. Leape’s, agreed. “To talk about error in medicine back then was considered rude,” he said in an interview in 2020. “Errors were what we call normalized. Bad things happen, and that’s just the way it is.”

“But then you had Lucian,” he added, “this quite different voice in the room saying, ‘No, this isn’t normal. And we can do something about it.’”

Dr. Leape’s paper, “Error in Medicine,” was the first major article on the topic in the general medical literature. The timing of publication, just before Christmas in 1994, Dr. Leape wrote in his 2021 book, was intentional. Dr. Lundberg knew it would receive little attention and therefore wouldn’t upset colleagues.

On Dec. 3, 1994, however, three weeks before the JAMA piece appeared, Betsy Lehman, a 39-year-old health care reporter for The Boston Globe, died after mistakenly receiving a fatal overdose of chemotherapy at the Dana-Farber Cancer Institute in Boston.

“Betsy’s death was a watershed event,” Dr. Leape said in a 2020 interview for a short documentary about Ms. Lehman.

The case drew national attention. An investigation into the death revealed that it wasn’t caused by one individual clinician, but by a series of errors involving multiple physicians and nurses who had misinterpreted a four-day regimen as a single dose, administering quadruple the prescribed amount.

The case made Dr. Leape’s point with tragic clarity: Ms. Lehman’s death, like so many others, resulted from a system that lacked sufficient safeguards to prevent the error.

. . .

Dr. Gawande said he believed it was the confidence Dr. Leape had acquired as a surgeon that girded him in the face of strong resistance from medical colleagues.

“He had enough arrogance to believe in himself and in what he was saying,” Dr. Gawande said. “He knew he was onto something important, and that he could bring the profession along, partly by goading the profession as much as anything.”

For the full obituary, see:

Katie Hafner. “Lucian L. Leape, 94, Who Put Patient Safety at Forefront, Is Dead.” The New York Times (Thursday, July 3, 2025): B21.

(Note: ellipses added.)

(Note: the online version of the obituary has the date July 1, 2025, and has the title “Lucian Leape, Whose Work Spurred Patient Safety in Medicine, Dies at 94.”)

Dr. Leape’s history of his efforts to increase healthcare safety can be found in:

Leape, Lucian L. Making Healthcare Safe: The Story of the Patient Safety Movement. Cham, Switzerland: Springer, 2021.

A.I. Only “Knows” What Has Been Published or Posted

A.I. “learns” by scouring language that has been published or posted. If outdated or never-true “facts” are posted on the web, A.I. may regurgitate them. It takes human eyes to check whether there really is a picnic table in a park.

(p. B1) Last week, I asked Google to help me plan my daughter’s birthday party by finding a park in Oakland, Calif., with picnic tables. The site generated a list of parks nearby, so I went to scout two of them out — only to find there were, in fact, no tables.

“I was just there,” I typed to Google. “I didn’t see wooden tables.”

Google acknowledged the mistake and produced another list, which again included one of the parks with no tables.

I repeated this experiment by asking Google to find an affordable carwash nearby. Google listed a service for $25, but when I arrived, a carwash cost $65.

I also asked Google to find a grocery store where I could buy an exotic pepper paste. Its list included a nearby Whole Foods, which didn’t carry the item.

For the full commentary, see:

Brian X. Chen. “Underneath a New Way to Search, A Web of Wins and Imperfections.” The New York Times (Tues., June 3, 2025): B1 & B4.

(Note: the online version of the commentary has the date May 29, 2025, and has the title “Google Introduced a New Way to Use Search. Proceed With Caution.”)

How Did Ed Smylie and His Team Create the Kludge That Saved the Crew of Apollo 13?

Gary Klein in Seeing What Others Don’t analyzed cases of innovation and sought their sources. One source he came up with was necessity. His compelling example was the firefighter Wag Dodge who, with maybe 60 seconds until he would be engulfed in flame, lit a match to the grass around him, and then lay down in the still-hot embers. The roaring fire bypassed the patch he had pre-burned, and his life was saved. The story is well told in Norman Maclean’s Young Men and Fire.

Pondering more cases of necessity might be useful to help us understand, and encourage, future innovation. One candidate might be the kludge that Ed Smylie and his engineers put together to save the Apollo 13 crew from suffocating after an explosion destroyed one of their spacecraft’s oxygen tanks.

Necessity may be part of it, but cannot be the whole story. Humanity needed to fly for thousands of years, but it took Wilbur Wright to make it happen. (This point is made in Kevin Ashton’s fine and fun How to Fly a Horse.)

I have ordered the book co-authored by Lovell, and mentioned in a passage quoted below, in case it contains insight on how the Apollo 13 kludge was devised.

(p. B11) Ed Smylie, the NASA official who led a team of engineers that cobbled together an apparatus made of cardboard, plastic bags and duct tape that saved the Apollo 13 crew in 1970 after an explosion crippled the spacecraft as it sped toward the moon, died on April 21 [2025] in Crossville, Tenn. He was 95.

. . .

Soft-spoken, with an accent that revealed his Mississippi upbringing, Mr. Smylie was relaxing at home in Houston on the evening of April 13 when Mr. Lovell radioed mission control with his famous (and frequently misquoted) line: “Uh, Houston, we’ve had a problem.”

An oxygen tank had exploded, crippling the spacecraft’s command module.

Mr. Smylie, . . ., saw the news on television and called the crew systems office, according to the 1994 book “Lost Moon,” by Mr. Lovell and the journalist Jeffrey Kluger. The desk operator said the astronauts were retreating to the lunar excursion module, which was supposed to shuttle two crew members to the moon.

“I’m coming in,” Mr. Smylie said.

Mr. Smylie knew there was a problem with this plan: The lunar module was equipped to safely handle air flow for only two astronauts. Three humans would generate lethal levels of carbon dioxide.

To survive, the astronauts would somehow need to refresh the canisters of lithium hydroxide that would absorb the poisonous gases in the lunar excursion module. There were extra canisters in the command module, but they were square; the lunar module ones were round.

“You can’t put a square peg in a round hole, and that’s what we had,” Mr. Smylie said in the documentary “XIII” (2021).

He and about 60 other engineers had less than two days to invent a solution using materials already onboard the spacecraft.

. . .

In reality, the engineers printed a supply list of the equipment that was onboard. Their ingenious solution: an adapter made of two lithium hydroxide canisters from the command module, plastic bags used for garments, cardboard from the cover of the flight plan, a spacesuit hose and a roll of gray duct tape.

“If you’re a Southern boy, if it moves and it’s not supposed to, you use duct tape,” Mr. Smylie said in the documentary. “That’s where we were. We had duct tape, and we had to tape it in a way that we could hook the environmental control system hose to the command module canister.”

Mission control commanders provided step-by-step instructions to the astronauts for locating materials and building the adapter.

. . .

The adapter worked. The astronauts were able to breathe safely in the lunar module for two days as they awaited the appropriate trajectory to fly the hobbled command module home.

. . .

Mr. Smylie always played down his ingenuity and his role in saving the Apollo 13 crew.

“It was pretty straightforward, even though we got a lot of publicity for it and Nixon even mentioned our names,” he said in the oral history. “I said a mechanical engineering sophomore in college could have come up with it.”

For the full obituary, see:

Michael S. Rosenwald. “Ed Smylie Dies at 95; His Team of Engineers Saved Apollo 13 Crew.” The New York Times (Tuesday, May 20, 2025): B11.

(Note: ellipses, and bracketed year, added.)

(Note: the online version of the obituary was updated May 18, 2025, and has the title “Ed Smylie, Who Saved the Apollo 13 Crew With Duct Tape, Dies at 95.”)

Klein’s book that I praise in my introductory comments is:

Klein, Gary A. Seeing What Others Don’t: The Remarkable Ways We Gain Insights. Philadelphia, PA: PublicAffairs, 2013.

Maclean’s book that I praise in my introductory comments is:

Maclean, Norman. Young Men and Fire. New ed., Chicago: University of Chicago Press, 2017.

Ashton’s book that I praise in my introductory comments is:

Ashton, Kevin. How to Fly a Horse: The Secret History of Creation, Invention, and Discovery. New York: Doubleday, 2015.

The book co-authored by Lovell and mentioned above is:

Lovell, Jim, and Jeffrey Kluger. Lost Moon: The Perilous Voyage of Apollo 13. Boston, MA: Houghton Mifflin, 1994.

The Newest A.I. “Reasoning Models Actually Hallucinate More Than Their Predecessors”

I attended an I.H.S. Symposium last week where one of my minor discoveries was that a wide range of intellectuals, regardless of location on the political spectrum, share a concern about the allegedly damaging labor market effects of A.I. As in much else, I am an outlier: I am not concerned about A.I.

But since so many are concerned, and believe A.I. undermines my case for a better labor market under innovative dynamism, I will continue to occasionally highlight articles that present the evidence and arguments that reassure me.

(p. B1) “Humanity is close to building digital superintelligence,” Altman declared in an essay this week, and this will lead to “whole classes of jobs going away” as well as “a new social contract.” Both will be consequences of AI-powered chatbots taking over all our white-collar jobs, while AI-powered robots assume the physical ones.

Before you get nervous about all the times you were rude to Alexa, know this: A growing cohort of researchers who build, study and use modern AI aren’t buying all that talk.

The title of a fresh paper from Apple says it all: “The Illusion of Thinking.” In it, a half-dozen top researchers probed reasoning models—large language models that “think” about problems longer, across many steps—from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these are capable of reasoning anywhere close to the level their makers claim.

. . .

(p. B4) Apple’s researchers found “fundamental limitations” in the models. When taking on tasks beyond a certain level of complexity, these AIs suffered “complete accuracy collapse.” Similarly, engineers at Salesforce AI Research concluded that their results “underscore a significant gap between current LLM capabilities and real-world enterprise demands.”

Importantly, the problems these state-of-the-art AIs couldn’t handle are logic puzzles that even a precocious child could solve, with a little instruction. What’s more, when you give these AIs that same kind of instruction, they can’t follow it.

. . .

Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016, argued in an essay that Apple’s paper, along with related work, exposes flaws in today’s reasoning models, suggesting they’re not the dawn of human-level ability but rather a dead end. “Part of the reason the Apple study landed so strongly is that Apple did it,” he says. “And I think they did it at a moment in time when people have finally started to understand this for themselves.”

In areas other than coding and mathematics, the latest models aren’t getting better at the rate that they once did. And the newest reasoning models actually hallucinate more than their predecessors.

For the full commentary, see:

Christopher Mims. “Keywords: Apple Calls Today’s AI ‘The Illusion of Thinking’.” The Wall Street Journal (Sat., June 14, 2025): B1 & B4.

(Note: ellipses added.)

(Note: the online version of the commentary has the date June 13, 2025, and has the title “Keywords: Why Superintelligent AI Isn’t Taking Over Anytime Soon.” In the original print and online versions, the word “more” appears in italics for emphasis.)

Sam Altman’s blog essay mentioned above is:

Altman, Sam. “The Gentle Singularity.” In Sam Altman blog, June 10, 2025, URL: https://blog.samaltman.com/the-gentle-singularity.

The Apple research article briefly summarized in a passage quoted above is:

Shojaee, Parshin, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar. “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models Via the Lens of Problem Complexity.” Apple Machine Learning Research, June 2025, URL: https://machinelearning.apple.com/research/illusion-of-thinking.

Trump Is a Change Agent Because He Can Take the Ill Will of the Stagnationists

In an earlier blog entry I pondered Charlie Munger’s sage analysis that agents of change must often be willing to take the “ill will” aimed at them by the stagnationists who benefit from stasis. The stagnationists may be corrupt, or incompetent, or simply lack the imagination or the energy to do something in a better way.

Agents of change are scarce because most of us care a lot about what other people think of us. We experience psychic stress if we are systematically stigmatized, or even just ignored. Donald Trump may be capable of making major changes because, through temperament or resolve, he has found a way to shut out the psychic stress, a way to take the ill will.

Kessler’s commentary, quoted below, was published early in the pandemic, on Feb. 10, 2020. At that point Kessler believed that Trump’s success with the economy would get Trump re-elected. But as the months of 2020 rolled on, the pandemic increasingly hurt Trump’s prospects; hence the source of the pandemic still matters a lot, as does whether the vaccine was intentionally delayed a few weeks so that it would be released just after the election.

A key question is whether Trump still has the core “agenda of tax cuts, deregulation and originalist judges” that Kessler believed was the Trump core agenda in 2016.

I hope yes, but fear no. In 2025 are tariffs and industrial policy part of the “distraction” (aka “MacGuffin”) Kessler posits, or are they part of Trump’s core agenda?

(p. A15) Is he a disease or a cure? Like him or hate him, there’s tons of spilled ink trying to assess President Trump’s governing style. To me, the key to understanding Trumpism is remembering why he was elected.

What do I mean? Voters chose Donald Trump as an antidote to the growing inflammation caused by the (OK, deep breath . . .) prosperity-crushing, speech-inhibiting, nanny state-building, carbon-obsessing, patriarchy-bashing, implicit bias-accusing, tokey-wokey, globalist, swamp-creature governing class—all perfectly embodied by the Democrats’ 2016 nominee. On taking office, Mr. Trump proceeded to hire smart people and create a massive diversion (tweets, border walls, tariffs) as a smokescreen to let them implement an agenda of tax cuts, deregulation and originalist judges.

Those reforms have left the market free to do its magic and got the economy grooving like it’s 1999. The daily Trump hurricane—like the commotion over the Chiefs from Kansas—makes the media focus on the all-powerful wizard while ignoring the policy makers behind the curtain.

Alfred Hitchcock called this kind of distraction a “MacGuffin”—something that moves the plot along and provides motivation for the characters, but is itself unimportant, insignificant or irrelevant. It can be a kind of sleight of hand, a distraction, and Mr. Trump uses his own public persona as a MacGuffin in precisely that way. The mobs decked in “Resist” jewelry fall for it every time.

For example, Sen. Bernie Sanders used his remarks during the Senate impeachment trial to point out that the media had documented some 16,200 alleged lies by President Trump. The MacGuffin worked! Mr. Sanders and his peers are focused on the president’s words, while most voters see the real plot unfolding in America—millions of jobs and rising wages.

The president’s success comes from his ability to shrug off critics.  . . .  Rather than cower at the criticism he faces from the mobs, he probably smirks and thinks to himself, “Yeah, I don’t believe in that” and tweets away.

That’s the only reaction that can withstand today’s far left, which has become increasingly self-righteous.

For the full commentary, see:

Andy Kessler. “President Donald J. MacGuffin.” The Wall Street Journal (Monday, February 10, 2020 [sic]): A17.

(Note: ellipses added.)

(Note: the online version of the commentary has the date February 9, 2020 [sic], and has the same title as the print version.)

Innovative Project Entrepreneur Mike Wood Helped Children Leapfrog the Reading Skills Taught at School

In my Openness book I argue that the kind of entrepreneurs who matter most in changing the world are what I call “project entrepreneurs,” those who are on a mission to bring their project into the world. Mike Wood, discussed in the obituary quoted below, was a project entrepreneur.

I sometimes wonder how much formal education would be desirable in a world, unlike our world, that lacked creeping credentialism. Samuel Smiles’s biography of George Stephenson says that he had zero formal education, but early on paid someone, out of his meagre wages from the mines, to teach him to read. After that, the inventor and innovative entrepreneur read prodigiously and became an exemplary autodidact.

Today, Stephenson could learn to read through phonics technology like the LeapFrog pads developed by Mike Wood.

[By the way, Mike Wood, like Danny Kahneman recently, committed suicide at Dignitas in Switzerland in anticipation of declining health, in Wood’s case Alzheimer’s. As a libertarian, I believe they had a right to do this, but were they right to exercise this right? Harvard psychology professor Daniel Gilbert cites (2006, pp. 166-168) research showing that people generally underestimate their resilience in the face of major health setbacks. They can often recalibrate to their new, more limited capabilities, continuing to find fulfilling challenges to overcome. If so, then maybe Wood and Kahneman were wrong to exercise their right to end their lives. (More than most of my claims, I very readily admit this one could be wrong.)]

(p. A21) Mike Wood was a young father when his toddler’s struggles to read led him to develop one of a generation’s most fondly remembered toys.

Mr. Wood’s 3-year-old son, Mat, knew the alphabet but couldn’t pronounce the letter sounds. A lawyer in San Francisco, Mr. Wood had a new parent’s anxiety that if his child lagged as a reader, he would forever struggle in life.

So on his own time, Mr. Wood developed the prototype of an electronic toy that played sounds when children squeezed plastic letters. He based the idea on greeting cards that played a tune when opened.

Mr. Wood went on to found LeapFrog Enterprises, which in 1999 introduced the LeapPad, a child’s computer tablet that was a kind of talking book.

The LeapPad was a runaway hit, the best-selling toy of the 2000 holiday season, and LeapFrog became one of the fastest-growing toy companies in history.

. . .

Former colleagues recalled Mr. Wood as a demanding entrepreneur who was driven by a true belief that technology could help what he called “the LeapFrog generation” gain an educational leg up.

He had “famously fluffy hair,” Chris D’Angelo, LeapFrog’s former executive director of entertainment, wrote of Mr. Wood on The Bloom Report, a toy industry news site. “When stressed, he’d unconsciously rub his head — and the higher the hair, the higher the stakes. We (quietly) called them ‘high-hair days.’ It was funny, but also telling. He felt everything deeply — our work, our mission, our audience.”

. . .

A shift in reading pedagogy in the 1990s toward phonics — helping early readers make a connection between letters and sounds — drove interest in LeapFrog’s products among parents and teachers.

. . .

In 2023, his daughter-in-law, Emily Wood, posted a TikTok video of Mr. Wood teaching her daughter to use a forerunner of the LeapPad. The video received 391,000 likes and thousands of comments.

“I owe him my entire childhood,” one viewer wrote. “I spent hours on my LeapFrog with my ‘Scooby-Doo’ and ‘Shrek’ books.”

“I sell books now because of him,” another viewer wrote.

“I’m learning disabled and have a stutter,” wrote a third. “This man helped me learn to speak.”

“I’m 25 and I loved my LeapFrog,” a fourth commented. “Coming from an immigrant family, reading made me have so much imagination. I never stopped reading.”

For the full obituary, see:

Trip Gabriel. “Mike Wood, 72, Dies; Taught a Generation With LeapFrog Toys.” The New York Times (Monday, April 21, 2025): A21.

(Note: ellipses added.)

(Note: the online version of the obituary has the date April 19, 2025, and has the title “Mike Wood, Whose LeapFrog Toys Taught a Generation, Dies at 72.”)

My book mentioned in my initial comments is:

Diamond, Arthur M., Jr. Openness to Creative Destruction: Sustaining Innovative Dynamism. New York: Oxford University Press, 2019.

Smiles’s biography of Stephenson is:

Smiles, Samuel. The Locomotive: George and Robert Stephenson. New and Revised ed., Lives of the Engineers. London: John Murray, Albemarle Street, 1879.

Daniel Gilbert’s book that I mention in my opening comments is:

Gilbert, Daniel. Stumbling on Happiness. New York: Alfred A. Knopf, 2006.

“A.I.s Are Overly Complicated, Patched-Together Rube Goldberg Machines Full of Ad-Hoc Solutions”

A.I. can be a useful tool for searching and summarizing the current state of consensus knowledge. But I am highly dubious that it will ever be able to make the breakthrough leaps that some humans are sometimes able to make. And I am somewhat dubious that it will ever be able to make the resilient pivots that all of us must sometimes make in the face of new and unexpected challenges.

(p. B2) In a series of recent essays, [Melanie] Mitchell argued that a growing body of work shows that it seems possible models develop gigantic “bags of heuristics,” rather than create more efficient mental models of situations and then reasoning through the tasks at hand. (“Heuristic” is a fancy word for a problem-solving shortcut.)

When Keyon Vafa, an AI researcher at Harvard University, first heard the “bag of heuristics” theory, “I feel like it unlocked something for me,” he says. “This is exactly the thing that we’re trying to describe.”

Vafa’s own research was an effort to see what kind of mental map an AI builds when it’s trained on millions of turn-by-turn directions like what you would see on Google Maps. Vafa and his colleagues used as source material Manhattan’s dense network of streets and avenues.

The result did not look anything like a street map of Manhattan. Close inspection revealed the AI had inferred all kinds of impossible maneuvers—routes that leapt over Central Park, or traveled diagonally for many blocks. Yet the resulting model managed to give usable turn-by-turn directions between any two points in the borough with 99% accuracy.

Even though its topsy-turvy map would drive any motorist mad, the model had essentially learned separate rules for navigating in a multitude of situations, from every possible starting point, Vafa says.

The vast “brains” of AIs, paired with unprecedented processing power, allow them to learn how to solve problems in a messy way which would be impossible for a person.

. . .

. . ., today’s AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions for answering our prompts. Understanding that these systems are long lists of cobbled-together rules of thumb could go a long way to explaining why they struggle when they’re asked to do things even a little bit outside their training, says Vafa. When his team blocked just 1% of the virtual Manhattan’s roads, forcing the AI to navigate around detours, its performance plummeted.

This illustrates a big difference between today’s AIs and people, he adds. A person might not be able to recite turn-by-turn directions around New York City with 99% accuracy, but they’d be mentally flexible enough to avoid a bit of roadwork.

For the full commentary, see:

Christopher Mims. “We Now Know How AI ‘Thinks.’ It Isn’t Thinking at All.” The Wall Street Journal (Saturday, April 26, 2025): B2.

(Note: ellipses added.)

(Note: the online version of the commentary has the date April 25, 2025, and has the title “We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All.”)

A conference draft of the paper that Vafa co-authored on A.I.’s mental map of Manhattan is:

Vafa, Keyon, Justin Y. Chen, Ashesh Rambachan, Jon Kleinberg, and Sendhil Mullainathan. “Evaluating the World Model Implicit in a Generative Model.” In 38th Conference on Neural Information Processing Systems (NeurIPS). Vancouver, BC, Canada, Dec. 2024.
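
A toy example of my own (not from the Vafa paper) may make the contrast concrete: a navigator that has merely memorized routes, like a “bag of heuristics,” breaks when a single street closes, while a navigator that searches an actual map of the roads simply reroutes:

```python
from collections import deque

# A toy road network (not Manhattan): intersections A-F joined by streets.
ROADS = {
    "A": ["B", "D"],
    "B": ["A", "C"],
    "C": ["B", "F"],
    "D": ["A", "E"],
    "E": ["D", "F"],
    "F": ["C", "E"],
}

def bfs_route(roads, start, goal):
    """A 'world model' navigator: it searches the actual map, so it can reroute."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A 'bag of heuristics' navigator: it has merely memorized one route per
# (start, goal) pair, with no underlying map to fall back on.
MEMORIZED = {("A", "F"): ["A", "B", "C", "F"]}

def memorized_route(roads, start, goal):
    route = MEMORIZED.get((start, goal))
    if route is None:
        return None
    # The stored route only works if every street in it is still open.
    still_open = all(b in roads[a] for a, b in zip(route, route[1:]))
    return route if still_open else None

print(memorized_route(ROADS, "A", "F"))  # ['A', 'B', 'C', 'F']
print(bfs_route(ROADS, "A", "F"))        # ['A', 'B', 'C', 'F']

# Close one street (B-C), the equivalent of blocking a small share of roads.
ROADS["B"].remove("C")
ROADS["C"].remove("B")

print(memorized_route(ROADS, "A", "F"))  # None: the memorized rule breaks
print(bfs_route(ROADS, "A", "F"))        # ['A', 'D', 'E', 'F']: it reroutes
```

The memorized navigator gives perfect directions until the detour appears; the map-searching navigator is the mentally flexible motorist who avoids a bit of roadwork.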

Muriel Bristol Was Allowed to Act on What She Knew but Was Unable to Prove or Explain

Muriel Bristol knew that tea tasted better when the milk was poured in first than when it was poured in after the tea. She knew it but couldn’t prove it and didn’t know why it was true. The world is better when more of us, more often, can act on what we know but can neither prove nor explain. Too often regulations restrict the actions of entrepreneurs to what they can prove and explain, e.g., in the firing of employees.

This slows innovation and reduces efficiency (not to mention freedom).

(p. C8) [Adam] Kucharski, a mathematically trained epidemiologist, says that the rigor and purity of mathematics has imbued it with extraordinary rhetorical power. “In an uncertain world, it is reassuring to think there is at least one field that can provide definitive answers,” he writes. Yet he adds that certainty can sometimes be an illusion. “Even mathematical notions of proof” are “not always as robust and politics-free as they might seem.”

. . .

. . ., proving what is “obvious and simple” isn’t always easy. Kucharski offers the delightful example of Muriel Bristol, a scientist who always put the milk in her cup before pouring her tea, because she insisted it tasted better. In the 1920s, a skeptical statistician designed a blind taste test to see if Bristol could distinguish between cups of milk-then-tea and cups of tea-then-milk. Bristol got all of them right. In 2008, the Royal Society of Chemistry reported that when milk is poured into hot tea, “individual drops separate from the bulk of the milk” and allow “significant denaturation to occur.” The result is a burnt flavor. Eighty years after Bristol was statistically vindicated, she was chemically vindicated too.

For the full review, see:

Jennifer Szalai. “Proving It Doesn’t Necessarily Make It True.” The New York Times (Saturday, May 3, 2025): C8.

(Note: ellipses, and bracketed name, added.)

(Note: the online version of the review has the date April 30, 2025, and has the title “Just Because You Can Prove It Doesn’t Make It True.”)

The book under review is:

Kucharski, Adam. Proof: The Art and Science of Certainty. New York: Basic Books, 2025.
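
For the statistically curious, here is a minimal sketch of why Bristol’s perfect score was so convincing. The skeptical statistician was R. A. Fisher, and the standard account of his design (eight cups, four prepared each way, with the taster told the split) is assumed here, since the review quoted above gives no numbers:

```python
from math import comb

# Assumed design (the standard account of Fisher's experiment; the review
# quoted above gives no numbers): 8 cups, 4 milk-first and 4 tea-first,
# with the taster told there are 4 of each.
cups = 8
milk_first = 4

# If she were purely guessing, every way of labeling 4 of the 8 cups as
# "milk first" would be equally likely, and only one labeling is exactly right.
ways_to_guess = comb(cups, milk_first)      # C(8, 4) = 70
p_all_correct_by_chance = 1 / ways_to_guess

print(f"Ways to assign the labels: {ways_to_guess}")
print(f"Probability of a perfect score by luck alone: "
      f"{p_all_correct_by_chance:.4f} (1 in {ways_to_guess})")
```

A 1-in-70 chance of a perfect score by luck is small enough that Fisher could treat Bristol’s result as evidence that she really could taste the difference, decades before the chemists explained why.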