“Scientific Knowledge Can Lie Beyond Language”

By Arthur M. Diamond, Jr.
First posted to The Institute of Art and Ideas website on Weds., March 25, 2026

An experienced nurse in the neonatal intensive care unit is mostly focused on the infant who is her main responsibility. But she notices that a nearby infant is cycling through minor changes in skin color. That infant’s primary nurse sees it too. The infant then turns blue-black. The experienced nurse knows it is pneumopericardium, where air pressure around the heart keeps the heart from sending blood into the infant’s body. She knows it because she had been the nurse for an infant who died of pneumopericardium. The heart monitor misleadingly seems to show that the heart is still beating, so the infant’s primary nurse thinks the problem is a collapsed lung. As the chief doctor arrives, the experienced nurse “slaps a syringe in his hand” and tells him to “stick the heart” to release the air. An x-ray tech confirms the diagnosis, the doctor acts, and the infant lives.

The experienced nurse was out of line. The infant who lived was not her responsibility, and it was not her job to tell the chief doctor what to do. She could have been punished, but she took a chance and acted on intuitive knowledge that she could not immediately articulate. Her intuition proved correct, and her acting on it was literally a matter of life and death. But we set up barriers to discourage ourselves and others from acting on our intuition. We establish regulations, credentials, protocols, manuals.

___

Breakthrough innovative entrepreneurs often have limited formal theoretical knowledge, but high levels of informal unarticulated knowledge.

___

Dr. Min Chiu Li was an oncologist at the National Cancer Institute (NCI) in the US in the early days of administering methotrexate chemotherapy to women who had choriocarcinoma. The NCI protocol mandated that when the visible symptoms of the cancer were gone, he should stop chemotherapy. But he had an intuition that small amounts of cancer still lurked after the visible symptoms were gone. So he violated the protocol and gave his patients a longer course of chemotherapy. The administrators at the NCI fired Min Chiu Li for violating the protocol, but later were surprised to observe that the patients treated according to the protocol were dead, while the patients treated by Min Chiu Li were alive.

In my book Openness to Creative Destruction I argue that breakthrough innovative entrepreneurs often have limited formal theoretical knowledge, but high levels of informal unarticulated knowledge. Henry Ford, Bill Gates, and Steve Jobs did not graduate from college. What is true of innovative entrepreneurs is also often true of innovative scientists. When James Watson and Francis Crick had lunch with famous biochemist Erwin Chargaff, Crick could not remember the well-known chemical details of the four bases of DNA. Chargaff dismissed the pair with contempt. Watson and Crick did not excel in the memorization of theory but had intuition that allowed them to see the double-helix structure of DNA. AI expert Melanie Mitchell and cognitive psychologist Gary Klein agree that we have more unarticulated knowledge than articulated knowledge.

But we too seldom ask: how useful is it? An even better question: how useful could it be if we sought to make good use of it rather than to ignore or block it? Unarticulated knowledge deserves deeper study, and Klein is one of those who have made a start. Over his career he has modified his taxonomy of the types of unarticulated knowledge. What I am calling “unarticulated knowledge” he calls “tacit knowledge,” a label I prefer to reserve for the kind of muscle-memory bike-riding example the phrase’s originator, Michael Polanyi, made famous. In one of his later efforts, Klein distinguishes five types of unarticulated knowledge: Perceptual, Conceptual, Embodied, Social, and Metacognitive. The type I am most concerned with in this article is the Conceptual, within which he includes: “pattern recognition; mental models; expectancies; mindsets; noticing the absence of expected events; imagining antecedents and anticipating consequences; seeing affordances.”

The size and importance of unarticulated knowledge have implications for current worries that the growth of AI will create widespread job loss. If workers’ productivity depends importantly on their unarticulated knowledge, and if AI models are trained solely on databases of articulated knowledge, then there are built-in limits on the extent to which AI can replace humans in the labor market.

The level of regulations in the US has steadily increased over many decades, at the same time that the number of breakthrough innovations has fallen (see also Graeber 2012; Huebner 2005). We may be wrong to rely so much on regulations, credentials, protocols, and manuals, but we do not do so out of simple stupidity or evil intent. We do so for several plausible reasons.

One reason is that unarticulated knowledge is often called (including by Klein and me) “intuition,” and we associate intuition with mysticism, knowing that mystics have often made predictions that proved false. We also know that our intuition is sometimes systematically biased in a variety of ways. Daniel Kahneman gives many examples in his Thinking, Fast and Slow, including the anchoring effect, confirmation bias, and loss aversion.

But Klein thinks we sell ourselves short if we dismiss intuition. The intuition that he defends is based on experienced patterns, not mystical epiphany. This kind of intuition is on solid ground partly because it often can be articulated when we have enough time to do so, and when it is worth the time to do so.

In a life-and-death case, Lieutenant Commander Michael Riley aboard HMS Gloucester had about 90 seconds to decide whether the object coming toward the ship on the radar was a friendly American plane or a hostile Silkworm missile. Riley was sure the object was hostile, and at the last second shot it down. He could not explain how he had known, or why he was sure. By asking a series of shrewd probing questions, Klein teased out how Riley had known that the blip was hostile. Although the plane and the Silkworm flew at different altitudes, the radar did not directly report altitude. But experienced and focused users of the radar could infer altitude from the distance from the shore at which a blip first became visible. In the moment, Riley did not have the time to articulate the unarticulated, but later Klein, with Riley’s help, proved that it could be done.

Another reason we rely so much on regulations, credentials, protocols, and manuals is that we worry that we have no good way of judging other people’s claims to have unarticulated knowledge. So we worry that the unscrupulous might take advantage of us. This worry often arises in situations subject to what economists call “the principal-agent problem”: a principal pays an agent to do a task, and the agent takes the money but shirks the task.

The principal-agent problem often exists even when we are dealing with articulated knowledge. An increasing number of scientific journal articles and grant proposals are fraudulent. The journals and the grant agencies are paying (in terms of resume entries and money grants) for bogus research. A prominent sad example is Alzheimer’s research. Charles Piller, a journalist at the distinguished journal Science, has expanded his exposé articles into the book Doctored, documenting that much of the leading research has been fraudulent, helping to explain why progress against this major disease has been so limited. The victims include first and foremost those suffering from Alzheimer’s, but also the taxpayers who fund government research grants, and the Alzheimer’s researchers whose honest but modest results have been rejected for publication and grants because they falsely seem inferior to the fraudulent results.

So we guard against unarticulated knowledge out of a worry: if we can be so extensively defrauded when dealing with claims of articulated knowledge, how much more extensively will we be defrauded if we do not protect ourselves against claims of unarticulated knowledge?

The principal-agent problem is even more severe in common situations where the principal is acting as a fiduciary for others. So a government grant-giver has a moral duty to act prudently since he is acting as a fiduciary for the taxpayer. And a venture-capital fund investor has a moral duty to act prudently since he is acting as a fiduciary for the investors in his fund.

___

We should seek opportunities to fund on the basis of performance, not based on committee evaluation of written proposals.

___

This contrasts with angel investors, who invest their own money and so can morally take greater risks based on more tenuous hunches. When the Omaha billionaire Walter Scott spoke to one of my classes, I asked if he had been aware of the technological concerns George Gilder had raised about Level 3, the fiber-optic network firm in which Scott had heavily invested. His somewhat gruff response was that he didn’t know technology, but he did know Jim Crowe, the founder of Level 3. Scott was spending his own money, so he was not violating any fiduciary responsibility in mistakenly investing in Level 3.

When the principal and the agent are the same person, the principal-agent problem disappears. When the principal is spending their own money, the principal-agent problem is at least mitigated.

We can avoid the principal-agent problem by making it easier for entrepreneurs and scientists to self-fund their ventures and research. For entrepreneurs this can be done by letting them keep the funds that they earn through successful entrepreneurship. Those who have given us the fullest proof of the value of their innovations by succeeding in the marketplace are allowed to keep the wealth they thereby earn, so they can try again. These are the serial innovative entrepreneurs like Commodore Vanderbilt, Steve Jobs, and Elon Musk. (The builders of a new computer in Tracy Kidder’s The Soul of a New Machine compared what they were doing to the game of pinball, where the reward for doing it well is the chance to do it again.)

New York Times financial columnist Andrew Ross Sorkin wrote a column criticizing Steve Jobs for not signing the Giving Pledge, organized by Bill Gates and Warren Buffett, which commits signatories to give away a large part of their wealth. Jobs was famously known for his intuition about which new products would be “insanely great.” By retaining wealth from previous successes, he could quickly pivot to the next “insanely great” product as a new idea emerged, without having to articulate and sell the idea to a board of directors or to venture capitalists or to Wall Street. So we should encourage successful innovative entrepreneurs to reject the advice of Andrew Ross Sorkin and instead hold onto their wealth. And we should oppose legislation proposed in the US Senate to tax all substantial wealth, including that of deserving serial innovative entrepreneurs.

If a successful innovative entrepreneur runs out of new ideas himself, then rather than use his wealth for general charity, he should try to find and invest in other would-be innovative entrepreneurs who share the traits that enabled his own success. (PayPal entrepreneur Peter Thiel and Netscape entrepreneur Marc Andreessen are following this advice.)

We should seek opportunities to fund on the basis of performance, not on committee evaluation of written proposals. George Stephenson had no formal education and was not very articulate. He could not give a good explanation of why the safety lamp he invented would prevent miners from dying in gas explosions. But he proved it by entering a mine with the lamp and walking toward a chamber known to contain gas. Later and more famously, Stephenson’s Rocket locomotive was not the sleekest-looking entry in the Rainhill Trials contest, and Stephenson was not the most articulate defender of his entry, but unlike the other locomotives, which in one way or another broke down, the Rocket kept chugging along. DARPA is one of the more successful government funders of new technology. It often funds through contests. The X-prizes, founded by Peter Diamandis, are a private-sector effort to fund based on performance.

To reduce the principal-agent problem in science, we should be more open to citizen scientists self-funding their own research, as was commonly done in an earlier period of science, and as has recently been done by neuroscientist Jeff Hawkins, who earned his wealth as the entrepreneur who developed the successful PalmPilot personal digital assistant. The motto of the first scientific society, the Royal Society of London, was Nullius in verba (take no one’s word for it), meaning that anyone who was willing to show the evidence for their findings could participate in science. Make citizen science respectable again. Even today, not all successful innovative scientists rise through the Ivy League or through Oxford and Cambridge.

We should also experiment to find better ways to fund science where self-funding is not possible. We should consider Robin Hanson’s institutional innovation of a betting market in which would-be scientists could bet on scientific propositions. Besides finding ways for would-be scientists to self-fund, we should find ways to reduce the amount of funds needed to participate. Universities could be made more efficient. The costs of entry to doing science in some disciplines are already low; citizen scientists make important contributions to astronomy, archeology, and botany. And the costs of contributing to science in other areas should be lowered by paring back regulations.

More broadly we can encourage managers at all levels to give decision rights to their employees. Assign them domains of action where they will not be micro-managed, where they can be alert to patterns and act on the patterns they observe, where they can make use of their unarticulated intuition. Within those domains the employee is not second-guessed by a micro-managing boss or a detailed operational manual.

BIBLIOGRAPHY (not posted in IAI online version):

Barber, Charles. In the Blood: How Two Outsiders Solved a Centuries-Old Medical Mystery and Took on the U.S. Army. New York: Grand Central Publishing, 2023.

Christensen, Clayton M., and Henry J. Eyring. The Innovative University: Changing the DNA of Higher Education from the Inside Out. San Francisco, CA: Jossey-Bass, 2011.

Cowen, Tyler. The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better. New York: Dutton Adult, 2011.

DeVita, Vincent T., and Elizabeth DeVita-Raeburn. The Death of Cancer: After Fifty Years on the Front Lines of Medicine, a Pioneering Oncologist Reveals Why the War on Cancer Is Winnable–and How We Can Get There. New York: Sarah Crichton Books, 2015.

Diamandis, Peter H., and Steven Kotler. Bold: How to Go Big, Create Wealth and Impact the World. New York: Simon & Schuster, 2015.

Diamond, Arthur M., Jr. “How to Cure Cancer: Unbinding Entrepreneurs in Medicine.” Journal of Entrepreneurship and Public Policy 7, no. 1 (2018): 62–73.

Diamond, Arthur M., Jr. Openness to Creative Destruction: Sustaining Innovative Dynamism. New York: Oxford University Press, 2019.

Gigerenzer, Gerd. Gut Feelings: The Intelligence of the Unconscious. New York: Penguin Books, 2007.

Graeber, David. “Of Flying Cars and the Declining Rate of Profit.” The Baffler, no. 19 (2012). https://thebaffler.com/salvos/of-flying-cars-and-the-declining-rate-of-profit

Hanson, Robin. “Could Gambling Save Science? Encouraging an Honest Consensus.” Social Epistemology 9, no. 1 (Jan.-March 1995): 3–33.

Hawkins, Jeff. A Thousand Brains: A New Theory of Intelligence. New York: Basic Books, 2021.

Hawkins, Jeff, and Sandra Blakeslee. On Intelligence. New York: Times Books, 2004.

Huebner, Jonathan. “A Possible Declining Trend for Worldwide Innovation.” Technological Forecasting and Social Change 72, no. 8 (Oct. 2005): 980–86.

Jena, Anupam B., and Christopher M. Worsham. Random Acts of Medicine: The Hidden Forces That Sway Doctors, Impact Patients, and Shape Our Health. New York: Doubleday, 2023.

Jenkins, Tania M. Doctors’ Orders: The Making of Status Hierarchies in an Elite Profession. New York: Columbia University Press, 2020.

Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

Kidder, Tracy. The Soul of a New Machine. 1st ed. Boston: Little, Brown and Co., 1981.

Klein, Gary A. Sources of Power: How People Make Decisions. 20th Anniversary ed. Cambridge, MA: The MIT Press, [1997] 2017.

Klein, Gary A. “Unpacking Tacit Knowledge: Applying the Tacit Knowledge Concept More Effectively.” Psychology Today, 2023. https://www.psychologytoday.com/us/blog/seeing-what-others-dont/202307/unpacking-tacit-knowledge

Landes, David S. “Why Europe and the West? Why Not China?” Journal of Economic Perspectives 20, no. 2 (Spring 2006): 3–22.

McLaughlin, Patrick A., and Oliver Sherouse. “The Impact of Federal Regulation on the 50 States, 2016 Edition.” Arlington, VA: Mercatus Center, 2016.

Mitchell, Melanie. Melanie Mitchell on Artificial Intelligence. EconTalk, interviewed by Russ Roberts, Jan. 6, 2020. https://www.econtalk.org/melanie-mitchell-on-artificial-intelligence/

Park, Michael, Erin Leahey, and Russell J. Funk. “Papers and Patents Are Becoming Less Disruptive over Time.” Nature 613, no. 7942 (Jan. 2023): 138–44.

Piller, Charles. Doctored: Fraud, Arrogance, and Tragedy in the Quest to Cure Alzheimer’s. New York: Atria/One Signal Publishers, 2025.

Polanyi, Michael. Personal Knowledge: Towards a Post-Critical Philosophy. Chicago: University of Chicago Press, 1958.

Polanyi, Michael. The Tacit Dimension. Garden City, New York: Doubleday & Co., 1966.

Prentice, Claire. “Miracle at Coney Island: How a Sideshow Doctor Saved Thousands of Babies and Transformed American Medicine.” Kindle Single, 2016.

Richardson, Reese A. K., Spencer S. Hong, Jennifer A. Byrne, Thomas Stoeger, and Luís A. Nunes Amaral. “The Entities Enabling Scientific Fraud at Scale Are Large, Resilient, and Growing Rapidly.” Proceedings of the National Academy of Sciences 122, no. 32 (2025): e2420092122.

Rosen, William. The Most Powerful Idea in the World: A Story of Steam, Industry, and Invention. New York: Random House, 2010.

Smiles, Samuel. The Locomotive: George and Robert Stephenson. New and Revised ed, Lives of the Engineers. London: John Murray, Albemarle Street, 1879.

Sorkin, Andrew Ross. “Dealbook; the Mystery of Steve Jobs’s Public Giving.” The New York Times (Tues., Aug. 30, 2011): B1 & B4.

Watson, James D. The Double Helix: A Personal Account of the Discovery of the Structure of DNA. New York: Scribner Classics, [1968] 2011.

The article above was published behind a paywall on the web site of The Institute of Art and Ideas. I retain the copyright, so I am reposting the article here. I submitted a bibliography and internal parenthetical references, but following their usual formatting, they did not post those, but instead incorporated select web links to some of the sources. My submitted title was “Making the Most of Unarticulated Knowledge.” IAI did not like that title, so they chose: “Scientific Knowledge Can Lie Beyond Language.” I did not veto their title although I regretted that it neglected the practical implications of my article, which to me are as important as the scientific implications. The citation for the original posting of the article on IAI is:

Diamond, Arthur M. “Scientific Knowledge Can Lie Beyond Language.” Posted on March 25, 2026. The Institute of Art and Ideas. Available from https://iai.tv/articles/scientific-knowledge-can-lie-beyond-language-auid-3530.

Arthur Diamond’s “Scientific Knowledge Can Lie Beyond Language” Posted at The Institute of Art and Ideas Web Platform

My agreement with IAI allows me to separately post my article, which I plan to do in a future blog post. The title as it appears on the IAI platform was chosen by the IAI editors. I preferred a title that emphasized the implications of unarticulated knowledge for practice, not just for science. I had the right, if I objected strongly to their title, to veto it. I chose not to veto.

Frank Knight on the Leader of the V-Formation of Ducks

I write this on Thurs., Feb. 19, 2026. Yesterday evening, I was reading a section of Milton and Rose Friedman’s Free to Choose on the Negative Income Tax, as part of my revising a paper I have submitted to The Independent Review. As I was reading, I was surprised and numbly elated to serendipitously run across information that I had been seeking, off and on, literally for decades. Every so often I had occasion to tell a story that I was sure originated with Frank Knight. I wrote the script on Frank Knight for an audio series on Great Economists. (The current owners of the series refuse to pay me the royalties that I am owed, but that is another story.) So I thought I knew something about Knight, and I own many books and articles by him. Every once in a while I spent an hour or so looking for the quotation, always failing. I even emailed Ross Emmett, whom many view as the current leading expert on Knight. He said he knew nothing of the quote I sought.

Buddhists who are totally at peace do not carry around the annoyance of unanswered questions, so if they run across an answer, it means nothing to them. Maybe this helps explain what Pasteur meant when he lectured that “chance favors only the prepared mind” (1854). The prepared mind carries around unanswered questions, unresolved contradictions, flaws in the world that could use improving. Then that mind stays alert for answers to the questions, resolutions to the contradictions, fixes for the flaws. The mind that pulls us forward is not a mind at peace.

[As an addendum, my discovery of the quote in Milton and Rose Friedman’s most famous book, after many searches in much more obscure places, reminds me of what Gertrude Himmelfarb said in a lecture at the U. of Chicago when I was a graduate student many decades ago. She searched the dusty archives long and hard, but the material most useful for her book on Harriet Taylor’s influence on Mill’s On Liberty was hiding in plain sight in a volume written by F.A. Hayek on Mill’s correspondence with Taylor.]

Here, after decades of occasional search and constant alertness, is the testimony of Milton and Rose Friedman, two former students of Frank Knight, showing that my memory of the Frank Knight duck V-formation story was not a dream or hallucination:

Our great and revered teacher Frank H. Knight was fond of illustrating different forms of leadership with ducks that fly in a V with a leader in front. Every now and then, he would say, the ducks behind the leader would veer off in a different direction while the leader continued flying ahead. When the leader looked around and saw that no one was following, he would rush to get in front of the V again. That is one form of leadership—undoubtedly the most prevalent form in Washington.

The source of the Milton and Rose Friedman quote is:

Friedman, Milton, and Rose D. Friedman. Free to Choose: A Personal Statement. New York: Harcourt Brace Jovanovich, 1980.

The Himmelfarb book mentioned in my initial comments is:

Himmelfarb, Gertrude. On Liberty and Liberalism: The Case of John Stuart Mill. New York: Knopf, 1974.

The Hayek book mentioned in my initial comments is:

Hayek, F.A. John Stuart Mill and Harriet Taylor: Their Friendship and Subsequent Marriage. London: Routledge & Kegan Paul, 1951. [Some citations to the book have the word “Correspondence” substituted for “Friendship.”]


Entrepreneurs Make Leaps: A Critique of the Theory of the Adjacent Possible (TAP)

In my Openness book, I argue that the innovative entrepreneur is a key agent of the innovative dynamism that brings us the new goods and process innovations through which we flourish. The Theory of the Adjacent Possible (TAP), devised by Stuart Kauffman, Roger Koppl, and collaborators, and popularized by Steven Johnson, aims to “deflate” the innovative entrepreneur, arguing that technological progress is an inevitable result of a stochastic process. I have written an extended critique of TAP and have posted the latest version to the SSRN working paper archive. In some ways the working paper, especially the last half, can be viewed as further elaboration and illustration of some of the points made in Openness.

The citation for, and link to, my working paper is:

Diamond, Arthur M. “Entrepreneurs Make Leaps: A Critique of the Theory of the Adjacent Possible.” (Written Jan. 26, 2026; Posted Feb. 18, 2026). Available at SSRN: https://ssrn.com/abstract=6166326

My book mentioned in my initial comments is:

Diamond, Arthur M., Jr. Openness to Creative Destruction: Sustaining Innovative Dynamism. New York: Oxford University Press, 2019.


Large Randomized Controlled Trial Finds Little Benefit in Free Money to Poor, Undermining Case for Universal Basic Income (UBI)

A variety of arguments have been made in support of a Universal Basic Income (UBI). I am most interested in the argument that says that technology will destroy the jobs of the worst off, and so for them to survive society would be justified in giving them a basic income. I do not believe that in a free society technological progress will on balance destroy the jobs of the worst off. If innovative entrepreneurs are free to innovate, especially in labor markets, they will find ways to employ the worst off.

Others have argued that giving a basic income to the worst off will make them better parents, measurable by better child outcomes in language skills, behavior, and cognition. Several years ago these advocates set up a big, expensive randomized controlled trial to test their argument. The results? None of their hypotheses were supported. The passages quoted below are from a front-page New York Times article in which they express their surprise, and for some, their incredulity.

(p. A1) If the government wants poor children to thrive, it should give their parents money. That simple idea has propelled an avid movement to send low-income families regular payments with no strings attached.

Significant but indirect evidence has suggested that unconditional cash aid would help children flourish. But now a rigorous experiment, in a more direct test, found that years of monthly payments did nothing to boost children’s well-being, a result that defied researchers’ predictions and could weaken the case for income guarantees.

After four years of payments, children whose parents received $333 a month from the experiment fared no better than similar children without that help, the study found. They were no more likely to develop language skills, avoid behavioral problems or developmental delays, demonstrate executive function or exhibit brain activity associated with cognitive development.

“I was very surprised — we were all very surprised,” said Greg J. Duncan, an economist at the University of California, Irvine and one of six researchers who led the study, called Baby’s First Years. “The money did not (p. A15) make a difference.”

The findings could weaken the case for turning the child tax credit into an income guarantee, as the Democrats did briefly four years ago in a pandemic-era effort to fight child poverty.

. . .

Though an earlier paper showed promising activity on a related neurological measure in the high-cash infants, that trend did not endure. The new study detected “some evidence” of other differences in neurological activity between the two groups of children, but its significance was unclear.

While researchers publicized the earlier, more promising results, the follow-up study was released quietly and has received little attention. Several co-authors declined to comment on the results, saying that it was unclear why the payments had no effect and that the pattern could change as the children age.

For the full story see:

Jason DeParle. “Cash Stipends Did Not Benefit Needy Children.” The New York Times (Weds., July 30, 2025): A1 & A15.

(Note: ellipsis added.)

(Note: the online version of the story has the date July 28, 2025, and has the title “Study May Undercut Idea That Cash Payments to Poor Families Help Child Development.”)

The academic presentation of the research discussed above can be found in:

Noble, Kimberly, Greg Duncan, Katherine Magnuson, Lisa A. Gennetian, Hirokazu Yoshikawa, Nathan A. Fox, Sarah Halpern-Meekin, Sonya Troller-Renfree, Sangdo Han, Shannon Egan-Dailey, Timothy D. Nelson, Jennifer Mize Nelson, Sarah Black, Michael Georgieff, and Debra Karhson. “The Effect of a Monthly Unconditional Cash Transfer on Children’s Development at Four Years of Age: A Randomized Controlled Trial in the U.S.” National Bureau of Economic Research (NBER) Working Paper 33844, May 2025.

AI Cannot Know What People Think “At the Very Edge of Their Experience”

The passages quoted below mention “the advent of generative A.I.” From previous reading, I had the impression that “generative A.I.” meant A.I. that had reached human-level cognition. But when I looked up the meaning of the phrase, I found that it means A.I. that can generate new content. Then I smiled. I was at Wabash College as an undergraduate from 1971 to 1974 (I graduated in three years). Sometime during those years, Wabash acquired its first minicomputer, and I took a course in BASIC computer programming. I distinctly remember programming a template for a brief poem where at key locations I inserted a random word variable. Where the random word variable occurred, the program randomly selected from one of a number of rhyming words. So each time the program was run, a new rhyming poem would be “generated.” That was new content, and sometimes it was even amusing. But it wasn’t any good, and it did not have deep meaning, and if what it generated was true, it was only by accident. So I guess “the advent of generative A.I.” goes back at least to the early 1970s when Art Diamond messed around with a DEC.
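The template-and-random-word program described above can be sketched in a few lines of modern Python. This is a hypothetical reconstruction, not the original BASIC: the template, the word lists, and the function name are all illustrative.

```python
import random

# Hypothetical reconstruction of a 1970s-style "generative" poem program.
# Each slot in the fixed template is filled from a pool of rhyming words.
RHYMES_A = ["moon", "June", "tune", "soon"]
RHYMES_B = ["night", "light", "bright", "sight"]

TEMPLATE = (
    "I saw the {a1} one quiet {b1},\n"
    "It hummed a {a2} of silver {b2}."
)

def generate_poem(rng=random):
    """Fill the fixed template with randomly chosen rhyming words,
    so each run 'generates' a new (if shallow) two-line poem."""
    return TEMPLATE.format(
        a1=rng.choice(RHYMES_A),
        b1=rng.choice(RHYMES_B),
        a2=rng.choice(RHYMES_A),
        b2=rng.choice(RHYMES_B),
    )

if __name__ == "__main__":
    print(generate_poem())
```

Each call produces "new content" in exactly the sense at issue: the output varies from run to run, yet nothing behind it understands what it says.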

This is not the main point of the passages quoted below. The main point is that the frontiers of human thought are not on the internet, and so cannot be part of the training of A.I. So whatever A.I. can do, it can’t think at the human “edge.”

(p. B3) Dan Shipper, the founder of the media start-up Every, says he gets asked a lot whether he thinks robots will replace writers. He swears they won’t, at least not at his company.

. . .

Mr. Shipper argues that the advent of generative A.I. is merely the latest step in a centuries-long technological march that has brought writers closer to their own ideas. Along the way, most typesetters and scriveners have been erased. But the part of writing that most requires humans remains intact: a perspective and taste, and A.I. can help form both even though it doesn’t have either on its own, he said.

“One example of a thing that journalists do that language models cannot is come and have this conversation with me,” Mr. Shipper said. “You’re going out and talking to people every day at the very edge of their experience. That’s always changing. And language models just don’t have access to that, because it’s not on the internet.”

For the full story see:

Benjamin Mullin. “Will Writing Survive A.I.? A Start-Up Is Betting on It.” The New York Times (Mon., May 26, 2025): B3.

(Note: ellipsis added.)

(Note: the online version of the story has the date May 21, 2025, and has the title “Will Writing Survive A.I.? This Media Company Is Betting on It.”)

If AI Takes Some Jobs, New Human Jobs Will Be Created

In the passage quoted below, Atkinson makes a sound general case for optimism about the effects of AI on the labor market. I would add to that case that many currently overestimate the potential cognitive effectiveness of AI. Humans have a vast reservoir of unarticulated common-sense knowledge that is not accessible to AI training. In addition, AI cannot innovate at the frontiers of knowledge, which have not yet been posted to the internet.

(p. A15) AI doomsayers frequently succumb to what economists call the “lump of labor” fallacy: the idea that there is a limited amount of work to be done, and if a job is eliminated, it’s gone for good. This fails to account for second-order effects, whereby the saving from increased productivity is recycled back into the economy in the form of higher wages, higher profits and reduced prices. This creates new demand that in turn creates new jobs. Some of these are entirely new occupations, such as “content creator assistant,” but others are existing jobs that are in higher demand now that people have more money to spend—for example, personal trainers.

Suppose an insurance firm uses AI to handle many of the customer-service functions that humans used to perform. Assume the technology allows the firm to do the same amount of work with 50% less labor. Some workers would lose their jobs, but lower labor costs would decrease insurance premiums. Customers would then be able to spend less money on insurance and more on other things, such as vacations, restaurants or gym memberships.

In other words, the savings don’t get stuffed under a mattress; they get spent, thereby creating more jobs.

For the full commentary, see:

Robert D. Atkinson. “No, AI Robots Won’t Take All Our Jobs.” The Wall Street Journal (Fri., June 6, 2025): A15.

(Note: the online version of the commentary has the date June 5, 2025, and has the same title as the print version.)
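
The arithmetic behind Atkinson’s insurance example can be sketched with toy numbers (all figures below are invented for illustration; none come from the article):

```python
# Toy numbers illustrating the second-order-effects argument:
# productivity savings are passed through and then spent elsewhere.
premium = 1000.0      # a customer's annual insurance premium
labor_cost = 400.0    # customer-service labor embedded in that premium

# Assume AI lets the firm do the same work with 50% less labor, and
# that competition passes the saving through as a lower premium.
saving = 0.5 * labor_cost
new_premium = premium - saving

# The customer's total budget is unchanged, so the freed dollars
# shift to other goods and services (vacations, restaurants, gyms),
# creating demand -- and jobs -- there.
freed_spending = premium - new_premium

print(new_premium, freed_spending)  # 800.0 200.0
```

The point of the sketch is only that the saving does not vanish: whatever leaves the insurance line of the customer’s budget reappears as spending somewhere else.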

We Need to “Tolerate Heterodox Smart People” if We Want to Achieve Big Things

Peter Thiel is often quoted as having said many years ago that “We wanted flying cars, instead we got 140 characters” (as quoted in Lewis-Kraus 2024), a reference to the original limit on the length of a tweet on Twitter. The quotations below are all from the more recent Peter Thiel, in conversation with NYT columnist Ross Douthat. He still believes that we are not boldly pursuing big goals, the only exception being A.I. Is the constraint that big goals are impossible to achieve, or do we lack people smart enough or motivated enough to pursue them, or do we regulate motivated smart people into discouraged despair?

(p. 9) One question we can frame is: Just how big a thing do I think A.I. is? And my stupid answer is: It’s more than a nothing burger, and it’s less than the total transformation of our society. My place holder is that it’s roughly on the scale of the internet in the late ’90s. I’m not sure it’s enough to really end the stagnation. It might be enough to create some great companies. And the internet added maybe a few percentage points to the G.D.P., maybe 1 percent to G.D.P. growth every year for 10, 15 years. It added some to productivity. So that’s roughly my place holder for A.I.

It’s the only thing we have. It’s a little bit unhealthy that it’s so unbalanced. This is the only thing we have. I’d like to have more multidimensional progress. I’d like us to be going to Mars. I’d like us to be having cures for dementia. If all we have is A.I., I will take it.

. . .

And so maybe the problems are unsolvable, which is the pessimistic view. Maybe there is no cure for dementia at all, and it’s a deeply unsolvable problem. There’s no cure for mortality. Maybe it’s an unsolvable problem.

Or maybe it’s these cultural things. So it’s not the individually smart person, but it’s how this fits into our society. Do we tolerate heterodox smart people? Maybe you need heterodox smart people to do crazy experiments.

. . .

I had a conversation with Elon a few weeks ago about this. He said we’re going to have a billion humanoid robots in the U.S. in 10 years. And I said: Well, if that’s true, you don’t need to worry about the budget deficits because we’re going to have so much growth, the growth will take care of this. And then — well, he’s still worried about the budget deficits. This doesn’t prove that he doesn’t believe in the billion robots, but it suggests that maybe he hasn’t thought it through or that he doesn’t think it’s going to be as transformative economically, or that there are big error bars around it. But yeah, there’s some way in which these things are not quite thought through.

For the full interview, see:

Ross Douthat. “Are We Dreaming Big Enough?” The New York Times, Sunday Opinion Section (Sun., June 29, 2025): 9.

(Note: ellipses added.)

(Note: the online version of the interview has the date June 26, 2025, and has the title “Peter Thiel and the Antichrist.”)

Peter Thiel’s yearning many years ago for flying cars was quoted more recently in:

Lewis-Kraus, Gideon. “Flight of Fancy.” The New Yorker, April 22, 2024, 28-39.

Lucian L. Leape Was Willing to Take the Ill-Will

In an earlier entry I presented Charlie Munger’s story where a hospital administrator had to be willing to absorb the ill-will if he was to take the actions necessary to fix a badly malfunctioning department of the hospital. Another person willing to absorb the ill-will in order to reform medicine was Lucian L. Leape, whose story is sketched in the passages quoted below.

(p. B21) Lucian L. Leape, a surgeon whose insights into medical mistakes in the 1990s gave rise to the field of patient safety, rankling much of the health care establishment in the process, died on Monday at his home in Lexington, Mass. He was 94.

. . .

In 1986, at age 56, Dr. Leape grew interested in health policy and spent a year at the RAND Corporation on a midcareer fellowship studying epidemiology, statistics and health policy.

Following his stint at RAND, he joined the team at Harvard conducting the Medical Practice Study. When Dr. Howard Hiatt, then the dean of the Harvard School of Public Health (now the Harvard T.H. Chan School of Public Health), offered Dr. Leape the opportunity to work on the study, “I accepted,” Dr. Leape wrote in his 2021 book, “Making Healthcare Safe: The Story of the Patient Safety Movement,” “not suspecting it would change my life.”

The most significant finding, Dr. Leape said in the 2015 interview, was that two-thirds of the injuries to patients were caused by errors that appeared to be preventable. “The implications were profound,” he said.

In 1994, Dr. Leape submitted a paper to The New England Journal of Medicine, laying out the extent to which preventable medical injury occurred and arguing for a shift of focus away from individuals and toward systems. But the paper was rejected. “I was told it didn’t meet their standards,” he recalled.

Dr. Leape sent the paper out again, this time to The Journal of the American Medical Association. Dr. George Lundberg, then the editor of JAMA, immediately recognized the importance of the topic, Dr. Leape said. “But he also knew it could offend many doctors. We didn’t talk about mistakes.”

Dr. Donald M. Berwick, president emeritus at the Institute for Healthcare Improvement in Boston and a longtime colleague of Dr. Leape’s, agreed. “To talk about error in medicine back then was considered rude,” he said in an interview in 2020. “Errors were what we call normalized. Bad things happen, and that’s just the way it is.”

“But then you had Lucian,” he added, “this quite different voice in the room saying, ‘No, this isn’t normal. And we can do something about it.’”

Dr. Leape’s paper, “Error in Medicine,” was the first major article on the topic in the general medical literature. The timing of publication, just before Christmas in 1994, Dr. Leape wrote in his 2021 book, was intentional. Dr. Lundberg knew it would receive little attention and therefore wouldn’t upset colleagues.

On Dec. 3, 1994, however, three weeks before the JAMA piece appeared, Betsy Lehman, a 39-year-old health care reporter for The Boston Globe, died after mistakenly receiving a fatal overdose of chemotherapy at the Dana-Farber Cancer Institute in Boston.

“Betsy’s death was a watershed event,” Dr. Leape said in a 2020 interview for a short documentary about Ms. Lehman.

The case drew national attention. An investigation into the death revealed that it wasn’t caused by one individual clinician, but by a series of errors involving multiple physicians and nurses who had misinterpreted a four-day regimen as a single dose, administering quadruple the prescribed amount.

The case made Dr. Leape’s point with tragic clarity: Ms. Lehman’s death, like so many others, resulted from a system that lacked sufficient safeguards to prevent the error.

. . .

Dr. Gawande said he believed it was the confidence Dr. Leape had acquired as a surgeon that girded him in the face of strong resistance from medical colleagues.

“He had enough arrogance to believe in himself and in what he was saying,” Dr. Gawande said. “He knew he was onto something important, and that he could bring the profession along, partly by goading the profession as much as anything.”

For the full obituary, see:

Katie Hafner. “Lucian L. Leape, 94, Who Put Patient Safety at Forefront, Is Dead.” The New York Times (Thursday, July 3, 2025): B21.

(Note: ellipses added.)

(Note: the online version of the obituary has the date July 1, 2025, and has the title “Lucian Leape, Whose Work Spurred Patient Safety in Medicine, Dies at 94.”)

Dr. Leape’s history of his efforts to increase healthcare safety can be found in:

Leape, Lucian L. Making Healthcare Safe: The Story of the Patient Safety Movement. Cham, Switzerland: Springer, 2021.

A.I. Only “Knows” What Has Been Published or Posted

A.I. “learns” by scouring language that has been published or posted. If outdated or never-true “facts” are posted on the web, A.I. may regurgitate them. It takes human eyes to check whether there really is a picnic table in a park.

(p. B1) Last week, I asked Google to help me plan my daughter’s birthday party by finding a park in Oakland, Calif., with picnic tables. The site generated a list of parks nearby, so I went to scout two of them out — only to find there were, in fact, no tables.

“I was just there,” I typed to Google. “I didn’t see wooden tables.”

Google acknowledged the mistake and produced another list, which again included one of the parks with no tables.

I repeated this experiment by asking Google to find an affordable carwash nearby. Google listed a service for $25, but when I arrived, a carwash cost $65.

I also asked Google to find a grocery store where I could buy an exotic pepper paste. Its list included a nearby Whole Foods, which didn’t carry the item.

For the full commentary, see:

Brian X. Chen. “Underneath a New Way to Search, A Web of Wins and Imperfections.” The New York Times (Tues., June 3, 2025): B1 & B4.

(Note: the online version of the commentary has the date May 29, 2025, and has the title “Google Introduced a New Way to Use Search. Proceed With Caution.”)

How Did Ed Smylie and His Team Create the Kludge That Saved the Crew of Apollo 13?

Gary Klein in Seeing What Others Don’t analyzed cases of innovation, and sought their sources. One source he came up with was necessity. His compelling example was the firefighter Wag Dodge who, with maybe 60 seconds until he would be engulfed in flame, lit a match to the grass around him, and then laid down in the still-hot embers. The roaring fire bypassed the patch he pre-burned, and his life was saved. The story is well-told in Norman Maclean’s Young Men and Fire.

Pondering more cases of necessity might be useful to help us understand, and encourage, future innovation. One candidate might be the kludge that Ed Smylie and his engineers put together to save the Apollo 13 crew from suffocating after an explosion destroyed an oxygen tank in the spacecraft’s service module.

Necessity may be part of it, but cannot be the whole story. Humanity needed to fly for thousands of years, but it took Wilbur Wright to make it happen. (This point is made in Kevin Ashton’s fine and fun How to Fly a Horse.)

I have ordered the book co-authored by Lovell, and mentioned in a passage quoted below, in case it contains insight on how the Apollo 13 kludge was devised.

(p. B11) Ed Smylie, the NASA official who led a team of engineers that cobbled together an apparatus made of cardboard, plastic bags and duct tape that saved the Apollo 13 crew in 1970 after an explosion crippled the spacecraft as it sped toward the moon, died on April 21 [2025] in Crossville, Tenn. He was 95.

. . .

Soft-spoken, with an accent that revealed his Mississippi upbringing, Mr. Smylie was relaxing at home in Houston on the evening of April 13 when Mr. Lovell radioed mission control with his famous (and frequently misquoted) line: “Uh, Houston, we’ve had a problem.”

An oxygen tank had exploded, crippling the spacecraft’s command module.

Mr. Smylie, . . ., saw the news on television and called the crew systems office, according to the 1994 book “Lost Moon,” by Mr. Lovell and the journalist Jeffrey Kluger. The desk operator said the astronauts were retreating to the lunar excursion module, which was supposed to shuttle two crew members to the moon.

“I’m coming in,” Mr. Smylie said.

Mr. Smylie knew there was a problem with this plan: The lunar module was equipped to safely handle air flow for only two astronauts. Three humans would generate lethal levels of carbon dioxide.

To survive, the astronauts would somehow need to refresh the canisters of lithium hydroxide that would absorb the poisonous gases in the lunar excursion module. There were extra canisters in the command module, but they were square; the lunar module ones were round.

“You can’t put a square peg in a round hole, and that’s what we had,” Mr. Smylie said in the documentary “XIII” (2021).

He and about 60 other engineers had less than two days to invent a solution using materials already onboard the spacecraft.

. . .

In reality, the engineers printed a supply list of the equipment that was onboard. Their ingenious solution: an adapter made of two lithium hydroxide canisters from the command module, plastic bags used for garments, cardboard from the cover of the flight plan, a spacesuit hose and a roll of gray duct tape.

“If you’re a Southern boy, if it moves and it’s not supposed to, you use duct tape,” Mr. Smylie said in the documentary. “That’s where we were. We had duct tape, and we had to tape it in a way that we could hook the environmental control system hose to the command module canister.”

Mission control commanders provided step-by-step instructions to the astronauts for locating materials and building the adapter.

. . .

The adapter worked. The astronauts were able to breathe safely in the lunar module for two days as they awaited the appropriate trajectory to fly the hobbled command module home.

. . .

Mr. Smylie always played down his ingenuity and his role in saving the Apollo 13 crew.

“It was pretty straightforward, even though we got a lot of publicity for it and Nixon even mentioned our names,” he said in the oral history. “I said a mechanical engineering sophomore in college could have come up with it.”

For the full obituary, see:

Michael S. Rosenwald. “Ed Smylie Dies at 95; His Team of Engineers Saved Apollo 13 Crew.” The New York Times (Tuesday, May 20, 2025): B11.

(Note: ellipses, and bracketed year, added.)

(Note: the online version of the obituary was updated May 18, 2025, and has the title “Ed Smylie, Who Saved the Apollo 13 Crew With Duct Tape, Dies at 95.”)

Klein’s book that I praise in my introductory comments is:

Klein, Gary A. Seeing What Others Don’t: The Remarkable Ways We Gain Insights. Philadelphia, PA: PublicAffairs, 2013.

Maclean’s book that I praise in my introductory comments is:

Maclean, Norman. Young Men and Fire. New ed. Chicago: University of Chicago Press, 2017.

Ashton’s book that I praise in my introductory comments is:

Ashton, Kevin. How to Fly a Horse: The Secret History of Creation, Invention, and Discovery. New York: Doubleday, 2015.

The book co-authored by Lovell and mentioned above is:

Lovell, Jim, and Jeffrey Kluger. Lost Moon: The Perilous Voyage of Apollo 13. Boston, MA: Houghton Mifflin, 1994.