“Scientific Knowledge Can Lie Beyond Language”

By Arthur M. Diamond, Jr.
First Posted to The Institute of Art and Ideas web site on Weds., March 25, 2026

An experienced nurse in the neonatal intensive care unit is mostly focused on the infant who is her main responsibility. But she notices that a nearby infant is cycling through minor changes in skin color. That infant’s primary nurse sees it too. The infant then turns blue-black. The experienced nurse knows it is pneumopericardium, where air pressure around the heart keeps the heart from sending blood into the infant’s body. She knows it because she had been the nurse for an infant who died from pneumopericardium. The heart monitor misleadingly seems to show that the heart is still beating, so the infant’s primary nurse thinks the problem is a collapsed lung. As the chief doctor arrives, the experienced nurse “slaps a syringe in his hand” and tells him to “stick the heart” to release the air. An x-ray tech confirms the diagnosis, the doctor acts, and the infant lives.

The experienced nurse was out of line. The infant who lived was not her responsibility, and it was not her job to tell the chief doctor what to do. She could have been punished, but she took a chance and acted on intuitive knowledge that she could not immediately articulate. Her intuition proved correct, and her acting on it was literally a matter of life and death. Yet we set up barriers to discourage ourselves and others from acting on our intuition. We establish regulations, credentials, protocols, manuals.

___

Breakthrough innovative entrepreneurs often have limited formal theoretical knowledge, but high levels of informal unarticulated knowledge.

___

Dr. Min Chiu Li was an oncologist at the National Cancer Institute (NCI) in the US in the early days of administering methotrexate chemotherapy to women who had choriocarcinoma. The NCI protocol mandated that when the visible symptoms of the cancer were gone, he should stop chemotherapy. But he had an intuition that small amounts of cancer still lurked after the visible symptoms were gone. So he violated the protocol and gave his patients a longer course of chemotherapy. The administrators at the NCI fired Li for violating the protocol, but later were surprised to observe that the patients treated according to the protocol were dead, while the patients treated by Li were alive.

In my book Openness to Creative Destruction I argue that breakthrough innovative entrepreneurs often have limited formal theoretical knowledge, but high levels of informal unarticulated knowledge. Henry Ford, Bill Gates, and Steve Jobs did not graduate from college. What is true of innovative entrepreneurs is also often true of innovative scientists. When James Watson and Francis Crick had lunch with famous biochemist Erwin Chargaff, Crick could not remember the well-known chemical details of the four bases of DNA. Chargaff dismissed the pair with contempt. Watson and Crick did not excel in the memorization of theory but had intuition that allowed them to see the double-helix structure of DNA. AI expert Melanie Mitchell and cognitive psychologist Gary Klein agree that we have more unarticulated knowledge than articulated knowledge.

But we too seldom ask: how useful is it? An even better question: how useful could it be if we sought to make good use of it rather than to ignore or block it? Unarticulated knowledge deserves deeper study, and Klein is one of those who have made a start. Over his career he has modified his taxonomy of the types of unarticulated knowledge. What I am calling “unarticulated knowledge” he calls “tacit knowledge,” a label I prefer to reserve for the kind of muscle-memory bike-riding example that the phrase’s originator, Michael Polanyi, made famous. In one of his later efforts, Klein distinguishes five types of unarticulated knowledge: Perceptual, Conceptual, Embodied, Social, and Metacognitive. The type I am most concerned with in this article is the Conceptual, within which he includes: “pattern recognition; mental models; expectancies; mindsets; noticing the absence of expected events; imagining antecedents and anticipating consequences; seeing affordances.”

The size and importance of unarticulated knowledge has implications for current worries that the growth of AI will create widespread job loss. If workers’ productivity depends importantly on their unarticulated knowledge, and if AI models are trained solely on databases of articulated knowledge, then there are built-in limits on the extent to which AI can replace humans in the labor market.

The level of regulation in the US has steadily increased over many decades, at the same time that the number of breakthrough innovations has fallen (see also Graeber 2012; Huebner 2005). We may be wrong to rely so much on regulations, credentials, protocols, and manuals, but we do not do so out of simple stupidity or evil intent. We do so for several plausible reasons.

One reason is that unarticulated knowledge is often called (including by Klein and me) “intuition,” and we associate intuition with mysticism, knowing that mystics have often made predictions that proved false. We also know that our intuition is sometimes systematically biased in a variety of ways. Daniel Kahneman gives many examples in his Thinking, Fast and Slow, including the anchoring effect, confirmation bias, and loss aversion.

But Klein thinks we sell ourselves short if we dismiss intuition. The intuition that he defends is based on experienced patterns, not mystical epiphany. This kind of intuition is on solid ground partly because it often can be articulated when we have enough time to do so, and when it is worth the time to do so.

In a life-and-death case, Lieutenant Commander Michael Riley of the British HMS Gloucester had about 90 seconds to decide whether the object coming toward the ship on radar was a friendly American plane or a hostile Silkworm missile. Riley was sure the object was hostile, and at the last second shot it down. He could not explain how he had known, or why he was sure. By asking a series of shrewd, probing questions, Klein teased out how Riley had known that the blip was hostile. Although the plane and the Silkworm flew at different altitudes, the radar did not directly report altitude. But experienced and focused users of the radar could infer altitude from the distance from shore at which a blip first became visible. In this case Riley did not have the time to articulate the unarticulated, but later Klein, with Riley’s help, showed that it could be done.

Another reason we rely so much on regulations, credentials, protocols, and manuals is that we worry that we have no good way of judging other people’s claims to have unarticulated knowledge. So we worry that the unscrupulous might take advantage of us. This worry often arises in situations subject to what economists call “the principal-agent problem”: a principal pays an agent to do a task, and the agent takes the payment but does not do the task.

The principal-agent problem often exists even when we are dealing with articulated knowledge. An increasing number of scientific journal articles and grant proposals are fraudulent. The journals and the grant agencies are paying (in résumé entries and grant money) for bogus research. A prominent, sad example is Alzheimer’s research. Charles Piller, a journalist at the distinguished journal Science, has expanded his exposé articles into the book Doctored, documenting that much of the leading research has been fraudulent, which helps to explain why progress against this major disease has been so limited. The victims include first and foremost those suffering from Alzheimer’s, but also the taxpayers who fund government research grants, and the Alzheimer’s researchers whose honest but modest results have been rejected for publication and grants because they falsely seem inferior to the fraudulent results.

So we guard against unarticulated knowledge out of a worry: if we can be so extensively defrauded when dealing with claims of articulated knowledge, how much more extensively will we be defrauded if we do not protect ourselves against claims of unarticulated knowledge?

The principal-agent problem is even more severe in common situations where the principal is acting as a fiduciary for others. So a government grant-giver has a moral duty to act prudently since he is acting as a fiduciary for the taxpayer. And a venture-capital fund investor has a moral duty to act prudently since he is acting as a fiduciary for the investors in his fund.

___

We should seek opportunities to fund on the basis of performance, not based on committee evaluation of written proposals.

___

This contrasts with angel investors, who invest their own money and so can morally take greater risks based on more tenuous hunches. When the Omaha billionaire Walter Scott spoke to one of my classes, I asked whether he had been aware of the technological concerns George Gilder had raised about Level 3, the fiber-optic network firm in which Scott had heavily invested. His somewhat gruff response was that he didn’t know technology, but he did know Jim Crowe, the founder of Level 3. Scott was spending his own money, so he was not violating any fiduciary responsibility in mistakenly investing in Level 3.

When the principal and the agent are the same person, the principal-agent problem disappears. When the principal is spending their own money, the principal-agent problem is at least mitigated.

We can avoid the principal-agent problem by making it easier for entrepreneurs and scientists to self-fund their ventures and research. For entrepreneurs this can be done by letting them keep the funds they earn through successful entrepreneurship. Those who have given us the fullest proof of the value of their innovation, by succeeding in the marketplace, are allowed to keep the wealth they thereby earn, so they can try again. These are the serial innovative entrepreneurs like Commodore Vanderbilt, Steve Jobs, and Elon Musk. (The builders of a new computer in Tracy Kidder’s The Soul of a New Machine compared what they were doing to the game of pinball, where the reward for doing it well is the chance to do it again.)

New York Times financial columnist Andrew Ross Sorkin wrote a column criticizing Steve Jobs for not signing the Giving Pledge, organized by Bill Gates and Warren Buffett, which commits signers to give away a large part of their wealth. Jobs was famously known for his intuition about which new products would be “insanely great.” By retaining wealth from previous successes, he could quickly pivot to the next “insanely great” product as a new idea emerged, without having to articulate and sell the idea to a board of directors, to venture capitalists, or to Wall Street. So we should encourage successful innovative entrepreneurs to reject the advice of Andrew Ross Sorkin and instead hold onto their wealth. And we should oppose legislation proposed in the US Senate to tax all substantial wealth, including that of deserving serial innovative entrepreneurs.

If a successful innovative entrepreneur runs out of new ideas himself, then rather than use his wealth for general charity, he should try to find and invest in other would-be innovative entrepreneurs who share the traits that enabled his own success. (PayPal entrepreneur Peter Thiel and Netscape entrepreneur Marc Andreessen are following this advice.)

We should seek opportunities to fund on the basis of performance, not based on committee evaluation of written proposals. George Stephenson had no formal education and was not very articulate. He could not give a good explanation of why the safety lamp he invented would prevent miners from dying of gas explosions. But he proved it by entering a mine with the lamp and walking toward a chamber known to contain gas. Later and more famously, Stephenson’s Rocket locomotive was not the sleekest-looking in the Rainhill Trials contest, and Stephenson was not the most articulate defender of his entry, but unlike the other locomotives, which in one way or another broke down, the Rocket kept chugging along. DARPA is one of the more successful government funders of new technology; it often funds based on contests. The X Prizes, founded by Peter Diamandis, are a private-sector effort to fund based on performance.

To reduce the principal-agent problem in science, we should be more open to citizen scientists self-funding their own research, as was commonly done in an earlier period of science, and as has recently been done by neuroscientist Jeff Hawkins, who earned his wealth as the entrepreneur who developed the successful PalmPilot personal digital assistant. The motto of the first scientific society, the Royal Society of London, was Nullius in verba (take no one’s word for it), meaning that anyone who was willing to show the evidence for their findings could participate in science. Make citizen science respectable again. Even today, not all successful innovative scientists rise through the Ivy League or through Oxford and Cambridge.

We should also experiment to find better ways to fund science where self-funding is not possible. We should consider Robin Hanson’s institutional innovation of a betting market in which would-be scientists could bet on scientific propositions. Besides finding ways for would-be scientists to self-fund, we should find ways to reduce the amount of funds needed to participate. Universities could be made more efficient. The costs of entry to doing science in some disciplines are already low; citizen scientists make important contributions to astronomy, archeology, and botany. And the costs of contributing to science in other areas should be lowered by reducing regulations.

More broadly, we can encourage managers at all levels to give decision rights to their employees. Assign them domains of action where they will not be micro-managed, where they can be alert to patterns and act on the patterns they observe, where they can make use of their unarticulated intuition. Within those domains the employee is not second-guessed by a micro-managing boss or a detailed operations manual.

BIBLIOGRAPHY (not posted in IAI online version):

Barber, Charles. In the Blood: How Two Outsiders Solved a Centuries-Old Medical Mystery and Took on the US Army. New York: Grand Central Publishing, 2023.

Christensen, Clayton M., and Henry J. Eyring. The Innovative University: Changing the DNA of Higher Education from the Inside Out. San Francisco, CA: Jossey-Bass, 2011.

Cowen, Tyler. The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better. New York: Dutton Adult, 2011.

DeVita, Vincent T., and Elizabeth DeVita-Raeburn. The Death of Cancer: After Fifty Years on the Front Lines of Medicine, a Pioneering Oncologist Reveals Why the War on Cancer Is Winnable–and How We Can Get There. New York: Sarah Crichton Books, 2015.

Diamandis, Peter H., and Steven Kotler. Bold: How to Go Big, Create Wealth and Impact the World. New York: Simon & Schuster, 2015.

Diamond, Arthur M., Jr. “How to Cure Cancer: Unbinding Entrepreneurs in Medicine.” Journal of Entrepreneurship and Public Policy 7, no. 1 (2018): 62–73.

Diamond, Arthur M., Jr. Openness to Creative Destruction: Sustaining Innovative Dynamism. New York: Oxford University Press, 2019.

Gigerenzer, Gerd. Gut Feelings: The Intelligence of the Unconscious. New York: Penguin Books, 2007.

Graeber, David. “Of Flying Cars and the Declining Rate of Profit.” The Baffler, no. 19 (2012). https://thebaffler.com/salvos/of-flying-cars-and-the-declining-rate-of-profit

Hanson, Robin. “Could Gambling Save Science? Encouraging an Honest Consensus.” Social Epistemology 9, no. 1 (Jan.-March 1995): 3–33.

Hawkins, Jeff. A Thousand Brains: A New Theory of Intelligence. New York: Basic Books, 2021.

Hawkins, Jeff, and Sandra Blakeslee. On Intelligence. New York: Times Books, 2004.

Huebner, Jonathan. “A Possible Declining Trend for Worldwide Innovation.” Technological Forecasting and Social Change 72, no. 8 (Oct. 2005): 980–86.

Jena, Anupam B., and Christopher M. Worsham. Random Acts of Medicine: The Hidden Forces That Sway Doctors, Impact Patients, and Shape Our Health. New York: Doubleday, 2023.

Jenkins, Tania M. Doctors’ Orders: The Making of Status Hierarchies in an Elite Profession. New York: Columbia University Press, 2020.

Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

Kidder, Tracy. The Soul of a New Machine. 1st ed. Boston: Little, Brown and Co., 1981.

Klein, Gary A. Sources of Power: How People Make Decisions. 20th Anniversary ed. Cambridge, MA: The MIT Press, [1997] 2017.

Klein, Gary A. “Unpacking Tacit Knowledge; Applying the Tacit Knowledge Concept More Effectively.” Psychology Today, 2023. https://www.psychologytoday.com/us/blog/seeing-what-others-dont/202307/unpacking-tacit-knowledge

Landes, David S. “Why Europe and the West? Why Not China?” Journal of Economic Perspectives 20, no. 2 (Spring 2006): 3–22.

McLaughlin, Patrick A., and Oliver Sherouse. “The Impact of Federal Regulation on the 50 States, 2016 Edition.” Arlington, VA: Mercatus Center, 2016.

Mitchell, Melanie. Melanie Mitchell on Artificial Intelligence. EconTalk, interviewed by Russ Roberts, Jan. 6, 2020. https://www.econtalk.org/melanie-mitchell-on-artificial-intelligence/

Park, Michael, Erin Leahey, and Russell J. Funk. “Papers and Patents Are Becoming Less Disruptive over Time.” Nature 613, no. 7942 (Jan. 2023): 138–44.

Piller, Charles. Doctored: Fraud, Arrogance, and Tragedy in the Quest to Cure Alzheimer’s. New York: Atria/One Signal Publishers, 2025.

Polanyi, Michael. Personal Knowledge: Towards a Post-Critical Philosophy. Chicago: University of Chicago Press, 1958.

Polanyi, Michael. The Tacit Dimension. Garden City, New York: Doubleday & Co., 1966.

Prentice, Claire. “Miracle at Coney Island: How a Sideshow Doctor Saved Thousands of Babies and Transformed American Medicine.” Kindle Single, 2016.

Richardson, Reese A. K., Spencer S. Hong, Jennifer A. Byrne, Thomas Stoeger, and Luís A. Nunes Amaral. “The Entities Enabling Scientific Fraud at Scale Are Large, Resilient, and Growing Rapidly.” Proceedings of the National Academy of Sciences 122, no. 32 (2025): e2420092122.

Rosen, William. The Most Powerful Idea in the World: A Story of Steam, Industry, and Invention. New York: Random House, 2010.

Smiles, Samuel. The Locomotive: George and Robert Stephenson. New and Revised ed. Lives of the Engineers. London: John Murray, 1879.

Sorkin, Andrew Ross. “DealBook: The Mystery of Steve Jobs’s Public Giving.” The New York Times (Tues., Aug. 30, 2011): B1 & B4.

Watson, James D. The Double Helix: A Personal Account of the Discovery of the Structure of DNA. New York: Scribner Classics, [1968] 2011.

The article above was published behind a paywall on the web site of The Institute of Art and Ideas. I retain the copyright, so I am reposting the article here. I submitted a bibliography and internal parenthetical references, but, following their usual formatting, they did not post those, instead incorporating select web links to some of the sources. My submitted title was “Making the Most of Unarticulated Knowledge.” IAI did not like that title, so they chose “Scientific Knowledge Can Lie Beyond Language.” I did not veto their title, though I regretted that it neglected the practical implications of my article, which to me are as important as the scientific implications. The citation for the original posting of the article on IAI is:

Diamond, Arthur M. “Scientific Knowledge Can Lie Beyond Language.” Posted on March 25, 2026. The Institute of Art and Ideas. Available from https://iai.tv/articles/scientific-knowledge-can-lie-beyond-language-auid-3530.

Frank Knight on the Leader of the V-Formation of Ducks

I write this on Thurs., Feb. 19, 2026. Yesterday evening, I was reading a section of Milton and Rose Friedman’s Free to Choose on the Negative Income Tax, as part of revising a paper I have submitted to The Independent Review. As I was reading, I was surprised and elated to serendipitously run across information that I had been seeking, off and on, literally for decades. Every so often I had occasion to tell a story that I was sure originated with Frank Knight. I wrote the script on Frank Knight for an audio series on Great Economists. (The current owners of the series refuse to pay me the royalties that I am owed, but that is another story.) So I thought I knew something about Knight, and I own many books and articles by him. Every once in a while I spent an hour or so looking for the quotation, always failing. I even emailed Ross Emmett, whom many view as the current leading expert on Knight. He said he knew nothing of the quote I sought.

Buddhists who are totally at peace do not carry around the annoyance of unanswered questions, so if they run across an answer, it means nothing to them. Maybe this helps us understand what Pasteur meant when he said in an 1854 lecture that “chance favors only the prepared mind.” The prepared mind carries around unanswered questions, unresolved contradictions, flaws in the world that could use improving. Then that mind stays alert for answers to the questions, resolutions to the contradictions, fixes for the flaws. The mind that pulls us forward is not a mind at peace.

[As an addendum, my discovery of the quote in Milton and Rose Friedman’s most famous book, after many searches in much more obscure places, reminds me of what Gertrude Himmelfarb said in a lecture at the U. of Chicago when I was a graduate student many decades ago. She had searched the dusty archives long and hard, but the material most useful for her book on Harriet Taylor’s influence on Mill’s On Liberty was hiding in plain sight in a volume F.A. Hayek had written on Mill’s correspondence with Taylor.]

Here, after decades of occasional search and constant alertness, is the testimony of Milton and Rose Friedman, two former students of Frank Knight, showing that my memory of the Frank Knight duck V-formation story was not a dream or hallucination:

Our great and revered teacher Frank H. Knight was fond of illustrating different forms of leadership with ducks that fly in a V with a leader in front. Every now and then, he would say, the ducks behind the leader would veer off in a different direction while the leader continued flying ahead. When the leader looked around and saw that no one was following, he would rush to get in front of the V again. That is one form of leadership—undoubtedly the most prevalent form in Washington.

Source of the Milton and Rose Friedman quote is:

Friedman, Milton, and Rose D. Friedman. Free to Choose: A Personal Statement. New York: Harcourt Brace Jovanovich, 1980.

The Himmelfarb book mentioned in my initial comments is:

Himmelfarb, Gertrude. On Liberty and Liberalism: The Case of John Stuart Mill. New York: Knopf, 1974.

The Hayek book mentioned in my initial comments is:

Hayek, F.A. John Stuart Mill and Harriet Taylor: Their Friendship and Subsequent Marriage. London: Routledge & Kegan Paul, 1951. [Some citations to the book have the word “Correspondence” substituted for “Friendship.”]


Entrepreneurs Make Leaps: A Critique of the Theory of the Adjacent Possible (TAP)

In my Openness book, I argue that the innovative entrepreneur is a key agent of the innovative dynamism that brings us the new goods and process innovations through which we flourish. The Theory of the Adjacent Possible (TAP), devised by Stuart Kauffman, Roger Koppl, and collaborators, and popularized by Steven Johnson, aims to “deflate” the innovative entrepreneur, arguing that technological progress is the inevitable result of a stochastic process. I have written an extended critique of TAP and have posted the latest version to the SSRN working paper archive. In some ways the working paper, especially the last half, can be viewed as a further elaboration and illustration of some of the points made in Openness.

The citation for, and link to, my working paper is:

Diamond, Arthur M. “Entrepreneurs Make Leaps: A Critique of the Theory of the Adjacent Possible.” (Written Jan. 26, 2026; Posted Feb. 18, 2026). Available at SSRN: https://ssrn.com/abstract=6166326

My book mentioned in my initial comments is:

Diamond, Arthur M., Jr. Openness to Creative Destruction: Sustaining Innovative Dynamism. New York: Oxford University Press, 2019.


The Dutch Give Citizen Scientists Property Rights to the Fossils They Find

Holland has significant claim, along with England, to being a strong and early bastion of freedom. So it is fitting that today Holland’s institutions provide a sanctuary for the practice of citizen science. The article below notes that Dutch law gives citizen scientists property rights in the fossils they find. This gives them an incentive to seek fossils AND it gives them an incentive to share information about what they find. (If they did not have such property rights, they would have an incentive to hide the fossils so they would not be seized.)

Dick Mol is an entrepreneur, using some of his fossils as part of the Historyland theme park. His doing good through creative funding reminds me of Martin Couney, who financed baby incubators for poor families by displaying the incubators at theme-park exhibits.

If academic scientists, instead of hiding behind their credentials, sought clever ways to recruit the eyes of curious citizen scientists, we could learn much more and learn it much more quickly. This would be easier if the values and methods of science were more empirical, more true to Galileo. Let everyone have a look in the telescope.

(p. C1) After scouring a beach in the harbor all morning in Rotterdam, the Netherlands, a retired Dutch engineer, Cock van den Berg, had finally found something interesting: a polished black stone about the size of an acorn with two punctures, like finger holes in a bowling ball.

He held it out in the palm of his hand to show Dick Mol, an expert on ice age fossils.

“What do you think?” he asked. “Is it a mammoth tooth?”

Mol examined it for about 30 seconds and decided it was not. It was a molar from a prehistoric rhinoceros, he said.

. . .

(p. C6) Under Dutch law, beach combers who find fossils on Maasvlakte 2 are not required to report or submit them. They can take their finds home if they like, but they are encouraged to promote scientific research by voluntarily registering them with the Naturalis Biodiversity Center, a national natural history museum and research center in the city of Leiden.

Using a website built by the port of Rotterdam authority and managed by Naturalis, amateur paleontologists can submit a photo and the GPS location of the find so that experts can help them identify it.

“In other countries, like Germany, fossils or anything related to paleontology are protected by the state, but that’s not the case in the Netherlands,” explained Isaak Eijkelboom, a Ph.D. student in paleontology at Naturalis who studies fossils found at Maasvlakte 2 and other locations.

But since trophy hunters don’t have to worry about losing their finds, he thinks they’re more likely to share their discoveries with the museum and collaborate with scientists.

“It allows us to practice citizen science,” Eijkelboom said.

For more than a decade, Naturalis has been using volunteers to gather information for its fossil database, which now lists more than 23,000 finds, he said.

“This is only possible because it’s so open, and so free,” Eijkelboom said. “In other places, when people find fossils, they end up in their closets and the knowledge is hidden away.”

Van den Berg, who discovered the rhinoceros molar, said he was excited to share it with Naturalis. A few years ago, he found a jaw part from a macaque monkey at Maasvlakte 2 and donated it to the Natural History Museum in Rotterdam. The rare specimen, which scientists dated to 125,000 years ago, was described in three scientific papers, Mol said.

. . .

Mol joked that the “biggest mistake of van den Berg’s life” was donating the monkey jaw to the museum and not to Mol’s “Mammoth Lab” at Historyland, a museum and theme park that he helped establish in the town of Hellevoetsluis, about a 15-minute drive from the beach.

There, Mol, a retired airport customs official, has his own impressive collection of 55,000 ice age fossils. An autodidact who never attended university, Mol is nonetheless widely recognized as an international expert; in 2000, he was knighted in the Netherlands for his significant contributions to paleontology, and he was featured in Discovery Channel documentaries such as “Raising the Mammoth” and “Land of the Mammoth.”

. . .

In spite of a steady stream of beachcombers, Eijkelboom said there will still be plenty more fossils to find for a long time to come.

“In general, in paleontology, a lot of people say we’ve only discovered the tip of the iceberg,” he said. Rising sea levels will require continued fortifications of the Dutch coastline, using North Sea sand deposits for quite some time to come, he added.

Although it is unfortunate that such action is needed to prevent humans from going extinct like the mammoth, he said, “at least there will be more and more beaches where we can hunt for ice age fossils.”

For the full story see:

Nina Siegal and Ilvy Njiokiktjien. “On the Hunt for Mammoths.” The New York Times (Weds., November 19, 2025): C1 & C6.

(Note: ellipses added.)

(Note: the online version of the story has the date Nov. 17, 2025, and has the title “A Day at the Beach Hunting Mammoths.”)

FDA Worked Better and Much Cheaper Before 1962 Expansion

Before 1962, the FDA regulated drugs for safety but not for efficacy. If the FDA returned to regulating only for safety, Phase 3 randomized clinical trials would no longer be mandated. Phase 3 trials usually cost more than the Phase 1 and Phase 2 trials combined, and usually take much longer. If the FDA no longer mandated Phase 3 trials, we would have more drug innovation, more quickly, at much lower cost. And we would have more freedom.

(p. A13) From 1938 through 1962, the Food and Drug Administration required proof of safety before drug approval but not proof of efficacy. The approach was abandoned due to a significant misunderstanding of the thalidomide tragedy—when thousands of babies outside the U.S. were born with severe birth defects.

The issue with thalidomide was a failure of safety, not efficacy. But under pressure to react, Congress required, through the Kefauver-Harris Amendments of 1962, proof of efficacy before granting marketing approval. The new rule addressed a problem that didn’t exist and, in doing so, imposed a substantial new cost burden.

Before 1962, developing a drug took about two years. Now it takes 12 to 14 years. Since 1975 real development costs have risen about 7.5% a year, roughly doubling every decade. Today, we estimate that bringing one successful drug to market costs about $9 billion on average.

For the full commentary, see:

Charles L. Hooper and Solomon S. Steiner. “Deregulation Can Make Medications Cheaper.” The Wall Street Journal (Sat., Oct. 18, 2025): A13.

(Note: the online version of the commentary has the date Oct. 17, 2025, and has the same title as the print version.)

Adjuvants Did Not Arise from Theory, but from Open-Eyed Trial-and-Error Experimentation

Sometimes you see journalists, commentators, or politicians saying that ordinary people should not run trial-and-error experiments with health treatments, but should instead listen to the advice of certified scientists. “Listen to the science,” we hear. But many of the most common practices in medicine originated in ordinary trial-and-error experiments of the sort that can be conducted with little if any certified expertise.

Consider adjuvants. An adjuvant "helps" the primary therapy: aluminum can be an adjuvant to a vaccine, or, with cancer, radiation can be an adjuvant to surgery. As the passages quoted below show, the first vaccine adjuvants were not discovered through the theorizing of a certified genius. A motivated, alert, and practical veterinarian wanted to protect horses from disease. He noticed that a horse vaccine worked better when, by chance, the horse also had an infection at the vaccination site. He speculated that the inflammation from the infection aroused the immune system. So why not try deliberately causing inflammation? He tried different substances, landing on tapioca as the best of what he tried. Others later found aluminum to be more reliable.

Maybe what often matters most for medical progress is a sense of open-eyed urgency and a persistent willingness to engage in trial-and-error experimentation. The uncertified can have those traits. When they do, we should not ridicule, ban, or cancel them.

(p. A14) The origins of added aluminum in vaccines can be traced back nearly a century. In a stable on the outskirts of Paris, a young veterinarian had made a peculiar discovery: mixing tapioca into his horses’ diphtheria vaccines made them more effective.

The doctor, Gaston Ramon, had noticed that the horses who developed a minor infection at the injection site had much more robust immunity against diphtheria. He theorized that adding something to his shots that caused inflammation — ingredients he later named adjuvants, derived from the Latin root “to help” — helped induce a stronger immune response.

After testing several candidates — including bread crumbs, petroleum jelly and rubber latex — he found success with a tapioca-laced injection, which produced slight swelling and far more antibodies.

Tapioca never caught on as an adjuvant. But in 1932, a few years after Dr. Ramon’s studies were published, the United States began including aluminum salts in diphtheria immunizations, as they were found to invoke a similar but more reliable effect.

Today, aluminum adjuvants are found in 27 routine vaccines, and nearly half of those recommended for children under 5.

This extra boost of immunity is not needed in all types of vaccines. Shots that contain a weakened form of a virus, like the measles, mumps and rubella shot, or created with mRNA technology, like the Pfizer and Moderna Covid-19 vaccines, generate strong enough immune responses on their own.

But in vaccines that contain only small fragments of the pathogen, which would garner little attention from the immune system, adjuvants help stimulate a stronger response, allowing vaccines to be given in fewer doses.

Scientists believe that aluminum salts work in two ways. First, aluminum binds to the core component of the vaccine and causes it to diffuse into the bloodstream more slowly, giving immune cells more time to build a response.

It’s also thought that aluminum operates more directly, enhancing the activity of certain immune cells, though this mechanism is not fully understood.

For the full story see:

Teddy Rosenbluth. “Aluminum in Vaccines Is a Good Thing, Scientists Say.” The New York Times (Sat., January 25, 2025): A14.

(Note: the online version of the story has the date Jan. 24, 2025, and has the title “Yes, Some Vaccines Contain Aluminum. That’s a Good Thing.”)

“Nothing Is Incontrovertible in Science”

Somewhere we should start a Hall of Fame for those who had the courage to take the ill will from the enforcers of the “new religion” of global warming. Among its honorees would be Michael Crichton, Freeman Dyson, and (see below) Ivar Giaever. Science is not a body of doctrine; science is a process of inquiry.

(p. B12) Ivar Giaever might not have won the Nobel Prize in Physics if a job recruiter at General Electric had known the difference between the educational grading systems of the United States and Norway.

It was 1956, and he was applying for a position at the General Electric Research Laboratory in Schenectady, N.Y. The interviewer looked at his grades, from the Norwegian Institute of Technology in Trondheim, where Dr. Giaever had studied mechanical engineering, and was impressed: The young applicant had scored 4.0 marks in math and physics. The recruiter congratulated him.

But what the recruiter didn’t know was that in Norway, the best grade was a 1.0, not a 4.0, the top grade in American schools. In fact, a 4.0 in Norway was barely passing — something like a D on American report cards. In reality, his academic record in Norway had been anything but impressive.

He did not want to be dishonest, Dr. Giaever (pronounced JAY-ver) would say in recounting the episode with some amusement over the years, but he also did not correct the interviewer. He got the job.

He proceeded to spend the next 32 years at the laboratory, along the way developing an experiment that provided proof of a central idea in quantum physics — that subatomic particles can behave like powerful waves.

. . .

Though Dr. Giaever later earned a doctorate in theoretical physics, in 1964, from Rensselaer Polytechnic Institute in Troy, N.Y., he had not yet completed that degree when he came up with the experiment that would earn him his share of the Nobel. Indeed, as he admitted in his Nobel lecture, he did not fully understand the ideas behind the experiment when he first started working on it. He was, after all, a mechanical engineer, steeped in how things work in classical physics, which deals with real-world objects. Quantum physics, on the other hand, predicts what happens in the weird subatomic world.

. . .

Dr. Giaever prided himself on his common-sense approach to science, but not all his ideas were welcomed by his peers. He became a prominent denier of climate change, referring to the science around it as a “new religion.” (“I would say that, basically, global warming is a nonproblem,” he said in a 2015 speech.) He based his opposition, in part, on his belief that it is impossible to track changes in the Earth’s temperature and that, even if it could be done, the temperature changes would be insignificant.

When the American Physical Society announced in 2011 that the evidence for climate change and global warming was incontrovertible, he resigned from the society in disgust, saying: “‘Incontrovertible’ is not a scientific word. Nothing is incontrovertible in science.”

For the full obituary, see:

Dylan Loeb McClain. “Ivar Giaever, 96, ‘D’ Student Who Won Nobel Prize.” The New York Times (Thurs., July 10, 2025): B12.

(Note: ellipses added.)

(Note: the online version of the obituary was updated July 9, 2025, and has the title “Ivar Giaever, Nobel Winner in Quantum Physics, Dies at 96.”)

Latest “So-Called Reasoning Systems” Hallucinate MORE Than Earlier A.I. Systems

Since more sophisticated “reasoning” A.I. systems are increasingly inaccurate on the facts, it is unlikely that such systems will threaten any job where job performance depends on getting the facts right. Wouldn’t that include most jobs? The article quoted below suggests it would most clearly include jobs working with “court documents, medical information or sensitive business data.”

(p. B1) The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.

Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what (p. B6) is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.

. . .

The A.I. bots tied to search engines like Google and Bing sometimes generate search results that are laughably wrong. If you ask them for a good marathon on the West Coast, they might suggest a race in Philadelphia. If they tell you the number of households in Illinois, they might cite a source that does not include that information.

Those hallucinations may not be a big problem for many people, but it is a serious issue for anyone using the technology with court documents, medical information or sensitive business data.

“You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.”

. . .

For more than two years, companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests.

The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent.

When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time.

. . .

For years, companies like OpenAI relied on a simple concept: The more internet data they fed into their A.I. systems, the better those systems would perform. But they used up just about all the English text on the internet, which meant they needed a new way of improving their chatbots.

So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas.

For the full story see:

Cade Metz and Karen Weise. “A.I. Hallucinations Are Getting Worse.” The New York Times (Fri., May 9, 2025): B1 & B6.

(Note: ellipses added.)

(Note: the online version of the story was updated May 6, 2025, and has the title “A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse.”)

A.I. Only “Knows” What Has Been Published or Posted

A.I. “learns” by scouring language that has been published or posted. If outdated or never-true “facts” are posted on the web, A.I. may regurgitate them. It takes human eyes to check whether there really is a picnic table in a park.

(p. B1) Last week, I asked Google to help me plan my daughter’s birthday party by finding a park in Oakland, Calif., with picnic tables. The site generated a list of parks nearby, so I went to scout two of them out — only to find there were, in fact, no tables.

“I was just there,” I typed to Google. “I didn’t see wooden tables.”

Google acknowledged the mistake and produced another list, which again included one of the parks with no tables.

I repeated this experiment by asking Google to find an affordable carwash nearby. Google listed a service for $25, but when I arrived, a carwash cost $65.

I also asked Google to find a grocery store where I could buy an exotic pepper paste. Its list included a nearby Whole Foods, which didn’t carry the item.

For the full commentary see:

Brian X. Chen. “Underneath a New Way to Search, A Web of Wins and Imperfections.” The New York Times (Tues., June 3, 2025): B1 & B4.

(Note: the online version of the commentary has the date May 29, 2024, and has the title “Google Introduced a New Way to Use Search. Proceed With Caution.”)

A.I. Hastens Search for Antibiotic Peptides in Extinct Species

In an earlier entry I commented on the use of A.I. to seek antibodies by George Church’s startup Lila. Now it appears that César de la Fuente is employing a similar approach. In both cases, A.I. is being used to do repetitive, well-structured tasks more efficiently. This is not the highest creative level of human intelligence, but it can free time for humans to exercise that highest level.

(p. A3) Buried in the DNA of the long extinct woolly mammoth is a compound that scientists hope will one day yield a lifesaving antibiotic.

In experiments, mammuthusin, as the compound is called, has eradicated superbugs—bacteria that are resistant to today’s antibiotics and cause infections that are hard to treat—says César de la Fuente, the bioengineer who helped discover the molecule.

. . .

To help combat superbugs, doctors say we need new antibiotics with novel chemical structures or mechanisms of action. But only a handful of such drugs has entered the market over the past several decades.

De la Fuente is banking on artificial intelligence to help end this dry spell. He and his collaborators have built deep-learning algorithms to comb through enormous genetic databases to find peptides, or protein fragments, that have antibacterial properties. They have used this method to analyze animal venoms, the human microbiome and archaea, an underexplored group of microorganisms. They have also mined the genetic codes from fossils of long-extinct animals and humans, including Neanderthals and Denisovans. “This deep-learning model has opened a window into the past,” de la Fuente says.

. . .

When the algorithms identify a new peptide with antibiotic potential, de la Fuente and his team use robots to manufacture the compound in their lab and then test it in mice infected with bacteria. So far, a few hundred peptides made in de la Fuente’s lab have safely and effectively cured sick mice.

One of them was mammuthusin, identified in the genetic code of Mammuthus primigenius, a species of mammoth that last roamed the Earth about 4,000 years ago. The researchers discovered the peptide after mining a National Center for Biotechnology Information database of DNA sequencing data obtained from the fossils of extinct animals. In experiments, mammuthusin was as potent as polymyxin B, an antibiotic often used as a last resort for serious infections, according to a paper published in the journal Nature in June [2024]. The mammoth peptide effectively eradicated a type of bacterium that the World Health Organization has designated a critical pathogen because of its resistance to many common antibiotics.

For the full story, see:

Dominique Mosbergen. “Search for New Antibiotics Turns Back Time.” The Wall Street Journal (Weds., May 28, 2025): A3.

(Note: ellipses, and bracketed year, added.)

(Note: the online version of the story has the date May 24, 2025, and has the title “A Search for New Antibiotics in Ancient DNA.” In the original of both the online and print versions, Mammuthus primigenius appeared in italics.)

The academic article mentioned above, published in Nature Biomedical Engineering, is:

Wan, Fangping, Marcelo D. T. Torres, Jacqueline Peng, and Cesar de la Fuente-Nunez. “Deep-Learning-Enabled Antibiotic Discovery through Molecular De-Extinction.” Nature Biomedical Engineering 8, no. 7 (July 2024): 854-71.

My Email Response to George Church on A.I. and Longevity

On May 17 I ran an entry commenting on George Church’s over-optimism about the use of A.I. to replicate the scientific method, and expressed wistful disappointment that Church’s longevity project had not advanced as quickly as 60 Minutes implied it would in 2019.

On May 20, Church sent me a cordial email disputing some of what I wrote in my entry. I responded to him on May 22, and asked him if he would mind if I ran his email and my response on my blog. He never responded to that request, so I will not reproduce his email here. But I see no harm in my including below the links he sent me. And then I will follow with my email response to him.

Here are the links that Church thought I should ponder:

2024 pmc.ncbi.nlm.nih.gov/articles/PMC10909732 (see Fig 1b)
2022 rejuvenatebio.com/animal-health-pipeline
2022 rejuvenatebio.com/pipeline
2023 biorxiv.org/content/10.1101/2023.11.13.566787v1.full

Here is my email response to Church:

Dear Prof. Church,

Thank you for taking the time to read and respond to my blog post. I appreciate the links you sent. The first link gives us the good news of progress toward increasing the lifespan of mice and in reducing their frailty, which could be interpreted as one part of reversing their aging. The fourth link also gives good news of the proof-of-concept of a new factor at the cell level that may be able to rejuvenate cells without the cancer of the Yamanaka factors.

But on 60 Minutes in 2019 you said age reversal was already “available to mice.” And you said the “veterinary product might be a couple years away and then that takes another 10 years to get through the human clinical trials.” That is not exactly a promise, but it does sound like a hopeful prediction. And I will admit that the timing matters to me. If your 60 Minutes prediction was right, there is a good chance I might live to see it; if it takes twice that long, I almost certainly will not.

In re-reading my post, I see a couple of revisions I would make. I would add that I wish you well in what you are trying to do, and strongly and sincerely hope that you succeed (whether through A.I. or by other means). And I would add that I believe Elon Musk said that being overly optimistic is one way that great innovators push themselves toward great goals.

I appreciate your “fact checking” offer. I have a comment apropos of that. You say that “The Lohr article doesn’t say ‘feeding’ or ‘literature.’” Here is the relevant exact quote from the Lohr article:

Lila has taken a science-focused approach to training its generative A.I., feeding it research papers, documented experiments and data from its fast-growing life science and materials science lab. That, the Lila team believes, will give the technology both depth in science and wide-ranging abilities, mirroring the way chatbots can write poetry and computer code.

So the Lohr article does say “feeding.” It doesn’t say “literature,” but it does say “research papers,” which I take to be the same thing. I appreciate that Lila also is collecting new data. But is it some generative intelligence in Lila that is identifying the new data to seek, or is it George Church and his team?

I agree that A.I. can help crank through possibilities that have already been defined. I am dubious that A.I. can come up with the possibilities as well as George Church and his team can. It may seem harmless that A.I. is being over-hyped. But as an economist it is my job to notice that funding is scarce, so funding spent on A.I. is funding not spent on other inputs to science.

I fear that I may come across as a privileged spectator complaining about the bloodied combatant in the arena. But a big part of my research is aimed at reducing the regulations that burden medical entrepreneurs. For instance, I am working on a paper supporting Milton Friedman’s suggestion that the F.D.A. should just regulate for safety and stop regulating for efficacy. Without Phase 3, more can be tried, more quickly and more cheaply.

If you are willing, I would like to paste your response (or an edited version if you prefer) at the end of my original post. Let me know if that is OK.

Thanks!

Art