To End Drug Shortages, Make Healthcare a Free Market

Drug shortages are sometimes blamed on the free market. A bum rap. In a free market, when supply declines or demand increases, prices rise, and the higher price incentivizes a greater quantity supplied, eventually ending a short-run period where quantity demanded at the going price exceeds quantity supplied at the going price (in other words, a shortage). But healthcare in America is far from a free market. Every aspect is highly regulated. Prices are negotiated, often by middlemen called Pharmacy Benefit Managers (PBMs); entry is not free; and the demanders (patients) often do not know (or care) about the prices, since the prices are paid by a third party (insurers, employers, or the government). Perverse incentives abound.
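The adjustment mechanism described above can be made concrete with a toy linear supply-and-demand model. All parameter values below are illustrative assumptions, chosen only to show how a price held below the market-clearing level produces a shortage, while the market-clearing price eliminates it:

```python
def shortage(price, a=100, b=2, c=10, d=3):
    """Shortage at a given price, for linear demand Qd = a - b*p
    and linear supply Qs = c + d*p (all parameters illustrative)."""
    qd = a - b * price  # quantity demanded falls as price rises
    qs = c + d * price  # quantity supplied rises as price rises
    return max(qd - qs, 0)  # a shortage is excess demand, never negative

p_star = (100 - 10) / (2 + 3)  # market-clearing price: Qd = Qs at p = 18
print(shortage(10))      # price held below p*: a shortage of 40 units
print(shortage(p_star))  # at the market-clearing price: no shortage
```

The point of the toy model is the comparative static: if price is free to rise to p*, excess demand goes to zero; if price is pinned below p* (by regulation or negotiated third-party rates), the shortage persists.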

(p. A26) There’s been a bombardment of bad news for drug supplies. The American Society of Health-System Pharmacists found this summer that nearly all of the members it surveyed were experiencing drug shortages, which generally affect half a million Americans. Cancer patients have scrambled as supplies of chemotherapy drugs dwindle. Other shortages include antibiotics for treatable diseases, such as the only drug recommended for use during pregnancy to prevent congenital syphilis (a disease that is 11 times more common today than a decade ago), and A.D.H.D. medications, without which people struggle to function in their day-to-day lives. The toll on Americans is heavy.

Over half of the shortages documented this summer by health consulting firm IQVIA had persisted for more than two years. But even though drug shortages affect millions of Americans, policymakers and industry leaders have provided little to no long-term relief for people in need.

Shortages have occurred regularly since at least the early 2000s, when national tracking began. Hundreds of drugs, in every major therapeutic category, have been unavailable for some period. The average drug shortage lasts about 1.5 years. Even when substitute medications are available, they may be suboptimal (for example, deaths by septic shock rose by 10 percent during a 2011 shortage of the first-line medication, norepinephrine) or have spillover effects (such as possibly increasing the risk of antimicrobial resistance). In addition to harming patients, shortages have cost health systems billions of dollars in increased labor and substitute medications.

. . .

Large hospital chains can readily monitor shortage risks and preemptively place large orders. This panic buying can wipe out inventory, and leave hospitals with fewer resources strapped since they may get notice of a drug shortage only when it’s too late. There is little penalty for over-ordering because unused drugs can often be returned.

. . .

Addressing the underlying fragility of our essential drug supply will take structural change and investments. While all industries must grapple with how to build resilient supply chains, the pharmaceutical industry is unique. The people who are most affected by supply chain vulnerabilities — patients — are also those with least say in the choice to buy from reliable manufacturers. When people buy cars, they may pay more based on company reputation, ratings by outside testers and reviews from other customers. In contrast, patients bear the harm of drug shortages, yet they cannot choose the manufacturers of their essential drugs nor evaluate their reliability.

For the full commentary see:

Emily Tucker. “We’re Stuck in a Constant Cycle of Drug Shortages.” The New York Times (Thursday, December 7, 2023 [sic]): A26.

(Note: ellipses added.)

(Note: the online version of the commentary has the date Dec. 6, 2023 [sic], and has the title “America Is Having Yet Another Drug Shortage. Here’s Why It Keeps Happening.”)

Regulations Discourage Search for Magic Bullet Cures

The so-called “Inflation Reduction Act” mandates that several of the biggest blockbuster drugs have prices negotiated between Medicare and pharmaceutical firms. As the commentary quoted below suggests, this creates an incentive for pharmaceutical firms to develop many middling drugs rather than a couple of blockbuster drugs. Paul Ehrlich’s “magic bullet” may be impossible, but we will never know if no one is trying to discover it.

(p. B10) A true home run in the drug industry is when a company develops a mega-blockbuster that transforms its finances for years.

But with Medicare trying to bring costs down by targeting the industry’s most expensive drugs, a portfolio of medium-size moneymakers that can keep your name off the U.S. government’s naughty list can be a wise strategy.

That is at least one reason why big pharma is investing heavily in biotech companies developing antibody-drug conjugates. Known as ADCs, these treatments work like a guided missile by pairing antibodies with toxic agents to fight cancer. In short, they enable a more targeted form of chemotherapy that goes straight into the cancer cells while minimizing harm to healthy cells.

. . .

One reason most ADCs aren’t likely to become mega-blockbusters like Keytruda, a cancer immunotherapy that has earned 35 approvals across 16 types of cancer, is that they aren’t one-size-fits-all drugs. Instead, they are designed to target a specific protein that is expressed on the surface of a cancer cell. That means that each drug is made with an antibody targeting a subset of cancer. There are more than 100 ADCs being tested in humans by pharma and biotech companies.

For the full commentary see:

David Wainer. “Heard on the Street; Drug Industry’s Secret Weapon: ‘Guided Missiles’.” The Wall Street Journal (Friday, Oct. 27, 2023 [sic]): B10.

(Note: ellipsis added.)

(Note: the online version of the commentary has the date October 26, 2023 [sic], and has the title “Heard on the Street; ‘Guided Missile Drugs’ Could Be Big Pharma’s Secret Weapon.”)

Effective Therapies Will Remain Banned When F.D.A. Mandates Costly Evidence of Long-Term Clinical Benefits, Rather than Frugal Evidence of Short-Term Biomarkers

How many therapies that would have cured diseases, or extended lives, or limited side effects or pain, are not available because their champions cannot afford the often astronomical costs of Phase 1, Phase 2, and Phase 3 clinical trials? Nobel-Prize-winning economist Milton Friedman favored eliminating the F.D.A., but as a more politically palatable step-in-the-right-direction, favored limiting F.D.A. mandates to approving safety through Phase 1 and Phase 2 clinical trials (and no longer mandating proving efficacy through Phase 3 clinical trials, which usually cost much more than Phase 1 and Phase 2 clinical trials combined). Perhaps an even more politically palatable, but tinier, step-in-the-right-direction is proposed in the commentary quoted below. This modest step would allow in Phase 3 clinical trials the use of less costly biomarker “surrogate end-points” in place of far more costly clinical end-points, such as years of added life. In the case discussed in the article quoted below, the surrogate end-point was the level of arginine in the patient’s blood.

(p. A17) Discovering treatments for rare diseases is a daunting task. Recruiting even a few dozen people for a clinical trial requires doctors and drug companies to identify a large share of the patient population. And since the market for such therapies is necessarily small, it’s nearly impossible to attract investment. So when news emerged about Aeglea BioTherapeutics’ ARG1-D therapy pegzilarginase, we could hardly believe it. Pegzilarginase is an enzyme engineered to lower the body’s levels of arginine. The randomized placebo-controlled study of pegzilarginase included 32 patients with ARG1-D.

The results speak for themselves. The amount of arginine present in blood plasma declined by 80% for patients on pegzilarginase. After only six months, 90.5% of patients who received pegzilarginase had normal arginine levels, and this was sustained over time. The data also suggested progressive improvements in motor function compared with a placebo. And most patients tolerated the therapy quite well.

These numbers were jaw-dropping. Which is why the FDA’s decision is incomprehensible.

The FDA even refused to look at Aeglea’s data. Instead, the agency demanded that the firm compile additional data suggesting pegzilarginase will produce a clinical benefit in addition to eliminating excess arginine. But for ARG1-D and other rare diseases, measuring clinical outcomes can take years, while measuring biomarkers likely to produce clinical benefits can take weeks.

. . .

Evaluating clinical benefits could force sick patients to remain in placebo groups for months. That the FDA would put its rigid rules before the convincing data we already have is unethical. If the FDA doesn’t correct its error soon, patients with ARG1-D will lose their best chance at full, productive lives.

For the full commentary see:

Stephen Cederbaum and Emil Kakkis. “The FDA’s See-No-Data Approach.” The Wall Street Journal (Wednesday, Sept. 27, 2023 [sic]): A17.

(Note: ellipsis added.)

(Note: the online version of the commentary has the date September 26, 2023 [sic], and has the same title as the print version.)

Medicare Rewards Health Insurers for Overestimating Future Prescription-Drug Costs

I believe that the perverse incentives that Medicare creates for insurers, as described in the 2019 article quoted below, still exist. But I need to confirm my belief.

(p. A1) Each June, health insurers send the government detailed cost forecasts for providing prescription-drug benefits to more than 40 million people on Medicare.

No one expects the estimates to be spot on. After all, it is a tall order to predict the exact drug spending for the following year of the thousands of members in each plan.

However, year after year, most of those estimates have turned out to be wrong in the particular way that, thanks to Medicare’s arcane payment rules, results in more revenue for the health insurers, a Wall Street Journal investigation has found. As a consequence, the insurers kept $9.1 billion more in taxpayer funds than they would have had their estimates been accurate from 2006 to 2015, according to Medicare data obtained by the Journal.

Those payments have largely been hidden from view since Medicare’s prescription-drug program was launched more than a decade ago, and are an example of how the secrecy of the $3.5 trillion U.S. health-care system promotes and obscures higher spending.

Medicare’s prescription-drug benefit, called Part D, was designed to help hold down drug costs by having insurers manage the coverage efficiently. Instead, Part D spending has accelerated (p. A12) faster than all other components of Medicare in recent years, rising 49% from $62.9 billion in 2010 to $93.8 billion in 2017. Medicare experts say the program’s design is contributing to that increase. Total spending for Part D from 2006 to 2015 was about $652 billion.

The cornerstone of Part D is a system in which private insurers such as CVS Health Corp., UnitedHealth Group Inc. and Humana Inc. submit “bids” estimating how much it will cost them to provide the benefit. The bids include their own profits and administrative costs for each plan. Then Medicare uses the estimates to make monthly payments to the plans.

After the year ends, Medicare compares the plans’ bids to the actual spending. If the insurer overestimated its costs, it pockets a chunk of the extra money it received from Medicare—sometimes all of it—and this can often translate into more profit for the insurer, in addition to the profit built into the approved bid. If the extra money is greater than 5% of the insurer’s original bid, it has to pay some of it back to Medicare.

For instance, in 2015, insurers overestimated costs by about $2.2 billion, and kept about $1.06 billion of it after paying back $1.1 billion to the government, according to the data reviewed by the Journal.

. . .

If those big insurers were aiming to submit accurate bids, the probability that they would have overestimated costs so frequently and by such a large amount is less than one in one million, according to a statistical analysis done for the Journal by researchers at Memorial Sloan Kettering Cancer Center, who study pharmaceutical pricing and reimbursement.

Insurance companies use heaps of data to predict future spending. If truly unpredictable events were blowing up their statistical models, the proportion of overestimates to underestimates would be closer to 50/50, says Peter Bach, director of Sloan Kettering’s Center for Health Policy and Outcomes, which conducted the statistical analysis.

“Even expert dart throwers don’t hit the bull’s-eye every time. But their misses are spread around in every direction,” says Dr. Bach. “If they start missing in one particular direction over and over they are doing it on purpose.”

For the full story see:

Joseph Walker and Christopher Weaver. “Medicare Overpaid Insurers Billions.” The Wall Street Journal (Saturday, Jan. 5, 2019 [sic]): A1 & A12.

(Note: ellipsis added.)

(Note: the online version of the story has the date Jan. 4, 2019 [sic], and has the title “The $9 Billion Upcharge: How Insurers Kept Extra Cash from Medicare.”)
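The “less than one in one million” claim in the excerpt above is, at bottom, a binomial tail calculation: if unbiased bids were equally likely to over- or under-estimate, how often would nearly all of them come out high? A minimal sketch of that reasoning, using hypothetical counts (the Journal’s actual plan-year data are not reproduced here):

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k
    overestimates out of n bids, if each bid were a fair coin flip
    between overestimating and underestimating."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 80 overestimates out of 100 plan-year bids.
# Under the coin-flip null, this is astronomically unlikely.
print(binom_tail(100, 80))  # far below one in a million
```

This is the logic behind Dr. Bach’s dart-throwing analogy: unbiased misses scatter in both directions, so a long run of one-sided misses is statistical evidence of intent.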

Time Constraints for Tenure, Promotion, and Funding Decisions Lead Academic Biologists to Over-Study Already-Studied Genes

George Stigler argued that when most economists were self-funded business practitioners, economics was more applied and empirical, while after most economists became academics funded by endowments or the government, economics became less applied and more formal. [In a quick search I failed to identify the article where Stigler says this–sorry.] A similar point was made about science more broadly by Terence Kealey in his thought-provoking The Economic Laws of Scientific Research. The article quoted below argues persuasively that research on human genes is aligned with the career-survival goals of academics, rather than with either the faster advance of science or the quicker cure of diseases like cancer. The alignment could be improved if more research funding came from a variety of private sources.

(p. D3) In a study published Tuesday [Sept. 18, 2018] in PLOS Biology, researchers at Northwestern University reported that of our 20,000 protein-coding genes, about 5,400 have never been the subject of a single dedicated paper.

Most of our other genes have been almost as badly neglected, the subjects of minor investigation at best. A tiny fraction — 2,000 of them — have hogged most of the attention, the focus of 90 percent of the scientific studies published in recent years.

A number of factors are largely responsible for this wild imbalance, and they say a lot about how scientists approach science.

. . .

It was possible, . . ., that scientists were rationally focusing attention only on the genes that matter most. Perhaps they only studied the genes involved in cancer and other diseases.

That was not the case, it turned out. “There are lots of genes that are important for cancer, but only a small subset of them are being studied,” said Dr. Amaral.

. . .

A long history helps, . . . . The genes that are intensively studied now tend to be the ones that were discovered long ago.

Some 16 percent of all human genes were identified by 1991. Those genes were the subjects of about half of all genetic research published in 2015.

One reason is that the longer scientists study a gene, the easier it gets, noted Thomas Stoeger, a post-doctoral researcher at Northwestern and a co-author of the new report.

“People who study these genes have a head start over scientists who have to make tools to study other genes,” he said.

That head start may make all the difference in the scramble to publish research and land a job. Graduate students who investigated the least studied genes were much less likely to become principal investigators later in their careers, the new study found.

“All the rewards are set up for you to study what has been well-studied,” Dr. Amaral said.

“With the Human Genome Project, we thought everything was going to change,” he added. “And what our analysis shows is pretty much nothing changed.”

If these trends continue as they have for decades, the human genome will remain a terra incognita for a long time. At this rate, it would take a century or longer for scientists to publish at least one paper on every one of our 20,000 genes.

That slow pace of discovery may well stymie advances in medicine, Dr. Amaral said. “We keep looking at the same genes as targets for our drugs. We are ignoring the vast majority of the genome,” he said.

Scientists won’t change their ways without a major shift in how science gets done, he added. “I can’t believe the system can move in that direction by itself,” he said.

Dr. Stoeger argued that the scientific community should recognize that a researcher who studies the least known genes may need extra time to get results.

“People who do something new need some protection,” he said.

For the full commentary see:

Carl Zimmer. “Matter; The Problem With DNA Research.” The New York Times (Tuesday, September 25, 2018 [sic]): D3.

(Note: ellipses, and bracketed date, added.)

(Note: the online version of the commentary has the date Sept. 18, 2018 [sic], and has the title “Matter; Why Your DNA Is Still Uncharted Territory.” Where there are differences in wording between the versions, the passages quoted above follow the online version.)

The paper in PLOS Biology co-authored by Thomas Stoeger and mentioned above is:

Stoeger, Thomas, Martin Gerlach, Richard I. Morimoto, and Luís A. Nunes Amaral. “Large-Scale Investigation of the Reasons Why Potentially Important Genes Are Ignored.” PLOS Biology 16, no. 9 (2018): e2006643.

Kealey’s book, praised above, is:

Kealey, Terence. The Economic Laws of Scientific Research. New York: St. Martin’s Press, 1996.

Ozempic Profits Poured into Massive Supercomputer Meant to Power AI for Future Drug Development

I think AI is currently being oversold. But I am very ignorant and could be wrong, so I favor a diversity of privately-funded bets on what will work to bring us future breakthrough innovations.

(p. B2) Two of the world’s most important companies are now in a partnership born from the success of their most revolutionary products. The supercomputer was built with technology from Nvidia—and money from the Novo Nordisk Foundation. The charitable organization has become supremely wealthy as the largest shareholder in Novo Nordisk, which means this project was made possible by the breakthrough drugs that have sent the Danish company’s stock price soaring.

To put it another way, it’s the first AI supercomputer funded by Ozempic.

It was named Gefion after the goddess of Norse mythology who turned her sons into oxen so they could plow the land that would become Denmark’s largest island.

. . .

Whatever you call it, Gefion is a beast. It is bigger than a basketball court. It weighs more than 30 tons. It took six months to manufacture and install. It also required an investment of $100 million.

. . .

When it’s fully operational, the AI supercomputer will be available to entrepreneurs, academics and scientists inside companies like Novo Nordisk, which stands to benefit from its help with drug discovery, protein design and digital biology.

For the full commentary see:

Ben Cohen. “It’s a Giant New Supercomputer That Might Transform an Entire Country.” The Wall Street Journal (Saturday, Nov. 2, 2024): B2.

(Note: ellipses added.)

(Note: the online version of the commentary has the date November 1, 2024, and has the title “Science of Success; The Giant Supercomputer Built to Transform an Entire Country—and Paid For by Ozempic.”)

“Most Published Research Findings Are False”

(p. 10) How much of biomedical research is actually wrong? John Ioannidis, an epidemiologist and health-policy researcher at Stanford, was among the first to sound the alarm with a 2005 article in the journal PLOS Medicine. He showed that small sample sizes and bias in study design were chronic problems in the field and served to grossly overestimate positive results. His dramatic bottom line was that “most published research findings are false.”

The problem is especially acute in laboratory studies with animals, in which scientists often use just a few animals and fail to select them randomly. Such errors inevitably introduce bias. Large-scale human studies, of the sort used in drug testing, are less likely to be compromised in this way, but they have their own failings: It’s tempting for scientists (like everyone else) (p. C2) to see what they want to see in their findings, and data may be cherry-picked or massaged to arrive at a desired conclusion.

A paper published in February [2017] in the journal PLOS One by Estelle Dumas-Mallet and colleagues at the University of Bordeaux tracked 156 biomedical studies that had been the subject of stories in major English-language newspapers. Follow-up studies, they showed, overturned half of those initial positive results (though such disconfirmation rarely got follow-up news coverage). The studies dealt with a wide range of issues, including the biology of attention-deficit hyperactivity disorder, new breast-cancer susceptibility genes, a reported link between pesticide exposure and Parkinson’s disease, and the role of a virus in autism.

Reviews by pharmaceutical companies have delivered equally grim numbers. In 2011, scientists at Bayer published a paper in the journal Nature Reviews Drug Discovery showing that they could replicate only 25% of the findings of various studies. The following year, C. Glenn Begley, the head of cancer research at Amgen, reported in the journal Nature that he and his colleagues could reproduce only six of 53 seemingly promising studies, even after enlisting help from some of the original scientists.

With millions of dollars on the line, industry scientists overseeing clinical trials with human subjects have a stronger incentive to follow high standards. Such studies are often designed in cooperation with the U.S. Food and Drug Administration, which ultimately reviews the findings. Still, most clinical trials produce disappointing results, often because the lab studies on which they are based were themselves flawed.

For the full essay see:

Harris, Richard. “Dismal Science In the Search for Cures.” The Wall Street Journal (Saturday, April 8, 2017 [sic]): C1-C2.

(Note: bracketed year added.)

(Note: the online version of the essay was updated April 7, 2017 [sic], and has the title “The Breakdown in Biomedical Research.”)

The essay quoted above is adapted from Mr. Harris’s book:

Harris, Richard. Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. New York: Basic Books, 2017.

The 2005 paper by Ioannidis mentioned above is:

Ioannidis, John P. A. “Why Most Published Research Findings Are False.” PLoS Medicine 2, no. 8 (2005): 696-701.
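Ioannidis’s headline conclusion falls out of a short Bayes-style calculation: among findings that test “significant,” the share that are actually true depends on the prior probability that a tested hypothesis is true, the study’s power, and the false-positive rate. A minimal sketch with illustrative values (the specific numbers below are assumptions for exposition, not figures from the paper):

```python
def ppv(prior, power, alpha):
    """Positive predictive value: the share of statistically
    'significant' findings that are actually true."""
    true_pos = power * prior          # true hypotheses correctly detected
    false_pos = alpha * (1 - prior)   # false hypotheses passing by chance
    return true_pos / (true_pos + false_pos)

# Illustrative: few tested hypotheses true (5%), studies underpowered (20%),
# conventional 5% significance threshold.
print(round(ppv(prior=0.05, power=0.20, alpha=0.05), 3))  # well below 0.5
```

With a low prior and low power, most positive results are false positives, which is the arithmetic core of “most published research findings are false.” Raising the prior (better-chosen hypotheses) or the power (bigger samples) pushes the PPV back up.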

“I’m Sick of It, I’m Leaving” Are First Words of Children in Primitive Village Routinely Eating Grubs and Starch Tasting Like “Gummy Mucous”

(p. C9) As she tended soldiers during the Crimean War, a British nurse found herself appalled by the wretched, vermin-infested conditions at the army’s hospital in Istanbul. She began collecting figures showing the devastating effects of the filth and the dramatic benefits of the sanitary improvements she implemented. Her presentation on the need for cleaner care facilities, published in 1858, led to reforms that ultimately saved millions of lives and increased life expectancy in the U.K. Florence Nightingale, it turns out, was a pioneering data scientist.

Data, when used to reveal the value of hospital hygiene or the harm of tobacco smoke, can be a vital force for good, as Tim Harford reminds us in “The Data Detective.”

. . .

Imprecise and inconsistent definitions are one source of confusion.  . . .  . . . “infant mortality,” a key data point for public health, varies depending on the specific time in fetal development when the line is drawn between a miscarriage and a tragically premature birth.

. . .

To learn from data, it’s essential to present it well. For her analysis after the Crimean War, Florence Nightingale created one of the first infographics, using shrewdly designed diagrams to tell a memorable story. From the outset, she regarded visually compelling data displays as indispensable to making her arguments.

. . .

An authentically open mind can make a difference, Mr. Harford says, noting that the top forecasters tend to be not experts but earnest learners who constantly take in new data while challenging and refining their hypotheses. Data, Mr. Harford concludes, can illuminate and inform as well as distract and deceive. It’s often maddeningly hard to know the difference, but it would be unforgivable not to try.

For the full review see:

Wade Davis. “To Hear a Dying Tongue.” The Wall Street Journal (Saturday, Aug. 10, 2019 [sic]): C9.

(Note: ellipses added.)

(Note: the online version of the review has the date Aug. 9, 2019 [sic], and has the title “‘A Death in the Rainforest’ Review: To Hear a Dying Tongue.”)

The book under review is:

Kulick, Don. A Death in the Rainforest: How a Language and a Way of Life Came to an End in Papua New Guinea. Chapel Hill, NC: Algonquin Books, 2019.

Rationality-Defender Stigler Saw Voting as Irrational, but Did It Anyway

Nobel Prize winner George Stigler contributed to the Public Choice literature and was a staunch defender of rationality. One example is his paper with Gary Becker, “De Gustibus Non Est Disputandum.” One popular, much-discussed conclusion of some public choice theorists is that it is irrational to vote. The argument goes that the marginal effect of one vote is almost always minuscule, so the expected benefit to the voter is equally minuscule. On the other hand, the time and effort it takes to vote are always more than minuscule. So the expected costs of voting exceed the expected benefits. Ergo, it is irrational to vote. When I was a graduate student taking courses in philosophy and economics, and for a couple of years as a post-doctoral fellow, I frequently stopped by the office of the Journal of Political Economy, where Stigler was an editor. I believe it was there that I heard Stigler, definitely on an election day, say, “Here I go to do something irrational.”

Stigler is well-known for his humorous biting comments. These could be tough on others. But this story shows that they also could be directed at himself.

I do not know if anyone has fully solved the paradox of the irrationality of voting. I guess you would have to say something about how the effects of all good people ceasing to vote would be far from marginal and far from good.

I once mentioned to the distinguished Public Choice theorist Dwight Lee that a positive result of the personal benefits of voting being minuscule is that the voter is freed from voting their narrow personal self-interest, and can vote their conscience about what serves the general good. (Maybe something like what Rawls hoped for behind his “veil of ignorance” in A Theory of Justice.) I believe Dwight told me that he had already published a paper expressing this positive result, but I never took the time to look for that paper.
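The cost-benefit argument sketched above reduces to one line of arithmetic. A toy calculation with made-up numbers (the pivotal probability, the personal stake, and the time cost are all illustrative assumptions, not estimates):

```python
def expected_net_benefit(p_pivotal, benefit, cost):
    """Expected net payoff of voting: the chance your vote is decisive,
    times the value to you of your side winning, minus the cost of voting."""
    return p_pivotal * benefit - cost

# Illustrative: a one-in-ten-million chance of being pivotal, a $10,000
# personal stake in the outcome, and $20 worth of time spent voting.
print(expected_net_benefit(1e-7, 10_000, 20))  # negative: "irrational" to vote
```

The expected benefit (a tenth of a cent here) is swamped by even a modest time cost, which is the whole force of the paradox; any resolution has to bring in something beyond narrow personal payoff, such as conscience or civic duty.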

Milton Friedman Bubbled with Energy as He Grabbed His Sunday New York Times

During my first year in graduate school at the University of Chicago, I lived in a dorm for graduate students that had been built with money from John D. Rockefeller. It was next to a several-story apartment tower that I had heard was built by Milton Friedman, who owned and lived in the top apartment. On Sunday mornings, on more than one occasion, I remember Friedman bouncing down the hallway of International House and up to the mail counter, which always had a pile of The Sunday New York Times for sale. He would buy one, and bounce back down the hallway. Friedman was curious, energetic, optimistic, and engaged in the broad world of policy. A libertarian who wants to move the intellectual consensus benefits from reading The New York Times.