“Most Published Research Findings Are False”

(p. C1) How much of biomedical research is actually wrong? John Ioannidis, an epidemiologist and health-policy researcher at Stanford, was among the first to sound the alarm with a 2005 article in the journal PLOS Medicine. He showed that small sample sizes and bias in study design were chronic problems in the field, and that they led to grossly overestimated positive results. His dramatic bottom line was that “most published research findings are false.”

The problem is especially acute in laboratory studies with animals, in which scientists often use just a few animals and fail to select them randomly. Such errors inevitably introduce bias. Large-scale human studies, of the sort used in drug testing, are less likely to be compromised in this way, but they have their own failings: It’s tempting for scientists (like everyone else) (p. C2) to see what they want to see in their findings, and data may be cherry-picked or massaged to arrive at a desired conclusion.

A paper published in February [2017] in the journal PLOS ONE by Estelle Dumas-Mallet and colleagues at the University of Bordeaux tracked 156 biomedical studies that had been the subject of stories in major English-language newspapers. Follow-up studies, they showed, overturned half of those initial positive results (though such disconfirmation rarely got follow-up news coverage). The studies dealt with a wide range of issues, including the biology of attention-deficit hyperactivity disorder, new breast-cancer susceptibility genes, a reported link between pesticide exposure and Parkinson’s disease, and the role of a virus in autism.

Reviews by pharmaceutical companies have delivered equally grim numbers. In 2011, scientists at Bayer published a paper in the journal Nature Reviews Drug Discovery showing that they could replicate only 25% of the findings of various studies. The following year, C. Glenn Begley, the head of cancer research at Amgen, reported in the journal Nature that he and his colleagues could reproduce only six of 53 seemingly promising studies, even after enlisting help from some of the original scientists.

With millions of dollars on the line, industry scientists overseeing clinical trials with human subjects have a stronger incentive to follow high standards. Such studies are often designed in cooperation with the U.S. Food and Drug Administration, which ultimately reviews the findings. Still, most clinical trials produce disappointing results, often because the lab studies on which they are based were themselves flawed.

For the full essay see:

Harris, Richard. “Dismal Science In the Search for Cures.” The Wall Street Journal (Saturday, April 8, 2017 [sic]): C1-C2.

(Note: bracketed year added.)

(Note: the online version of the essay was updated April 7, 2017 [sic], and has the title “The Breakdown in Biomedical Research.”)

The essay quoted above is adapted from Mr. Harris’s book:

Harris, Richard. Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. New York: Basic Books, 2017.

The 2005 paper by Ioannidis mentioned above is:

Ioannidis, John P. A. “Why Most Published Research Findings Are False.” PLoS Medicine 2, no. 8 (2005): 696-701.

Policy Reform, Such as Smaller Research Teams, Needed for Faster Big Breakthroughs

(p. D3) Miracle vaccines. Videophones in our pockets. Reusable rockets. Our technological bounty and its related blur of scientific progress seem undeniable and unsurpassed. Yet analysts now report that the overall pace of real breakthroughs has fallen dramatically over nearly three-quarters of a century.

This month in the journal Nature, researchers reported that their study of millions of scientific papers and patents shows that investigators and inventors have made relatively few breakthroughs and innovations compared with the world’s growing mountain of science and technology research. The three analysts found a steady drop from 1945 through 2010 in disruptive finds as a share of the booming venture, suggesting that scientists today are more likely to push ahead incrementally than to make intellectual leaps.

“We should be in a golden age of new discoveries and innovations,” said Michael Park, an author of the paper and a doctoral candidate in entrepreneurship and strategic management at the University of Minnesota.

. . .

The new method looks more deeply at citations to separate everyday work from true breakthroughs. It tallies citations not only to the analyzed piece of research but also to the previous studies it cites. It turns out that the previous work is cited far more often if the finding is routine rather than groundbreaking. The analytic method turns that difference into a new lens on the scientific enterprise.

The measure is called the CD index after its scale, which goes from consolidating to disrupting the body of existing knowledge.
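
As an illustration (mine, not the researchers’ code), the counting logic described above can be sketched in a few lines of Python. The sketch assumes the commonly described form of the index: a later paper that cites the focal work but none of its references pushes the score toward +1 (disrupting), a paper that cites both pushes it toward -1 (consolidating), and a paper citing only the references dilutes the score.

    def cd_index(focal, references, citing_papers):
        """Sketch of a CD-style score for one focal paper.

        focal: identifier of the focal paper.
        references: set of identifiers the focal paper cites.
        citing_papers: iterable of (paper_id, cited_ids) pairs for later
            papers that cite the focal paper or any of its references.

        Returns a score in [-1, 1]: +1 when later work cites the focal
        paper while ignoring its predecessors (disrupting), -1 when later
        work cites the focal paper together with the earlier studies it
        built on (consolidating).
        """
        focal_only = both = refs_only = 0
        for _, cited in citing_papers:
            cites_focal = focal in cited
            cites_refs = bool(set(cited) & references)
            if cites_focal and cites_refs:
                both += 1        # routine work: predecessors still cited
            elif cites_focal:
                focal_only += 1  # breakthrough signature: predecessors skipped
            elif cites_refs:
                refs_only += 1   # only the earlier studies remain relevant
        total = focal_only + both + refs_only
        return (focal_only - both) / total if total else 0.0

    # Toy check: one disrupting citation, one consolidating citation, and
    # one citation to the predecessors alone give (1 - 1) / 3 = 0.0.
    print(cd_index("focal", {"r1", "r2"},
                   [("p1", {"focal"}), ("p2", {"focal", "r1"}), ("p3", {"r2"})]))

On real data the references and citing papers would come from a citation database; the study described above ran a calculation of this kind across millions of papers and patents, which is why supercomputers were needed.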

Dr. Funk, who helped to devise the CD index, said the new study was so computationally intense that the team at times used supercomputers to crunch the millions of data sets. “It took a month or so,” he said. “This kind of thing wasn’t possible a decade ago. It’s just now coming within reach.”

The novel technique has aided other investigators, such as Dr. Wang. In 2019, he and his colleagues reported that small teams are more innovative than large ones. The finding was timely because science teams over the decades have shifted in makeup to ever-larger groups of collaborators.

In an interview, James A. Evans, a University of Chicago sociologist who was a co-author of that paper with Dr. Wang, called the new method elegant. “It came up with something important,” he said. Its application to science as a whole, he added, suggests not only a drop in the return on investment but a growing need for policy reform.

“We have extremely ordered science,” Dr. Evans said. “We bet with confidence on where we invest our money. But we’re not betting on fundamentally new things that have the potential to be disruptive. This paper suggests we need a little less order and a bit more chaos.”

For the full story see:

William J. Broad. “What Happened to All of Science’s Big Breakthroughs?” The New York Times (Tuesday, January 24, 2023 [sic]): D3.

(Note: ellipses added.)

(Note: the online version of the story has the date Jan. 17, 2023 [sic], and has the same title as the print version.)

The Nature paper mainly discussed in the passages quoted above is:

Park, Michael, Erin Leahey, and Russell J. Funk. “Papers and Patents Are Becoming Less Disruptive over Time.” Nature 613, no. 7942 (Jan. 2023): 138-44.

The paper on team size, co-authored by Wang, is:

Wu, Lingfei, Dashun Wang, and James A. Evans. “Large Teams Develop and Small Teams Disrupt Science and Technology.” Nature 566, no. 7744 (Feb. 2019): 378-82.

Funding People Instead of Projects Allows Researchers to Nimbly Pivot in the Light of Unexpected Discoveries

(p. A2) Patrick Collison, the Irish-born co-founder of payments technology company Stripe Inc., has spent a lot of the past five years pondering the problem of declining scientific productivity.

. . .

Clearly, scientific productivity has something to do with how research is done, not how much. One culprit, in the view of Mr. Collison and many others, is that the institutions that fund science have become process-oriented, narrow-minded and risk-averse. Wary of failure, they favor established researchers pursuing narrowly focused, incremental ideas over younger scientists with more heterodox agendas.

. . .

Yet Mr. Collison criticizes the federal government for failing to bring a much deeper and more eager pool of talent to bear on a multitude of pandemic challenges. Top virologists “were stuck on hold, waiting for decisions about whether they could repurpose their existing funding for this exponentially growing catastrophe,” he wrote in an essay last year with George Mason University economist Tyler Cowen and University of California, Berkeley bioengineering professor Patrick Hsu.

Sensing a need, the three launched Fast Grants in April 2020: awards of $10,000 to $500,000, funded primarily by private donors and approved in 14 days or less.

. . .

When Messrs. Collison, Cowen and Hsu surveyed their recipients about their experiences with traditional funding, 57% told them they spent more than a quarter of their time on grant applications and 78% said they would change their research program a lot if they weren’t constrained in how they spent their current funding.

This reinforces a key insight from metascience, also known as the science of science, namely the value of curiosity-driven research. Heidi Williams, an economist at Stanford University and director of science policy at the Institute for Progress, said grants typically commit a scholar to complete a specific project, even if during the research the project proves less promising than expected.

. . .

In a 2009 paper, Massachusetts Institute of Technology economist Pierre Azoulay and his co-authors demonstrated the benefits of funding people over projects. Researchers backed by the Howard Hughes Medical Institute, which takes such an approach, produce far more widely cited papers—a metric of significance—than similar researchers funded by the National Institutes of Health. Drawing on those lessons, last year, Mr. Collison co-founded the Arc Institute to pre-fund scientists studying complex human diseases for renewable eight-year terms.

For the full commentary, see:

Greg Ip. “CAPITAL ACCOUNT; To Boost Growth, Rethink Science Funding.” The Wall Street Journal (Friday, Nov. 18, 2022): A2.

(Note: ellipses added.)

(Note: the online version of the commentary has the date November 17, 2022, and has the title “CAPITAL ACCOUNT; Stagnant Scientific Productivity Holding Back Growth.”)

The published version of Azoulay’s co-authored 2009 NBER working paper, mentioned above, is:

Azoulay, Pierre, Joshua S. Graff Zivin, and Gustavo Manso. “Incentives and Creativity: Evidence from the Academic Life Sciences.” RAND Journal of Economics 42, no. 3 (Fall 2011): 527-54.

Inflation of the Co-Authorship Bubble

[Graph of co-author inflation omitted. Source of graphic: online version of the WSJ article quoted and cited below.]

(p. A1) . . . , there has been a notable spike since 2009 in the number of technical reports whose author (p. A10) counts exceeded 1,000 people, according to the Thomson Reuters Web of Science, which analyzed citation data. In the ever-expanding universe of credit where credit is apparently due, the practice has become so widespread that some scientists now joke that they measure their collaborators in bulk–by the “kilo-author.”

Earlier this year, a paper on rare particle decay published in Nature listed so many co-authors–about 2,700–that the journal announced it wouldn’t have room for them all in its print editions. And it isn’t just physics. In 2003, it took 272 scientists to write up the findings of the first complete human genome–a milestone in biology–but this past June, it took 1,014 co-authors to document a minor gene sequence called the Muller F element in the fruit fly.
. . .
More than vanity is at stake. Credit on a peer-reviewed research article weighs heavily in hiring, promotion and tenure decisions. “Authorship has become such a big issue because evaluations are performed based on the number of papers people have authored,” said Dr. Larivière.
. . .
Michigan State University physicist Jack Hetherington published a paper on low-temperature physics in Physical Review Letters in 1975 with F.D.C. Willard. His colleagues discovered that his co-author was a Siamese cat only several years later, when Dr. Hetherington started handing out copies of the paper signed with a paw print.
In the same spirit, Shalosh B. Ekhad at Rutgers University so far has published 32 peer-reviewed papers in scientific journals with his co-author Doron Zeilberger. It turns out that Shalosh B. Ekhad is Hebrew for the model number of a personal computer used by Dr. Zeilberger. “The computer helps so much and so often,” Dr. Zeilberger said.
Not everyone takes such pranks lightly.
Immunologist Polly Matzinger at the National Institute of Allergy and Infectious Diseases named her dog, Galadriel Mirkwood, as a co-author on a paper she submitted to the Journal of Experimental Medicine. “What amazed me was that the paper went through the entire editorial process and nobody noticed,” Dr. Matzinger said. When the journal editor realized he had published work crediting an Afghan hound, he was furious, she recalled.
Physicists may be more open-minded. Sir Andre Geim, winner of the 2010 Nobel Prize in Physics, credited H.A.M.S. ter Tisha as his co-author of a 2001 paper published in the journal Physica B. Those journal editors didn’t bat an eye when his co-author was unmasked as a pet hamster. “Not a harmful joke,” said Physica editor Reyer Jochemsen at Leiden University in the Netherlands.
“Physicists apparently, even journal editors, have a better sense of humor than the life sciences,” said Dr. Geim at the U.K.’s University of Manchester.

For the full story, see:
Robert Lee Hotz. “Scientists Observe Odd Phenomenon of Multiplying Co-Authors.” The Wall Street Journal (Mon., Aug. 10, 2015): A1 & A10.
(Note: ellipses added.)
(Note: the online version of the story has the title “How Many Scientists Does It Take to Write a Paper? Apparently, Thousands.”)

In 20th Century, Inventions Had Cultural Impact Twice as Fast as in 19th Century

[Ngram graph of technology terms omitted. I used Google’s Ngram tool to generate it, charting the same technologies as the Ngram that appeared in the print (but not the online) version of the article quoted and cited below. The blue line is “railroad”; the red line is “radio”; the green line is “television”; the orange line is “internet.” The search was case-insensitive.] The print (but not the online) version of the article quoted and cited below includes a caption that describes the Ngram tool: “A Google tool, the Ngram Viewer, allows anyone to chart the use of words and phrases in millions of books back to the year 1500. By measuring historical shifts in language, the tool offers a quantitative approach to understanding human history.”
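
A chart like the one described above can also be fetched programmatically. The sketch below is an assumption-laden illustration: it relies on the unofficial JSON endpoint behind the Ngram Viewer web page (undocumented and subject to change) and on the Python requests library, with parameter names mirroring the web interface.

    import requests

    # Unofficial JSON endpoint behind the Ngram Viewer page; the URL and
    # parameter names mirror the web interface but are assumptions here.
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": "railroad,radio,television,internet",
            "year_start": 1800,
            "year_end": 2019,
            "corpus": "en-2019",
            "smoothing": 3,
            "case_insensitive": "true",
        },
        timeout=30,
    )
    resp.raise_for_status()
    for series in resp.json():
        # Each series carries an "ngram" label and a "timeseries" of yearly
        # frequencies (the phrase's share of all n-grams in the corpus).
        print(series["ngram"], "peaks at", max(series["timeseries"]))

If the endpoint changes, the Ngram Viewer’s interactive page remains the reliable route to the same data.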

(p. 3) Today, the Ngram Viewer contains words taken from about 7.5 million books, representing an estimated 6 percent of all books ever published. Academic researchers can tap into the data to conduct rigorous studies of linguistic shifts across decades or centuries. . . .
The system can also conduct quantitative checks on popular perceptions.
Consider our current notion that we live in a time when technology is evolving faster than ever. Mr. Aiden and Mr. Michel tested this belief by comparing the dates of invention of 147 technologies with the rates at which those innovations spread through English texts. They found that early 19th-century inventions, for instance, took 65 years to begin making a cultural impact, while turn-of-the-20th-century innovations took only 26 years. Their conclusion: the time it takes for society to learn about an invention has been shrinking by about 2.5 years every decade.
“You see it very quantitatively, going back centuries, the increasing speed with which technology is adopted,” Mr. Aiden says.
Still, they caution armchair linguists that the Ngram Viewer is a scientific tool whose results can be misinterpreted.
Witness a simple two-gram query for “fax machine.” Their book describes how the fax seems to pop up, “almost instantaneously, in the 1980s, soaring immediately to peak popularity.” But the machine was actually invented in the 1840s, the book reports. Back then it was called the “telefax.”
Certain concepts may persevere, even as the names for technologies change to suit the lexicon of their time.

For the full story, see:
Natasha Singer. “TECHNOPHORIA; In a Scoreboard of Words, a Cultural Guide.” The New York Times, SundayBusiness Section (Sun., December 8, 2013): 3.
(Note: ellipsis added; bold in original.)
(Note: the online version of the article has the date December 7, 2013.)

“Web Links Were Like Citations in a Scholarly Article”

(p. 17) Page, a child of academia, understood that web links were like citations in a scholarly article. It was widely recognized that you could identify which papers were really important without reading them–simply tally up how many other papers cited them in notes and bibliographies. Page believed that this principle could also work with web pages. But getting the right data would be difficult. Web pages made their outgoing links transparent: built into the code were easily identifiable markers for the destinations you could travel to with a mouse click from that page. But it wasn’t obvious at all what linked to a page. To find that out, you’d have to somehow collect a database of links that connected to some other page. Then you’d go backward.
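
A minimal sketch of the inversion the passage describes, with toy data of my own: forward links can be read straight off each page, and “going backward” means building a reverse index and tallying in-links, the web analogue of counting citations.

    from collections import defaultdict

    # Outgoing links are visible in each page's code; these are toy data.
    forward_links = {
        "page_a": ["page_b", "page_c"],
        "page_b": ["page_c"],
        "page_c": [],
        "page_d": ["page_b", "page_c"],
    }

    # "Go backward": invert the forward graph into a backlink index.
    backlinks = defaultdict(set)
    for source, destinations in forward_links.items():
        for dest in destinations:
            backlinks[dest].add(source)

    # Tallying in-links mirrors counting citations to a scholarly paper.
    for page in sorted(forward_links):
        print(page, "is linked to by", len(backlinks[page]), "pages")

Google’s PageRank went further by weighting links from important pages more heavily, but a reverse index of this kind is the data structure that makes any backward analysis of the link graph possible.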

Source:
Levy, Steven. In the Plex: How Google Thinks, Works, and Shapes Our Lives. New York: Simon & Schuster, 2011.

When Bibliometrics Are a Matter of Life and Death

(p. 51) . . . it is essential, if at all possible, to have a go-to physician expert and authority when one has a newly diagnosed, serious condition, such as a brain tumor, a neurologic condition like multiple sclerosis or Parkinson’s disease, or a heart valve abnormality. How do you find that individual doctor?
In order to leverage the Internet and gain access to state-of-the-art expertise, you need to identify the physician who conducts the leading research in the field. Let’s pick pancreatic cancer as an example of a serious condition that often proves to be rapidly fatal. The first step is to go to Google Scholar and find the top-cited articles for that condition by typing in “pancreatic cancer.” They are generally listed in order by descending number of citations. Look for the senior, last author of the articles. The last author of the top-listed paper in the Journal of Clinical Oncology from 1997 is Daniel D. Von Hoff, with over 2,000 citations (“cited by … ” appears at the end of each hit). Now you may have identified an expert. Enter “Daniel Von Hoff” into PubMed (www.ncbi.nlm.nih.gov/sites/pubmed) to see how many papers he has published: 567. Most are related to pancreatic cancer or cancer research.
(p. 52) Now go back to Google Scholar and enter his name, and you’ll see over 24,000 hits–this number includes papers that cite his work. There are some problems with these websites, since getting citations by other peer-reviewed publications takes time; if a breakthrough paper is published, it will take years to accumulate hundreds, if not thousands, of citations. Thus, the lag time or incubation phase of citations may result in missing a rising star. If it is a common name, there may be an admixture of citations of different researchers with the same name, albeit on different topics, so it is useful to enter all elements, including the middle initial, and to scan the topic list to alleviate that problem. For perspective, a paper that has been cited 1,000 times by others is rare and would be considered a classic. In this example, the top paper by Von Hoff is from 1997, a long time ago, and he is no longer at the University of Texas, San Antonio–he moved to Phoenix, Arizona. How would you find that out? Look for Daniel D. Von Hoff using a search engine such as Google or Bing, and look up his profile on Wikipedia. Without any help from any doctor, you will have found the country’s leading authority on pancreatic cancer. And you will have also identified some backups at Johns Hopkins using the same methodology.
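
The PubMed step lends itself to scripting. The sketch below uses NCBI’s documented E-utilities search endpoint to fetch the count of records matching an author query; the exact number will have drifted upward since the 567 papers the book reports.

    import requests

    # NCBI's documented E-utilities search endpoint for PubMed.
    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(author: str) -> int:
        """Count PubMed records matching an author query."""
        resp = requests.get(
            ESEARCH,
            params={
                "db": "pubmed",
                "term": f"{author}[Author]",
                "retmode": "json",
                "retmax": 0,  # only the count is needed, not record ids
            },
            timeout=30,
        )
        resp.raise_for_status()
        return int(resp.json()["esearchresult"]["count"])

    # The count today will exceed the 567 papers the book reports.
    print(pubmed_count("Von Hoff DD"))

Google Scholar offers no comparable official API, so the citation-ranking step of the method stays a manual search.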

Source:
Topol, Eric. The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Health Care. New York: Basic Books, 2012.
(Note: initial ellipsis added; parenthetical ellipsis in original.)