Lives Lost Due to Peer Review Delays

(p. A25) In this age of instant information, medicine remains anchored in the practice of releasing new knowledge at a deliberate pace. It’s time for medical scientists to think differently about how quickly they alert the public to breakthrough findings.
Last week the National Institutes of Health announced that it had prematurely ended a large national study of how best to treat people with high blood pressure because of its exceptional results.
In this trial of more than 9,000 people age 50 and older with high blood pressure, an aggressive treatment strategy to keep systolic blood pressure below 120 was compared with a conventional one aimed at keeping it below 140. The subjects all had a high risk of heart attacks, stroke and heart failure. The N.I.H. concluded, six years into a planned eight-year study, that for these patients, pushing blood pressure down far below currently recommended levels was very beneficial.
. . .

The new information may justify a more vigorous strategy for treating blood pressure, but for now doctors and patients have been left with incomplete results, some headlines and considerable uncertainty about whether to modify current treatments.
Medicine needs to change its approach to releasing new, important information. Throughout science we are seeing more rapid modes of communication. The traditional approach was not to publish until everything was finalized and ready to be chiseled in stone. But these sorts of delays are unnecessary with the Internet. Moreover, although all the trial data has yet to be tabulated, an analysis was considered sufficiently definitive to lead independent experts to stop the multimillion-dollar study.
We believe that when there is such strong evidence for a major public health condition, there should be rapid release of the information that led to the decision to stop the trial. This approach could easily be accomplished by placing the data on the N.I.H. website or publishing the data on such platforms as bioRxiv.org, which enables fast, open review by the medical community.
. . .
Kudos to the scientists who conducted such a large, complex and important study, one likely to have lifesaving consequences for a condition that can be treated easily in most patients. Now the medical community needs to adopt a new approach in situations like this one to disseminate lifesaving results in a timely, comprehensive and transparent way. Lives depend on it.

For the full commentary, see:
ERIC J. TOPOL and HARLAN M. KRUMHOLZ. “Don’t Sit on Medical Breakthroughs.” The New York Times (Fri., SEPT. 18, 2015): A25.
(Note: ellipses added.)
(Note: the online version of the commentary has the date SEPT. 17, 2015, and the title “Don’t Delay News of Medical Breakthroughs.”)

Those Who Use the “Consensus” Argument on Global Warming Should Endorse Genetically Modified Food

(p. B3) NAIROBI, Kenya — Mohammed Rahman doesn’t know it yet, but his small farm in central Bangladesh is globally significant. Mr. Rahman, a smallholder farmer in Krishnapur, about 60 miles northwest of the capital, Dhaka, grows eggplant on his meager acre of waterlogged land.
As we squatted in the muddy field, examining the lush green foliage and shiny purple fruits, he explained how, for the first time this season, he had been able to stop using pesticides. This was thanks to a new pest-resistant variety of eggplant supplied by the government-run Bangladesh Agricultural Research Institute.
Despite a recent hailstorm, the weather had been kind, and the new crop flourished. Productivity nearly doubled. Mr. Rahman had already harvested the small plot 10 times, he said, and sold the brinjal (eggplant’s name in the region) labeled “insecticide free” at a small premium in the local market. Now, with increased profits, he looked forward to being able to lift his family further out of poverty. I could see why this was so urgent: Half a dozen shirtless kids gathered around, clamoring for attention. They all looked stunted by malnutrition.
. . .
I, . . . , was once in [the] . . . activist camp. A lifelong environmentalist, I opposed genetically modified foods in the past. Fifteen years ago, I even participated in vandalizing field trials in Britain. Then I changed my mind.
After writing two books on the science of climate change, I decided I could no longer continue taking a pro-science position on global warming and an anti-science position on G.M.O.s.
There is an equivalent level of scientific consensus on both issues, I realized, that climate change is real and genetically modified foods are safe. I could not defend the expert consensus on one issue while opposing it on the other.

For the full commentary, see:
MARK LYNAS. “How I Got Converted to G.M.O. Food.” The New York Times, SundayReview Section (Sun., APRIL 26, 2015): 5.
(Note: ellipses, and bracketed word, added.)
(Note: the online version of the commentary has the date APRIL 24, 2015.)

“Strong-Willed Scientists Overstated the Significance of Their Studies”

The New York Times seems open to the idea that strong-willed scientists might overstate their results in food science studies. I wonder if The New York Times would be open to the same possibility in climate science studies?

(p. A19) For two generations, Americans ate fewer eggs and other animal products because policy makers told them that fat and cholesterol were bad for their health. Now both dogmas have been debunked in quick succession.
. . .
Epidemiological data can be used to suggest hypotheses but not to prove them.
Instead of accepting that this evidence was inadequate to give sound advice, strong-willed scientists overstated the significance of their studies.
Much of the epidemiological data underpinning the government’s dietary advice comes from studies run by Harvard’s school of public health. In 2011, directors of the National Institute of Statistical Sciences analyzed many of Harvard’s most important findings and found that they could not be reproduced in clinical trials.
It’s no surprise that longstanding nutritional guidelines are now being challenged.
In 2013, government advice to reduce salt intake (which remains in the current report) was contradicted by an authoritative Institute of Medicine study. And several recent meta-analyses have cast serious doubt on whether saturated fats are linked to heart disease, as the dietary guidelines continue to assert.
Uncertain science should no longer guide our nutrition policy. Indeed, cutting fat and cholesterol, as Americans have conscientiously done, may have even worsened our health.

For the full commentary, see:
NINA TEICHOLZ. “The Government’s Bad Diet Advice.” The New York Times (Sat., FEB. 21, 2015): A19.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date FEB. 20, 2015.)

Experts Are Paid “to Sound Cocksure” Even When They Do Not Know

(p. B1) I think Philip Tetlock’s “Superforecasting: The Art and Science of Prediction,” co-written with the journalist Dan Gardner, is the most important book on decision making since Daniel Kahneman’s “Thinking, Fast and Slow.” (I helped write and edit the Kahneman book but receive no royalties from it.) Prof. Kahneman agrees. “It’s a manual to systematic thinking in the real world,” he told me. “This book shows that under the right conditions regular people are capable of improving their judgment enough to beat the professionals at their own game.”
The book is so powerful because Prof. Tetlock, a psychologist and professor of management at the University of Pennsylvania’s Wharton School, has a remarkable trove of data. He has just concluded the first stage of what he calls the Good Judgment Project, which pitted some 20,000 amateur forecasters against some of the most knowledgeable experts in the world.
The amateurs won–hands down.
. . .
(p. B7) The most careful, curious, open-minded, persistent and self-critical–as measured by a battery of psychological tests–did the best.
. . .
Most experts–like most people–“are too quick to make up their minds and too slow to change them,” he says. And experts are paid not just to be right, but to sound right: cocksure even when the evidence is sparse or ambiguous.

For the full review, see:
JASON ZWEIG. “The Trick to Making Better Forecasts.” The Wall Street Journal (Sat., Sept. 26, 2015): B1 & B7.
(Note: ellipses added.)
(Note: the online version of the review has the date Sept. 25, 2015.)

The book under review is:
Tetlock, Philip E., and Dan Gardner. Superforecasting: The Art and Science of Prediction. New York: Crown, 2015.

“Stunned” Geophysicists Are Headed “Back to the Drawing Board”

(p. A3) Bringing the blur of a distant world into sharp focus, NASA unveiled its first intimate images of Pluto on Wednesday [July 15, 2015], revealing with startling clarity an eerie realm where frozen water rises in mountains up to 11,000 feet high.
. . .
At a briefing held Wednesday [July 15, 2015] at the Johns Hopkins Applied Physics Laboratory in Laurel, Md., mission scientists said they were stunned by what the images reveal.
“It is going to send a lot of geophysicists back to the drawing board,” said Alan Stern, the New Horizons project’s principal investigator, from the Southwest Research Institute in Boulder, Colo.

For the full story, see:
ROBERT LEE HOTZ. “Across 3 Billion Miles of Space, NASA Probe Sends Close-Ups of Pluto’s Icy Mountains.” The Wall Street Journal (Thurs., JULY 16, 2015): A3.
(Note: ellipsis, and bracketed dates, added.)
(Note: the online version of the article has the date JULY 15, 2015, has the title “NASA Releases Close-Up Pictures of Pluto and Its Largest Moon, Charon,” and has some different wording than the print version. The quote above follows the online version.)

Rather than Debate Global Warming Skeptics, Some Label Them “Denialists” to “Link Them to Holocaust Denial”

(p. D2) The contrarian scientists like to present these upbeat scenarios as the only plausible outcomes from runaway emissions growth. Mainstream scientists see them as being the low end of a range of possible outcomes that includes an alarming high end, and they say the only way to reduce the risks is to reduce emissions.
The dissenting scientists have been called “lukewarmers” by some, for their view that Earth will warm only a little. That is a term Dr. Michaels embraces. “I think it’s wonderful!” he said. He is working on a book, “The Lukewarmers’ Manifesto.”
When they publish in scientific journals, presenting data and arguments to support their views, these contrarians are practicing science, and perhaps the “skeptic” label is applicable. But not all of them are eager to embrace it.
“As far as I can tell, skepticism involves doubts about a plausible proposition,” another of these scientists, Richard S. Lindzen, told an audience a few years ago. “I think current global warming alarm does not represent a plausible proposition.”
. . .
It is perhaps no surprise that many environmentalists have started to call them deniers.
The scientific dissenters object to that word, claiming it is a deliberate attempt to link them to Holocaust denial. Some academics sharply dispute having any such intention, but others have started using the slightly softer word “denialist” to make the same point without stirring complaints about evoking the Holocaust.

For the full commentary, see:
Justin Gillis. “BY DEGREES; Verbal Warming: Labels in the Climate Debate.” The New York Times (Tues., FEB. 17, 2015): D1-D2.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date FEB. 12 (sic), 2015.)

“Big Data” Does Not Tell Us What to Measure, and Ignores What Cannot Be Measured

(p. 6) BIG data will save the world. How often have we heard that over the past couple of years? We’re pretty sure both of us have said something similar dozens of times in the past few months.
If you’re trying to build a self-driving car or detect whether a picture has a cat in it, big data is amazing. But here’s a secret: If you’re trying to make important decisions about your health, wealth or happiness, big data is not enough.
The problem is this: The things we can measure are never exactly what we care about. Just trying to get a single, easy-to-measure number higher and higher (or lower and lower) doesn’t actually help us make the right choice. For this reason, the key question isn’t “What did I measure?” but “What did I miss?”
. . .
So what can big data do to help us make big decisions? One of us, Alex, is a data scientist at Facebook. The other, Seth, is a former data scientist at Google. There is a special sauce necessary to making big data work: surveys and the judgment of humans — two seemingly old-fashioned approaches that we will call small data.
Facebook has tons of data on how people use its site. It’s easy to see whether a particular news feed story was liked, clicked, commented on or shared. But not one of these is a perfect proxy for more important questions: What was the experience like? Did the story connect you with your friends? Did it inform you about the world? Did it make you laugh?
(p. 7) To get to these measures, Facebook has to take an old-fashioned approach: asking. Every day, hundreds of individuals load their news feed and answer questions about the stories they see there. Big data (likes, clicks, comments) is supplemented by small data (“Do you want to see this post in your News Feed?”) and contextualized (“Why?”).
Big data in the form of behaviors and small data in the form of surveys complement each other and produce insights rather than simple metrics.
. . .
Because of this need for small data, Facebook’s data teams look different than you would guess. Facebook employs social psychologists, anthropologists and sociologists precisely to find what simple measures miss.
And it’s not just Silicon Valley firms that employ the power of small data. Baseball is often used as the quintessential story of data geeks, crunching huge data sets, replacing fallible human experts, like scouts. This story was made famous in both the book and the movie “Moneyball.”
But the true story is not that simple. For one thing, many teams ended up going overboard on data. It was easy to measure offense and pitching, so some organizations ended up underestimating the importance of defense, which is harder to measure. In fact, in his book “The Signal and the Noise,” Nate Silver of fivethirtyeight.com estimates that the Oakland A’s were giving up 8 to 10 wins per year in the mid-1990s because of their lousy defense.
. . .
Human experts can also help data analysts figure out what to look for. For decades, scouts have judged catchers based on their ability to frame pitches — to make the pitch appear more like a strike to a watching umpire. Thanks to improved data on pitch location, analysts have recently checked this hypothesis and confirmed that catchers differ significantly in this skill.

For the full commentary, see:
ALEX PEYSAKHOVICH and SETH STEPHENS-DAVIDOWITZ. “How Not to Drown in Numbers.” The New York Times, SundayReview Section (Sun., MAY 3, 2015): 6-7.
(Note: ellipses added.)
(Note: the online version of the commentary has the date MAY 2, 2015.)

Insights More Likely When Mood Is Positive and Distractions Few

If insights are more likely in the absence of distractions, then why are business executives so universally gung-ho about imposing on their workers the open-office layouts that are guaranteed to maximize distractions?

(p. C7) We can’t put a mathematician inside an fMRI machine and demand that she have a breakthrough over the course of 20 minutes or even an hour. These kinds of breakthroughs are too mercurial and rare to be subjected to experimentation.

We are, however, able to study the phenomenon more generally. Enter John Kounios and Mark Beeman, two cognitive neuroscientists and the authors of “The Eureka Factor.” Messrs. Kounios and Beeman focus their book on the science behind insights and how to cultivate them.
As Mr. Irvine recognizes, studying insights in the lab is difficult. But it’s not impossible. Scientists have devised experiments that can provoke in subjects these kinds of insights, ones that feel genuine but occur on a much smaller scale.
. . .
The book includes some practical takeaways of how to improve our odds of getting insights as well. Blocking out distractions can create an environment conducive to insights. So can having a positive mood. While many of the suggestions contain caveats, as befits the delicate nature of creativity, ultimately it seems that there are ways to be more open to these moments of insight.

For the full review, see:
SAMUEL ARBESMAN. “Every Man an Archimedes; Insights can seem to appear spontaneously, but fully formed. No wonder the ancients spoke of muses.” The Wall Street Journal (Sat., May 23, 2015): C7.
(Note: ellipsis added.)
(Note: the online version of the review has the date May 22, 2015.)

The book under review is:
Kounios, John, and Mark Beeman. The Eureka Factor: Aha Moments, Creative Insight, and the Brain. New York: Random House, 2015.

Physicists Accepting Theories Based on Elegance Rather than Evidence

(p. 5) Do physicists need empirical evidence to confirm their theories?
. . .
A few months ago in the journal Nature, two leading researchers, George Ellis and Joseph Silk, published a controversial piece called “Scientific Method: Defend the Integrity of Physics.” They criticized a newfound willingness among some scientists to explicitly set aside the need for experimental confirmation of today’s most ambitious cosmic theories — so long as those theories are “sufficiently elegant and explanatory.” Despite working at the cutting edge of knowledge, such scientists are, for Professors Ellis and Silk, “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical.”
Whether or not you agree with them, the professors have identified a mounting concern in fundamental physics: Today, our most ambitious science can seem at odds with the empirical methodology that has historically given the field its credibility.

For the full commentary, see:
ADAM FRANK and MARCELO GLEISER. “Gray Matter; A Crisis at the Edge of Physics.” The New York Times, SundayReview Section (Sun., JUNE 7, 2015): 5.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date JUNE 5, 2015, and the title “A Crisis at the Edge of Physics.”)

The controversial Nature article, mentioned above, is:
Ellis, George, and Joe Silk. “Scientific Method: Defend the Integrity of Physics.” Nature 516, no. 7531 (Dec. 18, 2014): 321-23.

Mathematician Says Mathematical Models Failed

The author of the commentary quoted below is a professor of mathematics at the Baltimore County campus of the University of Maryland.

(p. 4) . . . , in a fishery, the maximum proportion of a population earmarked each year for harvest must be set so that the population remains sustainable.

The math behind these formulas may be elegant, but applying them is more complicated. This is especially true for the Chesapeake blue crabs, which have mostly been in the doldrums for the past two decades. Harvest restrictions, even when scientifically calculated, are often vociferously opposed by fishermen. Fecundity and survival rates — so innocuous as algebraic symbols — can be difficult to estimate. For instance, it was long believed that a blue crab’s maximum life expectancy was eight years. This estimate was used, indirectly, to calculate crab mortality from fishing. Derided by watermen, the life expectancy turned out to be much too high; this had resulted in too many crab deaths being attributed to harvesting, thereby supporting charges of overfishing.
In fact, no aspect of the model is sacrosanct — tweaking its parameters is an essential part of the process. Dr. Thomas Miller, director of the Chesapeake Biological Laboratory at the University of Maryland Center for Environmental Science, did just that. He found that the most important factor for raising sustainability was the survival rate of pre-reproductive-age females. This was one reason, in 2008, after years of failed measures to increase the crab population, regulatory agencies switched to imposing restrictions primarily on the harvest of females. . . .
The results were encouraging: The estimated population rose to 396 million in 2009, from 293 million in 2008. By 2012, the population had jumped to 765 million, and the figure was announced at a popular crab house by Maryland’s former governor, Martin O’Malley, himself.
Unfortunately, the triumph was short-lived — the numbers plunged to 300 million the next year and then hit 297 million in 2014. Some blamed a fish called red drum for eating young crabs; others ascribed the crash to unusual weather patterns, or the loss of eel grass habitat. Although a definitive cause has yet to be identified, one thing is clear: Mathematical models failed to predict it.

For the full commentary, see:
Manil Suri. “Mathematicians and Blue Crabs.” The New York Times, SundayReview Section (Sun., MAY 3, 2015): 4.
(Note: ellipses added.)
(Note: the date of the online version of the commentary is MAY 2, 2015.)
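The sustainable-harvest rule the excerpt describes (cap the fraction of the stock harvested each year so the population can replace itself) can be illustrated with a toy logistic-growth model. This is a hypothetical sketch with made-up parameter values (`r`, `K`, `h`), not the actual Chesapeake blue crab assessment model:

```python
# Toy illustration of the sustainable-harvest idea: with logistic growth at
# intrinsic rate r, a constant harvest fraction h leaves a positive
# equilibrium population only when h < r.
# (Invented numbers; not the Chesapeake stock-assessment model itself.)

def simulate(r, K, h, n0, years):
    """Logistic growth with a fixed annual harvest fraction h."""
    n = n0
    for _ in range(years):
        n = n + r * n * (1 - n / K) - h * n  # growth minus harvest
        n = max(n, 0.0)                      # population cannot go negative
    return n

K = 800.0   # carrying capacity (made-up units, millions of crabs)
r = 0.5     # intrinsic growth rate per year (assumed)

sustainable = simulate(r, K, h=0.3, n0=300.0, years=50)  # h < r: persists
overfished  = simulate(r, K, h=0.6, n0=300.0, years=50)  # h > r: collapses
print(sustainable, overfished)
```

In this sketch the stock settles at the equilibrium K(1 − h/r), here 320, whenever the harvest fraction stays below the growth rate, and collapses toward zero when it exceeds it; the hard part in practice, as the excerpt notes, is that parameters like survival rates are difficult to estimate.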

A Highly Mathematical Model Endorses Friedman’s View that Feds Directed Economics toward Highly Mathematical Models

(p. 1138) . . . , in many areas, the existing organization of research is characterized by large research institutions staffed with hundreds of researchers and national funding agencies who set the research agenda for the field. Given the size of such institutions, if they decide to launch a new research program, then the critical mass of scholars can be reached with certainty, and individual researchers need not fear the coordination risk. Researchers should thus choose to work on that research topic, provided that they perceive an expected reward that is larger than s. (p. 1139) Unfortunately, if the large institution selects a poor idea (with a small or even negative θ), it would then be responsible for the emergence of a strand of research with modest scientific value. As an example, Diamond (1996) recalls Milton Friedman’s criticism of the U.S. National Science Foundation, which, in his opinion, has directed the economics profession toward a highly mathematical model.12
. . .
12. Ironically, his opinion is endorsed in this paper by a “highly mathematical model.”

Source:
Besancenot, Damien, and Radu Vranceanu. “Fear of Novelty: A Model of Scientific Discovery with Strategic Uncertainty.” Economic Inquiry 53, no. 2 (April 2015): 1132-39.
(Note: ellipses added; italics in original.)

The 1996 Diamond article mentioned above is:
Diamond, Arthur M., Jr. “The Economics of Science.” Knowledge and Policy 9, nos. 2/3 (Summer/Fall 1996): 6-49.
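The coordination logic in the Besancenot-Vranceanu excerpt can be sketched as a toy payoff calculation. The numbers here (theta, s, and the adoption probability) are my own invented illustration, not the paper's actual model, which treats the strategic uncertainty formally:

```python
# Toy sketch of the coordination story in the excerpt: switching to a new
# topic costs s, and the reward theta accrues only if a critical mass of
# peers also adopts the topic. (Illustrative numbers, not the paper's model.)

def expected_payoff(theta, s, p_critical_mass):
    """Expected net reward from switching to a new research topic."""
    return p_critical_mass * theta - s

# Decentralized case: each researcher doubts that others will join,
# so even a good idea (theta > s) is not worth the switching cost.
decentralized = expected_payoff(theta=2.0, s=1.0, p_critical_mass=0.3)

# A large funding agency that launches the program removes the coordination
# risk: critical mass is reached with certainty, so researchers switch
# whenever theta > s.
coordinated = expected_payoff(theta=2.0, s=1.0, p_critical_mass=1.0)

# But if the agency picks a poor idea (small or negative theta), everyone
# is coordinated onto a research strand with modest scientific value.
poor_idea = expected_payoff(theta=-0.5, s=1.0, p_critical_mass=1.0)

print(decentralized, coordinated, poor_idea)
```

The sketch shows both halves of the excerpt's argument: central coordination removes the strategic risk that deters individually rational researchers, but it also concentrates the cost of a bad pick.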