(p. 1) . . . while all doctors agree about the importance of gauging the quality of evidence, many feel that a hierarchy of methods is simplistic. As the doctor Mark Tonelli has argued, distinct forms of knowledge can’t be judged by the same standards: what a patient prefers on the basis of personal experience; what a doctor thinks on the basis of clinical experience; and what clinical research has discovered — each of these is valuable in its own way. While scientists concur that randomized trials are ideal for evaluating the average effects of treatments, such precision isn’t necessary when the benefits are obvious or clear from other data.
Clinical expertise and rigorous evaluation also differ in their utility at different stages of scientific inquiry. For discovery and explanation, as the clinical epidemiologist Jan Vandenbroucke has argued, practitioners’ instincts, observations and case studies are most useful, whereas randomized controlled trials are least useful. Expertise and systematic evaluation are partners, not rivals.
Distrusting expertise makes it easy to confuse an absence of randomized evaluations with an absence of knowledge. And this leads to the false belief that knowledge of what works in social policy, education or fighting terrorism can come only from randomized evaluations. But by that logic (as a spoof scientific article claimed), we don’t know if parachutes really work because we have no randomized controlled trials of them.
For the full commentary, see:
Kennedy, Pagan. “The Thin Gene.” The New York Times, SundayReview Section (Sun., Nov. 27, 2016): 1 & 6.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date Nov. 25, 2016.)
The academic article calling for double-blind randomized trials to establish the efficacy of parachutes is:
Smith, Gordon C. S., and Jill P. Pell. “Parachute Use to Prevent Death and Major Trauma Related to Gravitational Challenge: Systematic Review of Randomised Controlled Trials.” BMJ 327, no. 7429 (Dec. 18, 2003): 1459-61.