Randomized controlled trials (RCTs) have long been considered the “gold standard” of medical research. RCTs are typically large-scale studies that randomly assign individuals to an intervention or control group in order to measure the positive or negative effects of the intervention. Their results are often regarded as irrefutable proof because they compare how one group responds to a treatment against how an otherwise identical group fares without it.
However, recent meta-research by Dr. John Ioannidis suggests otherwise. Using several different methodologies, Dr. Ioannidis of the Department of Hygiene and Epidemiology at the University of Ioannina in Greece has demonstrated that as much as 90% of published medical research is flawed, ranging from exaggerated or misleading to entirely incorrect. This bias can arise from the questions being asked, the study design, who is recruited, what is measured, and, most often, how the data are analyzed and presented. TuftsNow states that: "Ioannidis and his colleagues found that approximately a quarter of the most influential randomized studies in the medical literature were refuted within a few years, and even the large ones, which offer more protection from error, can be wrong 10 percent of the time."
These results have understandably ignited significant debate about the usefulness of randomized controlled trials. What explains this apparent phenomenon? Ioannidis argues that it stems from the incentive structure motivating researchers: in order to obtain increasingly scarce funding and tenured positions, scientists are more likely to manipulate or overstate results, often unintentionally. This process weakens the reliability of medical research.
The peer-review process was designed to filter out such erroneous or fraudulent studies. In practice, however, only the most prominent findings are actually re-tested, because only those offer researchers the prospect of funding and renown. What other solutions might rectify this situation?
According to Ioannidis, scientists must admit their mistakes instead of disguising them as successes. Studies must also be longer in duration, so that they measure long-term health outcomes and disease patterns rather than easily measured health markers, or “soft outcomes”. For instance, we imagine he would say that a study of a single intervention for postpartum hemorrhage should measure changes in the rate of maternal mortality over, say, five years, not just marginal reductions in blood loss over six months.
In light of this debate, what can we really conclude from the recent multitude of RCTs on misoprostol? There are quite a number of misoprostol studies ongoing, but by “quite a number” we mean about a dozen, perhaps not enough by Ioannidis's standards. We would posit that in instances where only one study is being done on, say, tranexamic acid or other potentially life-saving drugs and devices, a single RCT will provide insufficient information. What implications does this controversy have for the potentially pioneering work being done by J-PAL? A recent BMC Pregnancy & Childbirth meta-analysis noted that RCTs of single interventions in maternal health typically have not shown much effect, perhaps because they tended to be of short duration. However, this rather uplifting study did note that “[a]ll programs which most successfully reduced maternal mortality and remarkable EmOC indicators, had established functioning maternal health care systems with access to skilled birth attendants equipped with appropriate drugs, supplies and equipments and systems of referral to higher levels of care in the event of obstetric complications.”