An essay in the Journal of the Royal Society of Medicine by Alpers et al. (J R Soc Med 2015; 108: 8–10), titled “Evolution of evidence-based medicine to detect evidence mutations”, seems to attach evolutionary mechanisms to a social phenomenon, an approach I deeply distrust because it has produced such warped concepts as social Darwinism. However, the authors use the allusion to genetics elegantly as a framework to illustrate examples of biased evidence in medical science. Their description of sometimes not-so-hidden data that has not been adequately considered in the development of medical evidence is revealing: data from studies that do not back marketing claims and remains buried in FDA reports, studies conducted by researchers found guilty of scientific misconduct that have never been retracted, and data that has been published repeatedly in several papers. Meta-analyses might therefore lack important studies or overvalue invalid data. Other biasing influences on meta-analyses identified by the authors are less clear-cut: studies performed in populations that are not representative of the main patient population for a treatment seem to be more the rule than the exception, given that the old and multimorbid patients physicians most often encounter today are not eligible for most studies. The “academic inbreeding” that arises when most results on a topic are produced by a single working group may well be real, yet there are many topics outside the mainstream of medical science that are only covered by a small group of scientists.
In fact, there are many more issues in study selection for meta-analyses with the potential to bias results that are at least as important as twisted data. Examples are language restrictions imposed by those searching for studies, the combination of studies across long time spans during which many other aspects of therapy have changed, or differences in endpoint definitions between studies. Indeed, apart from death, there are few clinical events that leave no room for deviation in their assessment. Taken together, the right selection of studies for a meta-analysis will seldom be crystal clear, because authors will decide differently which level of heterogeneity in certain parameters is acceptable. For this reason I do not agree with the viewpoint that there are too many meta-analyses around in most areas – some extreme examples of a meta-analysis-to-RCT ratio of 1:1 (BMJ) excluded. On the contrary, a set of meta-analyses on a topic reaching different conclusions is itself strong evidence that the available studies have failed to provide sufficient evidence.
Coming back to the biased evidence that Alpers et al. call “evidence mutations”, the situation is the same as with genetic mutations: they may open up important new viewpoints on a topic, or they may trigger the proliferative growth of distorted evidence. You will only find out which is the case if you are willing to dig deep into the subject or to find somebody experienced you can trust.