Citing good news

In contrast to the public media, scientific authors seem to prefer good news over bad. They cite studies with positive results significantly more often than studies that report having found nothing new. Is this behaviour scientifically unsound?

A Dutch study group (Duyx et al. 2017) published a systematic review and meta-analysis of studies on the effect of positive versus negative study results on the number of citations in other papers. Though the authors included data from the social, natural and biomedical sciences, the latter provided 73% of all studies included in the review and 86% of the studies included in the meta-analysis. The results therefore mainly reflect the situation in the medical sciences. The authors found that studies with statistically significant findings were cited 1.6 times as often as those with non-significant findings. Furthermore, articles stating that the results supported the authors' hypothesis were cited 2.7 times as often.

In their meta-analysis, Duyx et al. identified the journal impact factor as the factor most strongly associated with citation frequency, followed by positive study results. In contrast, research quality, sample size and research design featured lower on the list of relevant factors. However, as the impact factor itself reflects citation frequency at the journal level, it is not really independent of the citation frequency of individual articles.

According to the Dutch investigators, citing authors' focus on positive results creates a citation bias: studies with positive results are cited more often than those with negative results. They claim that this adds to the well-known publication bias, causing “an over-representation of positive results and unfounded beliefs”. In the discussion of their findings they state: “our results suggest that citations are mostly based on the conclusion that authors draw rather than the underlying data.”
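
To see how the two biases might compound, here is a back-of-the-envelope sketch in Python. Only the 1.6 citation ratio comes from the meta-analysis; the share of positive results and the publication advantage are purely illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope sketch of how publication bias and citation bias compound.
# The 1.6x citation ratio is from Duyx et al.; all other numbers are illustrative.
positive_share_of_results = 0.5   # assume half of all completed studies are positive
publication_advantage = 2.0       # assume positive studies are twice as likely to be published
citation_advantage = 1.6          # positive studies cited 1.6 times as often (Duyx et al.)

# Step 1: publication bias shifts the mix of the published literature.
pub_pos = positive_share_of_results * publication_advantage
pub_neg = (1 - positive_share_of_results) * 1.0
share_published_positive = pub_pos / (pub_pos + pub_neg)

# Step 2: citation bias shifts the mix of what readers actually encounter in citations.
cit_pos = share_published_positive * citation_advantage
cit_neg = (1 - share_published_positive) * 1.0
share_cited_positive = cit_pos / (cit_pos + cit_neg)

print(f"Positive results among published studies: {share_published_positive:.0%}")  # ~67%
print(f"Positive results among citations:         {share_cited_positive:.0%}")      # ~76%
```

Under these assumed numbers, half of all results are positive, but roughly three quarters of citations point to positive findings.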

After the initial, spontaneous nod, one starts to wonder: do the authors really mean to imply that significant results in peer-reviewed journals give rise to unfounded beliefs? I am not sure whether this can be stated in such a general way. If a study reports significant results, it is exactly this fact that makes the study attractive to cite. And the fact that the likelihood of citation increases when the study authors consider the results proof of their hypothesis may rather indicate the clinical relevance of the findings, which in turn makes them more attractive to cite.

I feel the authors of the meta-analysis should be more explicit in stating that they have identified a correlation, which does not prove causation. It is the randomised, controlled trial with a significant finding that identifies causal relationships, thereby providing a substantial contribution to scientific advancement that deserves to be cited. And a significant finding may also be a negative one, e.g. if it documents increased mortality. A trial without significant results, on the other hand, leaves one guessing: is the effect under investigation non-existent, or is it there but the study design was insufficient to detect it? Thus, the non-significant results of a single study are simply less relevant for the authors of other papers. So perhaps we are not seeing further proof of scientists uncritically citing positive studies, but simply evidence that studies with significant results are more relevant for the advancement of science than studies without significant findings.
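
The point about insufficient study design can be illustrated with a minimal simulation sketch in Python (all numbers are hypothetical assumptions, not taken from Duyx et al.): even when a modest effect really exists, a small two-arm trial will often fail to reach statistical significance.

```python
# Minimal sketch: simulate many small two-arm trials of a real but modest effect
# and count how often a t-test fails to reach p < 0.05, i.e. how often an existing
# effect goes undetected simply because the study was underpowered.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
true_effect = 0.3      # assumed standardised effect size (Cohen's d)
n_per_arm = 30         # assumed small sample size per arm
n_trials = 10_000      # number of simulated trials

non_significant = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p >= 0.05:
        non_significant += 1

print(f"Real effect missed in {non_significant / n_trials:.0%} of simulated trials")
```

With these assumed parameters, roughly 80% of the simulated trials come out non-significant, which is why a single null result says little about whether the effect exists.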
