On my personal blog, I reported on a conference that dealt with the so-called Decline Effect. The phenomenon behind it is this: at the beginning of a research series, a particularly clear or exciting effect often emerges, at least according to the publications. When others try to repeat these results, they frequently find weaker effects; often the effects diminish or cannot be confirmed at all. Such decline effects can be observed throughout biomedical research, but also in psychological, biological and especially parapsychological research, and have long been known.
Causes of the Decline Effect
This comes – mainly, but not exclusively – from the following phenomenon. Basic researchers in particular – because they work with assays and experiments that are relatively quick and easy to perform – often conduct a few pilot experiments in new fields. These bear no fruit and are discarded. Then someone tinkers around a bit and suddenly a significant result comes out. This is published. The negative results are not mentioned, of course. Who cares about those? The journals, the editors, the scientists, the public in general are interested in positive findings, not negative ones. So a new phenomenon is born. Because it is new and perhaps even spectacular, it is published in the well-known high-impact journals, for it is their business to bring such new, stirring findings to the people. Now a few critical minds come along, read it and try to copy it. Some without success. They then think to themselves: "maybe we did something wrong" and dismiss their negative results. Or, if they are more stubborn, they do not believe that the originally published result was correct, run several replications and variations and then try to publish their negative finding. This will certainly be harder than publishing the original positive finding. They may have to write to several journals, revise their text in response to critical reviews, and add a few more experiments, because friends, acquaintances or colleagues of the original research group, if not members of that group themselves, are likely to be among the reviewers of the negative replication study. So it takes time for the negative findings to be published, if they see the light of day at all.
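To make this mechanism concrete, here is a minimal simulation sketch – purely illustrative, not taken from any of the cited studies, with all numbers assumed for the example. Many small experiments probe an almost non-existent effect, and only the "positive" ones get published:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_labs, n_per_group, true_effect = 1000, 20, 0.1  # tiny true effect (Cohen's d)

published = []
for _ in range(n_labs):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    d = treated.mean() - control.mean()   # observed effect (sd fixed at 1)
    if p < 0.05 and d > 0:                # only significant positive results are "published"
        published.append(d)

print(f"true effect:              {true_effect:.2f}")
print(f"mean published effect:    {np.mean(published):.2f}")   # heavily inflated
print(f"share of labs publishing: {len(published) / n_labs:.0%}")
```

The published literature then shows a large average effect even though the true effect is close to zero; later, larger replications regress toward the truth, and the Decline Effect appears.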
And this is how myths are formed
The originally positive findings make it into public consciousness: selective serotonin reuptake inhibitors are effective for treating depression! So they say. Until, decades later, the negative studies become known and it turns out that they are not that effective. By then the myth has long been born. Or: attention deficit hyperactivity disorder (ADHD) is a brain disease with a clear problem in the dopamine transporters of the basal ganglia, it is proclaimed, and obviously this can only be remedied pharmacologically. Until the corresponding follow-up studies come along that do not confirm the original finding.
For this very reason, it is methodologically important to consider carefully at what stage of the research process a study stands. Is it the very first to claim a new finding? Then caution is required. Is it a study that replicates existing data? Then it is important to look at the effect size. Is it about the same size as in the original? Then the result is robust. Is it much smaller? Then the original finding probably overestimated the effect. This is precisely why large studies are more reliable, and meta-analyses more reliable still, in estimating effects. But even these cannot hide the fact that there is a problem when initially negative findings are suppressed, or when negative findings are published with a long delay or not at all. Then an effect is suggested where there is none at all.
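As a sketch of what such a comparison looks like in practice, here is a standard fixed-effect (inverse-variance) meta-analysis in a few lines. The effect sizes are invented for illustration: one spectacular original study followed by larger, soberer replications.

```python
import numpy as np

# Hypothetical effect sizes (Cohen's d) and standard errors;
# the original study comes first, followed by three larger replications.
effects = np.array([0.80, 0.25, 0.20, 0.15])
ses     = np.array([0.30, 0.10, 0.08, 0.09])   # smaller SE = larger study

weights   = 1.0 / ses**2                       # inverse-variance weights
pooled    = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"original effect: {effects[0]:.2f}")
print(f"pooled effect:   {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```

Because the larger studies carry more weight, the pooled estimate sits far below the original claim; and if the negative studies in the file drawer never enter such an analysis, even the pooled estimate remains too optimistic.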
This is exactly what Ioannidis (1) showed some time ago, setting off a heated debate: he claimed that most published research findings are false, precisely for the reasons described above.
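The core of his argument fits in one formula. If R is the prior odds that a probed relationship is true, α the significance level and 1 − β the statistical power, then the probability that a claimed "significant" finding is actually true – the positive predictive value – is PPV = (1 − β)R / ((1 − β)R + α) in the bias-free special case of his model. A tiny calculation (the scenario numbers are my own illustration) shows how bad things get in exploratory fields:

```python
def ppv(R, alpha=0.05, power=0.80):
    """Positive predictive value of a significant finding
    (Ioannidis 2005, special case without the bias term)."""
    return power * R / (power * R + alpha)

# Exploratory field: only 1 in 10 probed hypotheses is true, power is low.
print(f"R=0.1, power=0.2: PPV = {ppv(0.1, power=0.2):.2f}")  # ~0.29
# Confirmatory setting: half of all hypotheses are true, power is decent.
print(f"R=1.0, power=0.8: PPV = {ppv(1.0):.2f}")             # ~0.94
```

In the first scenario, most "significant" findings are false – exactly Ioannidis's point – and his full model adds a bias term that makes the picture worse still.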
Representation of scientific results in the media
Now, two recent articles have revisited this issue, with two very disturbing outcomes (2, 3). Gonon and colleagues (2) show in their study that this very process also dominates public discourse. Using the example of ADHD, they pick out the "top 10" studies that received the most coverage in the popular press and follow their fate. All of these "top 10" studies reported spectacular new "progress" that science had supposedly made in ADHD research. Following up on these reports, just one of the ten claims of progress held up; the others were either later refuted or substantially weakened. What is worrying is that the press reported only the initial euphoria in detail. The follow-up studies received hardly any attention; after all, they appeared in less high-profile journals. And in many cases the initial positive claim still haunts the public mind, even though it has long since been disproved. Only no one has noticed, because the press no longer reports on it; after all, it is embarrassing to have to revise one's own euphoria. I recommend everyone read the study online for themselves and look at the graphics it contains: decline effects galore, and at their finest. This does not inspire much confidence in the mainstream attitude towards treating ADHD with Ritalin.
The second study (3), on a related topic, shows that the press is not particularly good at spotting the misinterpretations that authors give their studies when the desired result has not materialized. The authors analysed almost 500 press reports on 70 randomized trials. In just under half of the trials, there was a rosy bias, or "spin", in the abstract or text of the study: the authors made the data appear better and more robust than they actually were. The effect is the same when the press passes this interpretation along: people think a positive result has been found where in reality there is none. And lo and behold, the supposedly critical journalists of the newspapers were apparently unable to detect this "spin" resulting from an overly benevolent interpretation of the study results, and simply conveyed it in their reports. In a regression analysis, the only predictor of whether "spin" appeared in a press release, i.e. a positive gloss on an otherwise not-so-spectacular result, was whether such spin was present in the conclusions of the study's abstract.
What does the critical reader conclude from this? Right: journalists are far too busy to read a study carefully; perhaps they are not really competent to do so either, that may also be the case. They prefer to follow the conclusions that the authors themselves give their study in the abstract. Maybe they even read only the abstract. In any case, most of them are obviously incapable of really analysing and reading studies critically. And so, carried by the media, a hype is created around data and results that will in all likelihood later turn out to be untenable.
What do we learn from this? Three lessons:
1. One swallow does not make a summer. Always wait to see if follow-up studies confirm initial findings.
2. Summer rarely comes anyway, and when it does, it comes very late. We live, scientifically speaking, far north of the Arctic Circle and have fewer real findings and breakthroughs than we think, at least as far as medicine and the health sciences are concerned.
3. Whatever appears in the scientific press: it is a good heuristic to initially also entertain the opposite of what is reported as true.
Literature
- Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124
- Gonon, F., Konsman, J.-P., Cohen, D., & Boraud, T. (2012). Why most biomedical findings echoed by newspapers turn out to be false: The case of Attention Deficit Hyperactivity Disorder. PLoS ONE, 7(9), e44275. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0044275
- Yavchitz, A., Boutron, I., Bafeta, A., Marroun, I., Charles, P., Mantz, J., et al. (2012). Misrepresentation of randomized controlled trials in press releases and news coverage: A cohort study. PLoS Medicine, 9(9), e1001308. http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1001308