Youth Ki Awaaz

WTF: ‘A Large Fraction Of Scientific Research That Gets Published Is False’

By Kabir Sharma:

Almost every day we read about a scientific study linking something we eat or do with increased chances of contracting an ailment. Often, the next day, we read that science has proven the exact opposite. Which is true?
It turns out, probably neither. A large fraction of scientific research that gets published is false.

This astonishing revelation is the result of pioneering meta-research (research on research) on the truth of published scientific findings by Dr. John Ioannidis, Professor of Health Research and Policy at Stanford University, and his team.
In his 2005 paper, strikingly titled ‘Why Most Published Research Findings Are False’, Ioannidis, now a sort of hero in the scientific community, found two major factors behind this: non-conformance with good scientific practice, such as choosing non-randomized (i.e. selective) or small samples and misinterpreting statistics; and bias. The very existence of bias, defined as “the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced”, is a most dangerous sign for science.


What this means is that if a research team has an incentive to arrive at a particular result, and enough leverage in its method, it is likely to be able to show the result it wants.
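Ioannidis formalised this with a quantity called the positive predictive value (PPV): the probability that a claimed finding is actually true. A minimal Python sketch of the formula from the 2005 paper follows; the numerical inputs are illustrative assumptions, not figures from the paper.

```python
# Positive predictive value (PPV) from Ioannidis (2005).
# R: pre-study odds that the tested relationship is true.
# alpha: type-I error rate; beta: type-II error rate (power = 1 - beta).

def ppv(R, alpha=0.05, beta=0.2):
    """Probability that a statistically significant finding is true."""
    true_positives = (1 - beta) * R   # true relationships that test positive
    false_positives = alpha           # null relationships that test positive
    return true_positives / (true_positives + false_positives)

# With 1-in-10 pre-study odds and 80% power, most positives are real:
print(round(ppv(R=0.1), 3))            # 0.615
# Drop power to 20% (e.g. a small, underpowered study):
print(round(ppv(R=0.1, beta=0.8), 3))  # 0.286 - most positives are false
```

Bias enters the paper's fuller formula as an extra term that inflates the false-positive share; but even without it, low power and long-shot hypotheses alone can drag the PPV below a coin flip.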

Though the paper has become hugely influential and popular among the scientific community it targeted, it surprisingly did not shock many scientists. It seems everyone already had an idea of what was going on.

Consensus Across Fields: Irreproducibility

Ioannidis’ work showed theoretically how, and how much, these factors influence published scientific papers, and focused on the biomedical sector (which has direct implications for healthcare). The findings have since been borne out by various empirical studies. Three years ago, when the drug company Amgen tried to replicate 53 “landmark publications” in the field of cancer research, 47 could not be replicated. Researchers at pharmaceutical companies have reported that their attempts to replicate the conclusions of peer-reviewed papers fail at rates upwards of 75%.

“You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct,” says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine.

Over recent years Ioannidis’ findings have been shown to hold true in multiple fields of enquiry, including psychology, economics, and the physicochemical and social sciences.

The major findings of the seminal research work are discussed below.

The Greater The Financial And Other Interests And Prejudices, The Less Likely The Findings Are To Be True.

For a large fraction of studies, financial interest is the debilitating factor. Funders want results in line with what they wish to project, and they seem to be getting them. It is always worth checking who funded a piece of research.

A very recent publication showed that an astonishing 97% of drug trials back the firm that paid for them. A study finding chocolate to be “scientifically proven to help with fading concentration” was funded by Hershey. Tobacco companies have a long history of funding fraudulent health research, prompting the WHO to call this “the most astonishing systematic corporate deceit of all time.” Today this legacy is carried forward by oil companies, who are funding scientists denying global warming and dozens of groups sowing doubts about climate change.

Another issue is that scientists need publications to progress in their careers, and people like to see big, flashy results; hence there is much incentive to fudge studies. Such manipulation could be done by “serendipitous inclusion or exclusion of certain patients or controls… commercially available data mining packages actually are proud of their ability to yield statistically significant results through data dredging.”

The Hotter A Scientific Field, The Less Likely The Findings Are To Be True

Fast-moving fields of research with many competing teams working on the same problem see high pressure to come up with positive results, resulting in the publication of numerous false findings. And the higher the number of teams, the lower the probability of a published result being true!
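This effect can be seen in a small Monte Carlo sketch; it is my own construction with illustrative numbers, not a calculation from the paper. Assume 10% of tested hypotheses are true, 80% power, a 5% false-positive rate, and that the first team to obtain a positive result publishes it.

```python
import random

random.seed(0)

def published_ppv(n_teams, prior=0.1, power=0.8, alpha=0.05,
                  trials=100_000):
    """Fraction of published positives that are true, when any one of
    n_teams getting a positive result is enough to trigger publication."""
    true_hits = false_hits = 0
    for _ in range(trials):
        is_true = random.random() < prior
        p_positive = power if is_true else alpha
        # Does at least one of the independent teams get a positive?
        if any(random.random() < p_positive for _ in range(n_teams)):
            if is_true:
                true_hits += 1
            else:
                false_hits += 1
    return true_hits / (true_hits + false_hits)

for n in (1, 5, 20):
    print(n, round(published_ppv(n), 2))
# Under these assumptions, the share of true published positives falls
# from roughly two-thirds with one team to under a fifth with twenty.
```

The mechanism is simple: a true effect only needs to be found once, but every extra team is also an extra chance for a false positive to cross the finish line first.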

And unfortunately, in many fields the siloed investigator writing grants promising to discover something worthy of the Nobel Prize is still the dominant paradigm.

The Smaller The Study, And The Smaller The Effect, The Less Likely The Findings Are To Be True

Small studies (taking a small set of people, animals, or anything else) are far more likely to produce statistically significant results that are in fact false positives, so they should be treated with caution.

Most effects of a big magnitude, like the link between smoking and lung cancer, have already been recognized. Studies today deal with much smaller effects; consequently, to pick out a signal from the small ‘signal to noise’ ratio, they need to be a lot more sensitive, which they often aren’t.
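A toy simulation (my own, with illustrative numbers) makes the point. Give every study the same small true effect, 0.2 standard deviations, and compare a small study with a large one:

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.2   # small true effect, in standard-deviation units

def significant_estimates(n, trials=5_000):
    """Effect estimates from the simulated runs that reached z > 1.96."""
    hits = []
    for _ in range(trials):
        sample = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
        mean = statistics.fmean(sample)
        z = mean / (1 / n ** 0.5)   # z-test with known sd = 1
        if z > 1.96:
            hits.append(mean)
    return hits

for n in (20, 500):
    hits = significant_estimates(n)
    print(n, round(len(hits) / 5_000, 2), round(statistics.fmean(hits), 2))
# The small study (n=20) detects the effect only a small fraction of the
# time, and the runs that do reach significance report an effect more
# than twice the true one; the large study (n=500) detects it almost
# always, with an accurate estimate.
```

The inflated estimates from the small study are the ‘winner’s curse’: when only lucky draws clear the significance bar, the published effect sizes are systematically exaggerated.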

Delving a little deeper, the extent of these issues becomes clearer.

Publication Bias

Scientific journals, the media, and people in general like to see positive results. Research teams reaching positive, flashier conclusions by fluke, or by coercive means, are thus more likely to be published than teams showing negative results, perhaps through more robust analyses.
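A short sketch (my own toy model, not drawn from any of the studies cited here) shows how this skews the record. Below, the true effect is exactly zero, yet if journals accept only ‘significant’ positives, every published paper reports a sizeable effect:

```python
import random
import statistics

random.seed(2)
published = []
for _ in range(1_000):                  # 1,000 independent null studies
    sample = [random.gauss(0, 1) for _ in range(30)]
    mean = statistics.fmean(sample)
    z = mean / (1 / 30 ** 0.5)          # z-test with known sd = 1
    if z > 1.96:                        # only positive results get published
        published.append(mean)

print(len(published))                          # a small handful get through
print(round(statistics.fmean(published), 2))   # each reports a clear "effect"
```

A reader of the resulting literature would see a handful of mutually confirming positive findings and no trace of the hundreds of null results that never appeared.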

The problem starts at the funding stage, where only the flashiest proposals get funding and are then bound to deliver. And indeed, for reasons of career, tenure and funding, it is important for scientists to publish prolifically, resulting in a huge amount of manipulation.

Isabelle Boutron, a professor of epidemiology at René Descartes University in Paris, points out that peer reviewers are influenced by trial results: in one study, they not only favored a paper showing a positive effect over a near-identical paper showing no effect, but also gave the positive paper higher scores for its scientific methods.

The Harm Of Little Negative Research Getting Published

Andrew Wakefield’s study had concluded that the measles-mumps-rubella (MMR) vaccine caused autism, and became a huge sensation when it came out. A later study definitively refuted it, but failed to get the same response from the public. Indeed, negative results rarely become popular, even when they do get published. This gives a skewed representation of the evidence: studies continue to be referenced for years after they are debunked, and a lot of research builds on work that was wrong in the first place.

Besides, many scientists don’t like to step on each other’s toes. “They feel a lot of pressure not to contradict each other,” said Elizabeth Iorns of Science Exchange. “There’s a lot of evidence that if you do that, it’ll be negative for your career.”

Problems In The Peer Review Process

Though scientists constantly praise the value of the peer-review process, researchers admit among themselves that biased, erroneous, and even blatantly fraudulent studies frequently slip through it. Nature, the eminent science journal, stated in a 2006 editorial, “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.” What’s worse, the peer-review process often pressures researchers to shy away from doing genuinely groundbreaking research, instead building on the findings of their colleagues (that is, their potential reviewers).

Further, reviewers are often unable to spend enough time going through papers in all their depth before accepting them. Even worse, famous scientists may use the peer-review process to suppress the publication of findings that refute their own, condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable.

Monetary Disaster

A series of articles in the Lancet noted that, in 2010, about $200 billion (a staggering 85% of the world’s spending on medical research) was squandered on studies that were flawed in their design, redundant, never published or poorly reported.

Way Ahead

Among suggestions to improve the state of affairs, there is consensus that steps to ensure the replicability of studies, for example by improving data accessibility, must be incorporated. An increasing number of journals, including Nature and Science, have recently begun to emphasize this, as well as the importance of negative results.

Also, investigators ought to be rewarded for performing good science rather than for submitting flamboyant grant proposals and papers making extravagant claims. Better peer review at multiple levels (including post-publication review) and promoting large collaborative scientific teams are other suggestions.

Ioannidis, now director of the new Meta-Research Innovation Center at Stanford (METRICS), designed to perform research on research, remains a great believer in the scientific method, which he feels is the greatest thing achieved by humans. “I am optimistic. I think that science is making progress. There’s no doubt about that. It’s just an issue of how much and how quickly.”
