Don't Believe What You Read, Redux

In 2005, John P. A. Ioannidis of Greece’s University of Ioannina School of Medicine and Tufts University School of Medicine in Boston shook up the world of science with his provocatively titled, and frighteningly well-reasoned, paper “Why Most Published Research Findings Are False” in PLoS Medicine. Now he’s back, no more sanguine about the state of biomedical science. Bottom line: when it comes to “the latest studies,” take what you read with a grain of salt.

Make that a shaker of salt.

In a paper published this evening, he and his co-authors—Neal S. Young of the National Institutes of Health and Omar Al-Ubaydli of George Mason University in Virginia—argue that “the current system of publication in biomedical research provides a distorted view of the reality of scientific data that are generated in the laboratory and clinic.” Negative results are not reported, statistical flukes are not caught, and the result is a distortion of biomedical reality.

Example: A 2005 study in the Journal of the American Medical Association found that “initial clinical studies are often unrepresentative and misleading.” Of the 49 most-cited papers on the effectiveness of treatments for various diseases, published in top journals from 1990 to 2004, one-quarter of the randomized trials and five of six non-randomized studies had already been contradicted or found to have been exaggerated by 2005.

Lesson: if a finding is important, it will be replicated. Until it is, don’t believe it. How long might you have to wait? “The delay between the reporting of an initial positive study and subsequent publication of concurrently performed but negative results is measured in years,” the scientists write.

In general, small studies are less likely to produce true findings, as are studies that find only a small effect (of, say, a treatment for a disease, or of a disease-causing compound or behavior), studies in which the scientists have a financial interest, and studies in a field where many teams are chasing statistical significance.

In one of the more disturbing examples, which I blogged on in January, scientists found that the public and even doctors have gotten a distorted view of the effectiveness of anti-depressants. Why? Because “among 74 FDA-registered studies, 31% . . . were not published.” Simply put, manufacturers flood the scientific literature with studies that make their drug look good, and bury the other ones. Of the 38 positive studies on anti-depressants given to the FDA, only one was not published; of the 36 negative studies, 22 were not published and 11 were twisted to convey a positive outcome. That made it seem that 94% of the published studies were positive, whereas only 51% of the studies submitted to the FDA were.

As I said, make that a shaker.