NEW YORK (Reuters) - So many scientific studies are making incorrect claims that a new service has sprung up to fact-check reported findings by repeating the experiments.
A year-old Palo Alto, California, company, Science Exchange, announced on Tuesday its “Reproducibility Initiative,” aimed at improving the trustworthiness of published papers. Scientists who want to validate their findings will be able to apply to the initiative, which will choose a lab to redo the study and determine whether the results match.
The project sprang from the growing realization that the scientific literature - from social psychology to basic cancer biology - is riddled with false findings and erroneous conclusions, raising questions about whether such studies can be trusted. Not only are erroneous studies a waste of money, often taxpayers’, but they also can cause companies to misspend time and resources as they try to invent drugs based on false discoveries.
“‘Published’ and ‘true’ are not synonyms,” said Brian Nosek, a psychology professor at the University of Virginia in Charlottesville and a member of the initiative’s advisory board.
Last year, Bayer Healthcare reported that its scientists could not reproduce some 75 percent of published findings in cardiovascular disease, cancer and women’s health.
In March, Lee Ellis of M.D. Anderson Cancer Center and C. Glenn Begley, the former head of global cancer research at Amgen, reported that when the company’s scientists tried to replicate 53 prominent studies in basic cancer biology, hoping to build on them for drug discovery, they were able to confirm the results of only six.
The new initiative, said Begley, senior vice president of privately held biotechnology company TetraLogic, “recognizes that the problem of non-reproducibility exists and is taking the right steps to address it.”
The initiative’s 10-member board of prominent scientists will match investigators with a lab qualified to test their results, said Elizabeth Iorns, Science Exchange’s co-founder and chief executive officer. The original lab would pay the second for its work; the price will depend on the experiment’s complexity and the cost of study materials but should not exceed 20 percent of the original study’s cost. Iorns hopes government and private funding agencies will eventually fund replication to improve the integrity of the scientific literature.
The two labs would jointly write a paper, to be published in the journal PLoS One, describing the outcome. Science Exchange will issue a certificate if the original result is confirmed.
Founded in 2011, Science Exchange serves as a clearinghouse that connects researchers who want to outsource parts of experiments, from DNA sequencing ($2.50 per sample) to bioinformatics ($50 per hour). It is funded largely by venture capitalists and angel investors.
INCENTIVES FOR INTEGRITY
It may not be obvious why scientists would subject their work to a test that might overturn its results, and pay for the privilege, but Iorns is optimistic. “It would show you are a high-quality lab generating reproducible data,” she said. “Funders will look at that and be more likely to support you in the future.”
If results are reproduced, “it will increase the value of any technology the researcher might try to license,” she said, adding that it would also provide assurance to, say, a pharmaceutical company that the result is sound and might lead to a new drug.
Experts not affiliated with Science Exchange noted that if science were working as it should, the initiative would not be necessary. “Science is supposed to be self-correcting,” said Begley. “What has surprised me is how long it takes science to self-correct.” There are too many incentives to publish flashy, but not necessarily correct, results, he added.
Virginia’s Nosek experienced the temptation firsthand. He and his colleagues recently ran a study in which 1,979 volunteers looked at words printed in different shades of gray and chose which hue on a color chart - from nearly black to almost white - matched that of the printed words. Self-described political moderates perceived the grays more accurately than liberals or conservatives, who literally saw the world in black and white, Nosek said.
Rather than publishing the study, Nosek and his colleagues redid it, with 1,300 people. The ideology/shades-of-gray effect vanished. They decided not to publish, figuring the first result was a false positive.
Typically, scientists must show that their results had less than a 5 percent probability of arising by chance alone. By that measure, one in 20 studies can be expected to make a claim about reality that in fact arose by chance alone, said John Ioannidis of Stanford University, who has long criticized the profusion of false results.
With some 1.5 million scientific studies published each year, by chance alone some 75,000 are probably wrong.
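The back-of-the-envelope arithmetic behind that figure can be checked directly. The sketch below (not from the article) assumes every study uses the conventional 5 percent significance threshold, a deliberate simplification:

```python
# Hypothetical check of the figures above: if 1.5 million studies are
# published per year and each uses a 5 percent significance threshold,
# this is the number that could clear the bar by chance alone.
studies_per_year = 1_500_000
alpha = 0.05  # conventional significance threshold

expected_false_positives = int(studies_per_year * alpha)
print(expected_false_positives)  # 75000
```

The real rate depends on how many tested hypotheses are actually false and on the statistical power of each study, which is why Ioannidis argues the true figure is far higher.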
In addition, Ioannidis said, “people start playing with how they handle missing data, outliers, and other statistics,” which can make a result look real when it’s not.
“People are willing to cut corners” to get published in a top journal, he said.
There are numerous ways to do that. Researchers can stop collecting data as soon as they obtain the desired result rather than gather more as originally planned. Conversely, they can continue to gather data until they get the desired result.
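The data-collection practices described above, known as optional stopping or "peeking," measurably inflate the false-positive rate. The simulation below is a sketch (not from the article): data are drawn under a true null effect, and the "peeking" analyst checks for significance after every batch, stopping at the first hit:

```python
# Simulation of optional stopping under a true null (no real effect).
# A "peeking" analyst tests after every batch of 10 observations and
# stops at the first nominally significant result; an honest analyst
# tests once, at the full planned sample size.
import math
import random

ALPHA_Z = 1.96  # two-sided 5% critical value for a z-test

def significant(sample):
    """Two-sided one-sample z-test against mean 0, known sd of 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return abs(z) > ALPHA_Z

def run_study(peek, rng, max_n=100, batch=10):
    """One simulated study under the null; True means a 'positive' finding."""
    data = []
    while len(data) < max_n:
        data.extend(rng.gauss(0, 1) for _ in range(batch))
        if peek and significant(data):
            return True  # stop early and report the result
    return significant(data)

rng = random.Random(42)
sims = 2000
peeking = sum(run_study(True, rng) for _ in range(sims)) / sims
fixed = sum(run_study(False, rng) for _ in range(sims)) / sims
print(f"false-positive rate, fixed sample size: {fixed:.3f}")    # near 0.05
print(f"false-positive rate, with peeking:      {peeking:.3f}")  # well above 0.05
```

With ten looks at the data, the nominal 5 percent error rate roughly triples or quadruples, even though no individual test was miscomputed.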
How common might such sleights of hand be? In a 2005 paper in PLoS Medicine, Ioannidis used statistical and other methods to show that “most published research findings are false.” It remains the most-viewed paper in the journal’s eight-year history.
“Until recently, people thought you could trust what’s published,” Ioannidis said. “But for whatever reason, we now see that we can’t.”
(Note: Amgen researcher C. Glenn Begley is not related to the author of this story, Sharon Begley.)
Reporting by Sharon Begley; editing by Julie Steenhuysen and Douglas Royalty