Why is it that nobody can reproduce anybody else’s findings?
Pharmacologist exposes rampant incompetence and outright fraud in biomedical research
Biomedical scientists around the world publish around 1 million new papers a year. But a staggering number of these cannot be replicated. Drastic action is needed now to clean up this mess, argues pharmacologist Csaba Szabo in Unreliable, his upcoming book on irreproducibility in the scientific literature.
“The things that we’ve tried are not working,” says Szabo, a professor at the University of Fribourg. “I’m trying to suggest something different.”
Unreliable, which will be available in March, is a disturbing but compelling exploration of the causes of irreproducibility—from hypercompetition and honest errors, to statistical shenanigans and outright fraud.
In the book, Szabo argues that there is no quick fix and that incremental efforts such as educational workshops and checklists have made little difference. He says the whole system has to be overhauled with new scientific training programs, different ways of allocating funding, novel publication systems, and more criminal charges for fraudsters.
“We need to figure out how to reduce the waste and make science more effective,” Szabo says.
This interview was edited for clarity and length.
Why did you write this book?
Like most scientists, I go to scientific meetings and go out for beers with other scientists, and this topic keeps coming up. “Why is it that nobody can reproduce anybody else's findings?” we ask. This is not a new question. I was sitting around a table with some colleagues and collaborators in New York at the start of my sabbatical, and one of them said, “Somebody should write a book about this.” Another said to me, “You are on sabbatical,” and to be honest, I’d vaguely had this idea already.
It felt like everybody wanted this book to happen, but nobody wanted to write it.
What were your biggest takeaways?
If you look at all the published literature—not just the indexed articles on PubMed but everything that is published anywhere—probably 90% of it is not reproducible. That was shocking even to me. And probably 20–30% of it is totally made up.
I didn't expect the numbers to be that high going into this. It’s just absurd, but this is what I came to conclude.
Just think about all the money wasted. And paper mills [fraudulent organizations that write and publish fake research] make several billion dollars per year. This is a serious industry.
Another big shock was learning that the work of trying to do something about the reproducibility crisis—trying to clean up the literature, trying to identify bad actors—is not being done by the establishment, by grant-giving bodies, by universities, by journals, or by governments. Who is doing it? Private investigators, working in their free time at home, while worrying about who is going to sue them.
What kind of a system is this?
You propose some radical solutions. Walk me through these.
This is an ecosystem. You need to reform the education, the granting process, the publication process, and everything else all at once.
If you start on the education side, what I propose is that there should be separate training tracks for “scientific discovery” and “scientific integrity.” People need to be trained in experimental design, statistical analysis, data integrity, and independent refereeing of papers and grants—and this should be its own separate profession.
On the granting side, for the most part we now work in a system where each scientist is viewed as a solo artist, submitting individual grants to get funding. But in the end, big research institutions tend to keep getting the same amount of money each year. So why do all of this? Why not instead give each institution a lump sum of money for research and attach reproducibility, replication, and scientific integrity requirements to the funds?
That one is really controversial. It would put the burden on the institution to figure out the best way to spend the money. It would require very smart institutional leadership, with long-term vision and the ability to identify quality science. It wouldn't be easy. But over time you could see which institutions funded projects that were reproducible and translatable, and which ones didn’t.
A benefit of this approach is that people could spend less time writing grants and more time on research.
Then on the publication side, top journals could start to prioritize submissions that come with replication supplements. Once a scientist has made a new discovery, they would ask an outside laboratory to try to reproduce it, and then attach that as a supplemental submission to give a higher level of confidence in their work. I think this is actually a good idea and not that hard to implement.