Nature recently published an editorial highlighting one of the uglier aspects of science, one that plagues students, postdocs and P.I.s alike: reproducibility.
The editorial focused on the work of the Reproducibility Project: Cancer Biology, a group dedicated to replicating experiments from over 50 papers published in big-name journals like Science and Cell. While we always hope that replication studies go smoothly, that isn’t always the case.
The editorial spent a good chunk of its time on the attempt to reproduce a 2010 paper that reported breakthroughs in tumor penetration of cancer drugs:

Unfortunately, the reproducibility group got different results than the original paper. And when I say different, I mean that the replication study found no statistical significance where the original study found strong significance for the following end-points:
The permeability or penetrance of doxorubicin was not enhanced when it was co-administered with the iRGD peptide.
Tumor weights showed no statistically significant difference.
No difference was seen in TUNEL staining.
So what do we make of a result like this?
Well, it is important to state what we should NOT do: we shouldn’t entirely disregard the results of the 2010 paper. As stated previously, REPLICATING is not the same as REPRODUCING. Properly reproducing evidence-based science requires different methods and multiple observations under diverse conditions. The reproducibility project used mostly the same conditions as the original, and one would think those experiments should be reproducible… but they weren’t, in this case.
However, maybe we should not focus solely on the issue of reproducibility and instead ask whether the effects of the iRGD peptide hold up when it is tested with other chemotherapeutics and/or cancer models. If the effects seen with the peptide reflect a true biochemical effect, the enhanced permeability and penetration of co-administered chemotherapeutics should be seen across the board, regardless of the model.
To this end: there are currently 51 articles on PubMed that can be found with a simple search for “tumor-penetrating peptides”. Most of these 51 papers are not from the lab that published the 2010 paper.
NOW: should we disregard this line of investigation and view it as bunk due to one failure to replicate? Thankfully, no. The 51 papers on PubMed indicate that this is an active and growing body of research.
Unfortunately, in our click-bait society, many people will read only the headline and a few select sentences before drawing a conclusion. In fact, Nature spent most of the editorial on this one failure despite mentioning that 10 other labs have already validated the findings of the original 2010 paper. If 10 independent labs are able to reproduce the findings and only 1 lab has failed to do so, that’s science.
And truth be told, isn’t that the purpose of peer-reviewed publication? You put yourself and your scientific ideas out there for the world to comment on, replicate and reproduce, and the body of evidence, and with it our knowledge, moves forward.