Reflect, Review and Summarize

It's that special time of year when we all dedicate our minds, words, and actions to appreciative and charitable acts. Like most people, I have spent this weekend reflecting on all the friendships and collaborations I've formed over the years and how they have shaped me as a scientist and as a person. In the last week or so, I've been particularly thankful for peer review. WHAT?!?

Yep, peer review. Over the years I've been subjected to the full spectrum of harsh criticism and complimentary praise, and each review held a lesson or two. Like the paper that received expedited review and acceptance only because it was considered 'exciting, burgeoning subject matter' that the editor thought would draw high readership. The actual work? Eh. Or the one that generated little to no excitement but has been highly cited. I believe we have all read something and asked, how did this get published, or worse, funded? And then we see the name of the well-known author and say, "oh." Suffice it to say, I've generally regarded peer review as a gamble, but one I have to play, so let the learning commence.

Lately, there has been a buzz around academic peer review. Much more than the standard "it's fraught with problems, blah, blah"; there has been genuinely useful dialogue in blogs and the syndicated press. Most obviously, there was the complete integrity failure of Hyung-In Moon. Here's an account of how he created bogus reviewers to expedite positive reviews and publication acceptance: Nature News. But I see another cause for concern in the peer review process.

It seems that peer review is a one-time roadblock. If you pass the review, the work is considered true and real forever and ever. Any challenge to the newly minted dogma faces an even harder review process. This stands in stark contrast to the essential, all-inclusive purpose of peer review: does the conclusion drawn follow from the evidence presented? Isn't this why manuscripts follow the standard format of Introduction, Methods, Results, and Conclusion? The format exists precisely so that anyone with sufficient background in the subject matter can read the paper and answer that essential peer-review question.

Secondly, are we providing the best summary of the evidence presented? Not the typical 'fishing for significance', hunting for the one statistical test that shows the effect is real. But really, are we being true to the magnitude and stability of the summarized effect? The question is not whether the effect is real, but whether it can be reproduced regardless of the technology, methodology, or statistical inference used. This question brings me back to when I was learning proteomics. No one would trust the mass spectrometer until it was validated by a Western blot. As an analytical chemist, I found this simply arse-backwards. Like, really? You think an antibody has more specificity than a mass spectrometer?
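As an aside, the 'fishing for significance' trap is easy to demonstrate. Here is a minimal, illustrative Python sketch (my own toy example, not drawn from any of the work discussed here): run enough tests on pure noise and something will come up "significant".

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two groups drawn from the SAME distribution: there is no true effect.
n_tests = 20        # e.g., 20 different endpoints or subgroup analyses
n_per_group = 30

false_positives = 0
for _ in range(n_tests):
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

# With 20 independent tests at alpha = 0.05, there is roughly a
# 1 - 0.95**20 ~ 64% chance of at least one "significant" result,
# even though the true effect is exactly zero.
print(f"'Significant' results on pure noise: {false_positives} of {n_tests}")
```

Report enough endpoints, or try enough tests, and the noise will eventually oblige. Which is exactly why the magnitude and stability of an effect matter more than any single p-value.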

Fortunately, a new paradigm of peer review is now growing. While PLoS paved the way, PeerJ is now striving to improve the art and skill of scholarly communication. Coupled with the vast reach of the Internet, open-access licensing, and even public data repositories, these efforts give me hope that we can finally move toward being true to a summarized effect.
