Should the scientific peer review process be seen as a failed experiment?

in Popular STEM

On Substack, a researcher argues that the scientific peer review process delivers lackluster results at exorbitant costs and that it should be seen as a 60-year failed experiment. Can Steem be part of the solution?


Pixabay license from WikiImages at source.


In The rise and fall of peer review, Adam Mastroianni argues that the scientific peer review process amounts to a 60-year experiment, that the experiment has failed, and that the process should be abandoned. In support of this argument, he offers evidence that the peer review process is expensive and that it has delivered no clear benefits. Further, he argues that the process lets through a great many errors and fraud, and that - if we look at actions instead of words - even professional scientists don't take the peer review process all that seriously.

I'll look more at these ideas in the following sections.

The process is expensive and remains unproven after ~60 years

The first point that he makes is that the scientific peer review process is an exercise in money for nothing. To support this claim, he notes that reviewers collectively spend an astronomical 15,000 years of effort during each calendar year. He also notes that universities have to fork out millions of dollars for access to academic journals, and that a single paper can take months or years to get through the peer review process.

Given these massive costs, Mastroianni argues that we should be able to find compelling evidence that peer review is making scientific research better. Instead, however, he notes that we're seeing a decline in productivity, studies that don't replicate, and that most published research is false.

Looking at post-war scientific research as a 60-year experiment (run badly), Mastroianni suggests that,

All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.

The process is susceptible to errors and fraud

Highlighting the "abysmal" state of the scientific literature, he notes that studies of reviewers have shown that more than half of the errors slip past the reviewers, with common findings of success rates running from 25-30% of errors being found by reviewers. As a result, the literature is riddled with errors and outright fraud. As a long time follower of the Retraction Watch blog, this point definitely resonates with me. I would add that when errors are found, it often takes months, years, or even decades to get the papers retracted or corrected.

Mastroianni hammers the point home by highlighting a couple of egregious cases: one where an author simply pasted capital "T" characters into a "statistical" chart in order to represent error bars, and another suggesting that 20% of genetics papers have worthless data behind them because Microsoft Excel autocorrected gene names into dates. Further, he points out that most reviewers don't even have access to the data that is supposed to support the papers that they're reviewing.
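The Excel failure mode is easy to reproduce. Here is a minimal Python sketch (a simplified mimic, not Excel's actual parsing logic) of how naive date coercion turns real gene symbols like SEPT2 and MARCH1 into dates while leaving other symbols alone:

```python
# Simplified mimic of spreadsheet autocorrect: a month-like prefix
# followed by a plausible day number gets coerced into a date string.
MONTHS = {
    "JAN": "Jan", "FEB": "Feb", "MAR": "Mar", "MARCH": "Mar",
    "APR": "Apr", "MAY": "May", "JUN": "Jun", "JUL": "Jul",
    "AUG": "Aug", "SEP": "Sep", "SEPT": "Sep", "OCT": "Oct",
    "NOV": "Nov", "DEC": "Dec",
}

def excel_like_coerce(cell: str) -> str:
    """Return a date-like string if the cell resembles <month><day>,
    otherwise return the cell unchanged."""
    upper = cell.upper()
    # Check longer prefixes first so "SEPT2" matches SEPT, not SEP.
    for prefix in sorted(MONTHS, key=len, reverse=True):
        rest = upper.removeprefix(prefix)
        if rest != upper and rest.isdigit() and 1 <= int(rest) <= 31:
            return f"{int(rest)}-{MONTHS[prefix]}"
    return cell

genes = ["SEPT2", "MARCH1", "DEC1", "TP53", "BRCA1"]
print([excel_like_coerce(g) for g in genes])
# ['2-Sep', '1-Mar', '1-Dec', 'TP53', 'BRCA1']
```

The point is that the corruption is silent: the mangled cells are still well-formed strings, so nothing downstream flags them unless a reviewer actually inspects the data.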

Even professional researchers don't take it seriously (actions speak louder than words)

If professional researchers took peer review seriously, it would be apparent in their actions, not just in their language. Instead, we find that professional researchers don't really trust the process.

To back this point, Mastroianni raises three points:

  • When a paper is rejected, the authors don't go "back to the drawing board". Instead, they just shop around until they find a journal that will accept the work.
  • The reviews are forgotten after a paper is published, suggesting that no one ever cared what the reviewers had to say, anyway.
  • Scientists use a plethora of unreviewed sources: preprint servers, government documents, and polls from organizations like Gallup or Pew.

Case in point, this non-reviewed paper was extremely influential in bringing awareness of the concept of "technical debt" to IT researchers and practitioners.

In the end, if scientists really believed in the peer review process, they would rely on it exclusively. The fact that they freely utilize other sources suggests that peer review isn't such an important commodity to the experts who know it best.

Fix the peer review process or abandon it?

According to this view, there's no sense in trying to fix the process. Making the review process more stringent would mean that papers take even longer to get through it, which would drive productivity even lower. Further, mistakes would continue to slip through. Also, according to this article, reviewers' demands for particular jargon and style lead to boring papers that no one reads. A stricter review process would aggravate that problem, too.

In the end, Mastroianni suggests that the peer review process is worse than no vetting at all because it gives the reader a false sense of confidence. If readers know that a paper hasn't been peer reviewed, we're more likely to read it critically. If "the experts" have already reviewed it, however, then many people will, unquestioningly, accept it as truth.

Weak link or strong link?

An interesting framing from this article is the distinction between weak-link and strong-link problems. If science is a "weak-link" problem, then success depends on the quality of its weakest papers. In that context, peer review might make sense, if it could truly prevent bad papers from being published.

In contrast, Mastroianni argues that science is a strong-link problem. In this case, the quality of the weakest papers doesn't matter - they will simply be ignored. Progress depends, rather, on the quality of the best papers. As a result, peer review just adds valueless friction that slows the best papers' rise to public awareness, or blocks it altogether.


Ever since the early 2000s, I have been of the belief that "crowd review" on the Internet is far superior to anything that peer review can accomplish, so this article definitely caught my interest. As an alternative, Mastroianni offers the following:

What should we do now? Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. I wrote it in normal language so anyone could understand it. I held nothing back—I even admitted that I forgot why I ran one of the studies. I put jokes in it because nobody could tell me not to. I uploaded all the materials, data, and code where everybody could see them. I figured I’d look like a total dummy and nobody would pay any attention, but at least I was having fun and doing what I thought was right.

Then, before I even told anyone about the paper, thousands of people found it, commented on it, and retweeted it.

Since arriving on the Steem blockchain, I have also thought that this would make a fantastic platform for such a crowd review. A paper could be published on a preprint server and discussed by readers in a Steem community, then (with the right backing) the people who participate in critique could earn blockchain rewards as they helped the author to improve the work.
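As a purely hypothetical sketch of the last step - no such Steem API or mechanism exists today, and the function and reviewer names below are invented for illustration - a reward pool for a reviewed paper might be split among reviewers in proportion to the community upvotes their critiques earned:

```python
def split_review_rewards(pool: float, upvotes: dict[str, int]) -> dict[str, float]:
    """Split a reward pool among reviewers in proportion to the
    upvotes their critiques received (hypothetical scheme)."""
    total = sum(upvotes.values())
    if total == 0:
        # No one was upvoted: nothing to distribute.
        return {reviewer: 0.0 for reviewer in upvotes}
    return {reviewer: pool * votes / total for reviewer, votes in upvotes.items()}

# Hypothetical example: a 100-token pool, three reviewers.
print(split_review_rewards(100.0, {"alice": 30, "bob": 10, "carol": 10}))
# {'alice': 60.0, 'bob': 20.0, 'carol': 20.0}
```

Any real implementation would of course have to contend with vote-buying and sybil accounts, the same curation problems Steem already wrestles with, but the basic accounting is simple.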

It amazes me that people like Elisabeth Bik dedicate so much time to finding problems in academic literature with no financial backing at all. How much more could people do if they had support from Steem's blockchain rewards mechanism?

Thank you for your time and attention.

As a general rule, I up-vote comments that demonstrate "proof of reading".

Steve Palmer is an IT professional with three decades of professional experience in data communications and information systems. He holds a bachelor's degree in mathematics, a master's degree in computer science, and a master's degree in information systems and technology management. He has been awarded 3 US patents.


Pixabay license, source


Visit the /promoted page and #burnsteem25 to support the inflation-fighters who are helping to enable decentralized regulation of Steem token supply growth.


If you only look at one point of view, you tend to perceive it as fact. When you start reading other people's thoughts about it (reviews), a different point of view may emerge from those who have read the article.

First, I want to wish you a happy new year. In relation to your post, I think that the fact that many people retweet it is an achievement for you; it means that many people were interested, and what better reward for a job that takes time and effort? Now to the question: from what I have analyzed, the rewards on Steem are generally obtained from the people who follow us. Let me explain: it is an exchange, and the followers and those we follow are generally the people involved. The mechanisms could help, but in another system. That is my humble opinion.

Congratulations, your post has been upvoted by @scilwa, which is a curating account for @R2cornell's Discord Community. We can also be found on our hive community & peakd as well as on my Discord Server

Manually curated by @abiga554

