9 August 2022
Peer review and its many practical challenges
Is our trust in peer review misplaced? A look at its foibles, why journals find it difficult to ensure good peer review, and how it is undermined by pressures to publish.
Siti Nurleily Marliana1,* and Joaquim Baeta2
Peer review is hailed as the bastion of science, the great filter through which all research must go, where every author is judged and scrutinised by their peers before being deemed suitable for publication. Each journal has its own approach to it, but whether it be through a double-blind, open, or post-publication review, this is the core concept: science is upheld by the scrutiny of one’s peers.
Now, the history of peer review is a long, meandering one, too extensive for us to discuss here. Its meaning and extent have evolved significantly over time, from millennium-old precursors to the modern idea that took shape only after World War II.
Discussions on its various foibles are also not new. By now, researchers have assessed every aspect of peer review, and even when it’s found wanting, proponents return to the common trope of its relationship with democracy: "full of problems but the least worst system" we have. Even so, the existence of Retraction Watch is testament to science’s precarious relationship with ethics, and the challenges of maintaining ethical standards in scientific publication. This is a constantly ongoing struggle, and even the idealist cannot say that peer review alone guarantees quality.
Ironically, at the heart of peer review is trust. We trust that the people undertaking it are competent and impartial, but therein lies its principal flaw: it is a fundamentally human endeavour.
In this article, we will examine peer review's practical issues and the real-world, lived experience of dealing with its foibles every day.
Bad peer reviewers exist
This first one is simply that bad peer reviewers exist. Any experienced researcher should be familiar with the infamous Reviewer No. 2. This dastardly coward goes about reviewing papers on myriad subjects, none of which he has any clue about, misreading what the authors write, reaching his conclusions independently of the authors’ results, slamming the authors for failing to provide data on the thing they did not research… he is truly the most wicked of all academic devils.
Well, Reviewer No. 2 does indeed exist, and we cannot always avoid them. Some people are just plain bad at the task of reviewing. Not everyone can do it, even if they have expertise in the relevant field, and unfortunately, we don’t know that they can’t do it until we’re on the receiving end of their incompetence.
If we’re lucky, we get to submit our paper to a journal that is excellent at finding reviewers, but how many journals is that, really? (We’ll find out.) Reviewer No. 2 is a wildcard—we don’t know when they’re going to show up and ruin our day.
Speaking of which: peer review is, again, a fundamentally human endeavour, and everyone has a bad day now and then. Maybe they woke up on the wrong side of the bed, they’re suffering from some annoyance at the office, or maybe they’re even going through a bad breakup or just had a fight with their partner. They could just be feeling unwell that day, and an upset stomach results in less attention to detail than usual.
In spite of the rigours we’d hope for in the peer review process, we just cannot discount the fact that humans aren’t always going to have the best of days when they’re reviewing an article, and this simple fact can undermine the entire process—and ultimately, your submission.
Journals are not built equally
For journals’ part, the biggest thing that people need to know is that not all journals are built equally. When we apply the ideals of peer review to publications, we don’t have "lowly," small journals in mind; we’re thinking of Nature and Science. For better or worse, the journals with the best reputations have the opportunity to source their reviewers from the largest and best pool. The less eminent the journal, the smaller the reviewer pool gets, and as a consequence, the less likely they are to get the most thorough and rigorous review.
Obviously, the vast, vast majority of journals are not Nature, and it’s likely that you’ve never heard of most journals in your own field, which shows the inherent disadvantage they face by default. Most journals wind up having fewer resources than their larger competitors, and some have barely any at all.
You would be surprised to know how many journals are just barely scraping by from issue to issue, reviewer to reviewer.
At the same time, there is no logical reason to assume that a smaller journal cannot have the same editorial quality as a larger journal. They absolutely can, but in our experience, journals are not as robust, editorially, as we expect them to be, even the bigger ones. Now, this robustness can apply to the entirety of the editorial process—from processing submissions to making the correct decisions on them in a timely manner—but for our purposes, it includes the simple task of picking the right reviewer or voiding their recommendation if it turns out it was made for the wrong reasons.
The sad reality for authors is that they’re reliant on journal editors picking the appropriate reviewers for their work, even when they’re asked to suggest reviewers themselves, and editors don’t always choose well. Maybe it’s just the price of doing business in science, but that price ends up being rejection, wasted effort, and potentially a blow to your mental health.
Some journals are just straight-up unethical
"Predatory journal" is a loaded term nowadays, and the concept itself is mired in controversy. Not all so-called "predatory" journals are truly predatory; they can just be ethically questionable or lack the resources (yet) to be 100% ethical. But, because predatory publication is profit-driven, we can guess that they have little interest in good, thorough reviews that elongate the time between submission and publication. It’s possible for a publisher to be both profit-driven and ethically minded, but when you see a paper that’s been accepted for publication, say, a week after submission, it ought to bring up questions on its peer review process.
Outside of journals that focus on quantity over quality, there exists a subset of journals that quietly engages in unethical practices. These practices can vary: maybe a reviewer is told ahead of time to be lenient on a particular paper, or maybe to reject it; a member of the editorial team pretends to be an external reviewer because they cannot find someone willing to conduct the review; or a journal claiming to use double-blind peer review doesn’t remove the identities of authors from review copies. These can be low-level mistakes made by inexperienced editors, but the effect is the same: the journal’s integrity may be called into question.
A further subset isn’t inexperienced or incompetent—it’s purposefully unethical. We’re going to avoid gossip or naming names here, but both the Directory of Open Access Journals and Scopus catalogue the journals they remove from their respective indexes. These journals are removed for various reasons. In the case of DOAJ, this can include basic things like broken URLs and inactivity, but also "suspected editorial misconduct" by the publisher. Scopus may simply cite "publication concerns." These are vague references, and we need to be clear that no indexing service is the ultimate arbiter of quality, but these lists are nevertheless useful in helping us to identify potential issues in journals, especially if we’re looking to submit to them.
There are more authors than peer reviewers
Thank publish or perish and dumb university metrics for this. "Science analytics" companies peddle tools that promote the quantification of researchers’ output, and institutions pay a lot of money to gobble up these tools so they can put a number on your livelihood.
Lecturers need to publish consistently, irrespective of whether they have anything actually worth publishing, and—this is the more problematic part—there are universities that require students to publish a paper as part of their degree.
This trend, viewed broadly, isn’t because universities have an interest in developing students’ writing skills, but because they’re (secretly) competing with each other to be the institution with the most things—most publications, most authors, most anything that will garner some sweet, easy, quantifiable praise.
It’s great for "me," the hypothetical individual student, that I have the opportunity to publish my first paper, but there are two effects from this trend:
In our experience, most students just don’t produce very good work, and for good reason: they’re still learning and making mistakes as they go. Unless you’re gifted an awesome supervisor who can put you on the path to instant success, you’re going to be trying to publish something that has likely already been done before—and better—by a more experienced researcher.
Journals are, in the end, inundated with work from students who are submitting as part of a graduation requirement, not a genuine scientific need to be published. Eventually these papers, which are, again, replete with the errors developing scientists should be expected to make, will get published. The only question is where. The worst papers trickle down to the worst journals (yes, those exist), likely with the most accepting, least rigorous reviewers, or to conference proceedings, which in many (but not all) cases have a lower standard for acceptance.
Journals that do try to have a good editorial process are saddled with more papers than they can handle, because there are always going to be more students than lecturers, and more authors than reviewers. The knock-on effect of publish or perish and an academia driven by metrics is that journal editors and reviewers are spread thin. Editors are given less time to focus on individual manuscripts, leaving potentially groundbreaking research to fall by the wayside, while reviewers are burdened with portioning their limited free time to an ever greater number of papers.
(Most) reviewers are unpaid
This depends on the kind of journal or country, but many (or most) reviewers are providing free labour—they’re volunteering their time to review a manuscript, and that means we cannot expect them to give more than just that: volunteered time. No matter how many meticulous hours you dedicated to your research, the slog of writing about it, or the anxiety of submitting the whole damn thing, you’re beholden to a reviewer who doesn’t have the luxury of spending their entire work day on reviewing it, poring over every data point, finding every error, and writing down every way in which you can improve your paper.
Reviewers who will dedicate a good portion of their free time to the task certainly exist, but so do those who will quickly scan the content and results and then submit a bunch of bullet points of things to fix. And sure, those bullet points might be helpful, but the published output is commensurate with the input, meaning that a sloppy review doesn’t really help a journal that’s trying to decide which of its hundreds of submissions is worth publishing.
Finally, we have conflicting reviews. These are normal and happen occasionally. Their existence doesn’t invalidate the entire concept of peer review, but they do highlight its fraughtness. Ideally, in such a situation, a subsequent reviewer is enlisted, but whichever side that reviewer comes down on, you cannot eliminate the minority view. Editors need to have good judgement in how they handle conflicting reviews, knowing when a reviewer should have been more thorough, when a reviewer’s viewpoint is justified (or not), and what additional steps to take in these events.
It comes down to trust and faith in the journal’s process. We can trust that a journal knows how to handle conflicting reviews effectively, but cannot know for sure. And not every editor has the time or experience to do so.
Alternative ways forward
This returns us to the common trope of peer review: "full of problems but the least worst system" available. Is that really the case? There are alternative approaches or solutions to peer review, and each one suggests that, no, traditional peer review as we know it is not the best system available. Like all systems—and science itself—it must continue to evolve and adapt to the problems of the time.
Open peer review
The first I’d like to highlight is open peer review. There’s no fixed definition for "open peer review," but we can consider it broadly to be a review approach that is more transparent and accountable than traditional peer review. This approach has already been used by a number of publishers, such as PLOS, BMJ, and Nature Communications.
Open peer review can have these components:
It’s not blind, so authors and reviewers know each other’s identities, and the reader knows the identity of the reviewer.
The content of the review is published alongside the paper.
The paper is published on a network-based platform, such as a forum, to enable direct participation of the wider scientific community. This means open peer review can include post-publication peer review, wherein the paper is reviewed after publication.
Decoupled peer review, wherein the review process is separated from the publisher. Here, an author has their paper "pre-reviewed" through an independent service and then submits an already-reviewed paper to a journal, or the journal approaches them (through this service) with an offer to publish their paper.
The paper may be uploaded in a preprint server for open commentary from other scientists ahead of submission to a journal.
As for its pros and cons: open peer review may be faster than traditional peer review, and greater transparency can encourage more constructive reviews. It should also be easier to detect conflicts of interest when everyone’s identity is known. Reviewers, meanwhile, also get credit for reviewing papers. On the other hand, the purpose of anonymity in peer review is to prevent bias in reviewers or retaliation against them from disgruntled authors. Reviewers may also feel uncomfortable giving critical feedback if they know their review will be read by their colleagues.
Open peer review is a constantly evolving solution and there’s a lot more to learn about it if you are interested.
Abolishing pre-publication peer review
A second alternative is to, well, abolish pre-publication peer review altogether. This is what Remco Heesen and Liam Kofi Bright, for example, propose as a remedy for the time and effort taken up by peer review. In their proposal, authors publish their work on a preprint server and then publish updated versions that reply to peers’ questions and comments. This is a form of post-publication peer review, and journals’ function here would be to publish curated collections of articles found on these preprint servers, rather than receiving submissions directly.
One interesting aspect of their proposal is infrastructural. For example, if peer review is largely incapable of detecting fraud (whether because reviewers are reluctant to accuse an author of malpractice or simply aren’t equipped to identify it), then abolishing it actually encourages external steps to prevent fraudulent behaviour, such as incentivising "getting it right" to the same extent as "getting it published"—something that might not receive the necessary attention under traditional peer review. A link to their paper is in the references, and we recommend reading it so you can draw your own conclusions about this idea.
Ultimately, the reality for editors and authors alike is that they must continue to work within the system provided to them, even as they try to change it—that is, unless they abandon academia altogether and decide to work outside its impositions and comforts. The burden on young scientists trying to make a start in their careers is greater than ever, and they have little choice but to follow the rules, their freedom limited to which journal will accept their work. Here, the balance of power instead lies with publishers, and the lucky few editors who have a true say over the running of their journals. They have the power to look at new avenues, methods, and alternatives, to experiment, and to implement them in their own journals. Whatever our views on peer review, we must agree that the zeitgeist in science is ever-changing and amorphous. What worked before is not guaranteed to work today or tomorrow. Our loyalty ought not to be to peer review itself, but to whichever form of it best embodies and ensures the efficient and honest dissemination of research.
Brembs B. 2018. Prestigious science journals struggle to reach even average reliability. Front Hum Neurosci. 12. doi:10.3389/fnhum.2018.00037.
Breuning M, Backstrom J, Brannon J, Gross BI, Widmeier M. 2015. Reviewer fatigue? Why scholars decline to review their peers' work. Polit Sci Polit. 48(4):595–600. doi:10.1017/S1049096515000827.
Chubin DE. 1985. Misconduct in research: an issue of science policy and practice. Minerva. 23(2):175–202. doi:10.1007/bf01099941.
D’Andrea R, O’Dwyer JP. 2017. Can editors save peer review from peer reviewers? PLoS ONE. 12(10):e0186111. doi:10.1371/journal.pone.0186111.
Donovan B. 1998. The truth about peer review. Learned Publ. 11:179–184. doi:10.1087/09531519850146346.
Dupps WJ, Randleman JB. 2012. The perils of the least publishable unit. J Refractive Surg. 28(9):601–602. doi:10.3928/1081597X-20120815-05.
Ebadi S, Ashtarian S, Zamani G. 2020. Exploring arguments presented in predatory journals using Toulmin’s model of argumentation. J Acad Ethics. 18:435–449. doi:10.1007/s10805-019-09346-0.
Edie AH, Conklin JL. 2019. Avoiding predatory journals: quick peer review processes too good to be true. Nursing Forum. 54:336–339. doi:10.1111/nuf.12333.
Elkin J. 2004. The impact of the Research Assessment Exercise on serial publication. Serials. 17(3):239–242. doi:10.1629/17239.
García JA, Rodriguez-Sánchez R, Fdez-Valdivia J. 2015. The author–editor game. Scientometrics. 104:361–380. doi:10.1007/s11192-015-1566-x.
Glonti K, Boutron I, Moher D, Hren D. 2019. Journal editors’ perspectives on the roles and tasks of peer reviewers in biomedical journals: a qualitative study. BMJ Open. 9(11):e033421. doi:10.1136/bmjopen-2019-033421.
Groves T, Loder E. 2014. Prepublication histories and open peer review at The BMJ. BMJ. 349:g5394. doi:10.1136/bmj.g5394.
Gu X, Blackmore K. 2016. Recent trends in academic journal growth. Scientometrics. 108:693–716. doi:10.1007/s11192-016-1985-3.
Heesen R, Bright LK. 2021. Is peer review a good idea? Br J Philos Sci. 72(3):635–663. doi:10.1093/bjps/axz029.
Higginson AD, Munafò MR. 2016. Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biol 14(11):e2000995. doi:10.1371/journal.pbio.2000995.
Horton R. 2015. Offline: what is medicine’s 5 sigma? Lancet. 385:1380. doi:10.1016/S0140-6736(15)60696-1.
Kampourakis K. 2016. Publish or Perish? Sci Educ. 25:249–250. doi:10.1007/s11191-016-9828-4.
Kapp C, Albertyn R. 2008. Accepted or rejected: editors’ perspectives on common errors of authors. Acta Acad. 40(4):270–288.
Lynch K. 2014. Control by numbers: new managerialism and ranking in higher education. Crit Stud Educ. 56(2):190–207. doi:10.1080/17508487.2014.949811.
Marliana L, Baeta J. 2019. Outwitting the fox: an introduction to predatory publishing, its controversies, and how to navigate the wilds of scholarly communication. All is Found... in Time. [accessed 2022 Jun 8]. https://allisfoundintime.com/article/outwitting-the-fox.html.
Marušić A, Marušić M. 1999. Small scientific journals from small countries: breaking from a vicious circle of inadequacy. Croat Med J. 40(4):508–514. http://www.cmj.hr/1999/40/4/10554353.pdf.
McLeod A, Savage A, Simkin MG. 2018. The ethics of predatory journals. J Bus Ethics. 153:121–131. doi:10.1007/s10551-016-3419-9.
Nieminen P, Uribe SE. 2021. The quality of statistical reporting and data presentation in predatory dental journals was lower than in non-predatory journals. Entropy. 23(4):468. doi:10.3390/e23040468.
Rittman M. 2018. Opening up peer review [blog]. MDPI Blog. [accessed 2022 Mar 28]. https://blog.mdpi.com/2018/10/12/opening-up-peer-review.
Ross-Hellauer T, Görögh E. 2019. Guidelines for open peer review implementation. Res Integr Peer Rev. 4:4. doi:10.1186/s41073-019-0063-9.
Thurner S, Hanel R. 2011. Peer-review in a world with rational scientists: toward selection of the average. Eur Phys J B. 84:707–711. doi:10.1140/epjb/e2011-20545-7.
Vines T, Rieseberg L, Smith H. 2010. No crisis in supply of peer reviewers. Nature. 468:1041. doi:10.1038/4681041a.
Siti Nurleily Marliana and Joaquim Baeta. This article and the video therein are published under a Creative Commons Attribution-ShareAlike 4.0 International License.