For Peer Review Week, Rachel Burley, APS Chief Publications Officer, reflects on the future of one of science’s most vital processes.
By Taryn MacKinney | September 14, 2023
The scholarly publishing industry is shifting at breakneck speed. Emerging technologies, like artificial intelligence, are upending academia and industry. Scientists are producing more papers than ever before.
But at its core, scholarly peer review — when researchers solicit and receive feedback on their papers from other experts — isn’t all that different, says Rachel Burley, APS’s Chief Publications Officer.
“Peer review has been around for many years,” Burley says. “What it's all about, and why we do it, hasn't really changed. It’s always been about ensuring the quality, validity, and reliability of research articles before they're published.”
From Sept. 25 to 29, APS and myriad institutions and researchers are participating in Peer Review Week, a global event celebrating peer review’s value to the scientific enterprise — and debating its future.
APS News spoke with Rachel Burley about the changing landscape of scientific publishing and its impacts on peer review. This interview has been edited for brevity and clarity.
Today, scientists face a strong pressure to “publish or perish,” and the amount of published research has grown enormously over the last few decades. Why? How are these changes affecting peer reviewers?
The peer review crisis is worse in some disciplines than in others, but the mushrooming of research output you described is behind it. In my mind, this started when the mega-journals arrived in the early 2000s. Those publications moved away from selectivity and novelty. They weren't necessarily asking reviewers to look for something new and different; they were saying, “If it's technically sound, we'll publish it.”
I don't think that's necessarily a bad thing, but it opened the door to a body of research that previously went unpublished. And because, in the open access model, every paper published is potentially revenue-generating, new publishers entered the space. So you have this combination of pressure to publish — the “publish or perish” you mentioned — and more papers being sent for review, because there are more journals with lower selectivity.
This whole ecosystem has put greater pressure on peer reviewers, who have more manuscripts to review but no more time to review them.
How are publishers trying to solve this?
Publishers initially focused on streamlining the process — reducing the time to peer review by, for example, finding the right reviewer for the paper in the first place. They have also tried to increase efficiency through automation, taking over the parts of the process that a peer reviewer can't reasonably be expected to do, like submission checks, so the manuscript reaches the reviewer in the best shape possible and the reviewer is asked to look only at the science.
Many publishers have also invested in reviewer training. In general, there isn’t formal reviewer training — you might be lucky enough to find someone to mentor you through the process, but a lot of reviewers don't know what kind of feedback is required from them.
But the research volumes are such that, even combined, all these efforts don't necessarily fix the problem.
Many argue that a diverse pool of peer reviewers can improve research and reduce bias, including the bias that shapes who gets published. What are journal publishers doing to improve the diversity of that pool?
There's increasing recognition that publishers have an important role to play here. Some publishers are creating reviewer databases that capture not just the researcher's expertise and background but also demographic information to be more inclusive — and they're partnering with organizations that represent underrepresented groups.
Publishers are also experimenting with either double-anonymous peer review, which has been said to eliminate some bias, or fully open and transparent peer review, where the reviewer reports are published with the paper. The argument is that it's harder to be biased if everything's in the open record. But neither has been proven perfect. And in physics, it's particularly difficult because we have arXiv. If people really want to know who's written a paper, they can almost certainly find out.
In your mind, what are the limits of open peer review?
If you're going to be reviewing a paper for somebody well-known in your field and more senior than you, then you almost certainly don't want to critique that paper in a negative way, because that could harm your career prospects. At least, that's how some people would view it.
There’s also the time commitment. If your review is going to be published openly, you’ll take more time over it than you would over something that's confidential between you and the editor. A lot of people feel that's a big ask, so they would rather not do an open review.
Another challenge for peer review is the rise of interdisciplinary research. How can journals ensure that studies that cross traditional discipline boundaries are evaluated rigorously?
Journals can work to assemble multidisciplinary groups of reviewers. Not everybody's going to have expertise across all the disciplinary areas, but as a group, they have a better chance of covering multidisciplinary research.
And transparency might help. If you can be transparent about how interdisciplinary research is reviewed, then you can build on the credibility of the process. It might mean you provide information, as a publisher or a journal, about the expertise of the pool of reviewers you used and how you incorporated their feedback.
There’s a role for journal editors, too, who can guide the peer review process to make sure they're getting the right feedback on interdisciplinary studies. And editors can help authors by providing clearer explanations of the concepts they're covering — the terminology of the fields, or information that can help reviewers understand concepts in the paper. Peer review is especially important in interdisciplinary research because the readers won’t be expert in everything.
Peer review might seem especially slow in physics because of arXiv.org, where preprints are quickly uploaded. How are publishers thinking about the speed of peer review?
In physics, it’s commonplace to post your original manuscript to arXiv for feedback before or during the submission and publication process.
The focus on speed of publication has pushed publishers to get creative — creating automations, outsourcing some elements of manuscript assessment, and monitoring and reducing turnaround times at each phase.
But the downside of rapid peer review and publication is that some papers get published that shouldn't be, and research fraud is more likely to fall through the cracks.
Do you think publishing can be too fast?
There's a balance to be struck between speed and rigor. How do you make publication faster while ensuring that readers can trust it and that peer review has been rigorous? Of course, that's the role journals have traditionally played. You have expert teams working for trusted journal brands saying, “Here's what's worth reading, and we're validating it.”
Is it perfect? No, but a perfect alternative has yet to be found.
Open access publishing is growing, bolstered in part by the White House's announcement last year that federally funded research must be available to the public by the end of 2025. How are these shifts impacting peer review?
Most publishers, including APS, are in the process of transitioning to open access. We have open access and hybrid options in the Physical Review journals, and we participate in the Sponsoring Consortium for Open Access Publishing in Particle Physics.
While open access enhances accessibility, it also requires sustainable funding models. In the US, the White House and the agencies have not said they're advocating for any one business model, but it's clear that the “green route” to open access — depositing an author-accepted manuscript immediately on publication, without a 12-month embargo — relies on the subscription model. As more content becomes open, that model becomes unsustainable.
It leaves academic publishers in the situation the industry is in now, where we're trying to work through what a sustainable funding model looks like to ensure that we can continue to conduct rigorous peer review in an open access world.
How is artificial intelligence shaping peer review? What are its benefits and risks?
There are beneficial uses for AI, if done carefully, like automating various aspects of the process — matching manuscripts with the right reviewers, identifying potential ethical issues, assessing the language quality and writing. All these things can be done reliably with AI now, and they can increase efficiency and take those tasks away from editors and reviewers, to allow them to focus on the science.
There's also the possibility that AI becomes so good that it actually can do peer review. Of course, nobody believes that right now, but we also didn't believe AI would be at the stage it is today. ChatGPT is passing college exams.
The challenge, though, is that AI algorithms can inherit biases from the data they're trained on. It could lead to even more bias, like biased reviewer recommendations. We have to ensure we're making efforts to eliminate that and reduce unintended bias.
There are also ethical considerations around privacy and data security and transparency. Authors and reviewers need to be aware of how their data is being used and who has access to it.
And there are some things AI tools are still not capable of doing — evaluation that you need human judgment for. AI algorithms can't yet determine what's novel or groundbreaking. They’ve been trained on existing research, and it's new discoveries we're looking for.
Taryn MacKinney is the Editor of APS News.