Editorial

Impoverishing peer review

Long, long ago, when we were all much younger, academic authors committed their wisdom to manuscripts and their manuscripts to academic journals. Editors would then consult with experts in the subject of the submission, who would share their opinion with editor and author alike. If editors and referees decided the submission – probably after some revision – had something to add to the world’s knowledge, it would be passed on to the journal’s grateful publisher for the very separate process of publication. Authors, editors and referees worked in happy harmony, only too glad to play their part in the collegial obligation of academics and in the social responsibility they shared with publishers to enhance the common intellectual weal.

So much for the fairy tale; peer review was never quite like this and bears little resemblance to it now.

These days, journals have real trouble attracting expert referees (Trumbore, 2024):

My most dismal record … contacting 13 potentials to obtain a single review (editor quoted in Karabag and Berggren, 2016, p.17)

Their response has been to reduce the need for expertise by relying on desk rejection by junior members of editorial staff – by scrutinising submissions for standard citations, for example:

These are scholarly pieces that self-described IB [international business] scholars typically know well and that are foundational to the field. Reviewers of IB research manuscripts should actually know such classic pieces well, and mostly do. (Verbeke et al., 2017, p.2)

Out go submissions that will not contribute sufficiently to the journal’s impact factor (JIF): submissions that do not cite other papers in the journal (Pearson and Sharma, 2015), submissions with few co-authors to self-cite, submissions with short reference lists (see Corbyn, 2010; Harzing, 2013). Most submissions now go nowhere near an external referee. What tends to be published, then, is what there is least reason to reject:

I would like to thank you for your ongoing support of EMJ [the European Management Journal], and update you on how well the journal is going. … The rejection rate has raised [sic] from 80% to 86%. (email from editor of EMJ to author, November 2010)

Decades ago, an 80% rejection rate was thought too high for peer review to make much sense simply because of random error (Ceci and Peters, 1982). At 90%, the system was considered just too noisy to work (Miner, 2003). Top journals now boast a rejection rate of something like 95%.

The reluctance of academics to referee and the need of publishers to underwrite the quality of their product with peer review have forced publishers to intervene in peer review, a serious transgression of the commercial into academic territory. Publishers are particularly anxious to allay the suspicion that peer review is often no more than random with assurances that it remains the cornerstone of academic publishing. Academics, they claim, enjoy refereeing so much that they do it for free:

Most researchers give up time to review papers for no charge. … Why do they do it?… [They] quite simply do it because they enjoy being able to improve papers. (Robert Campbell, chairman of the Publishing Research Consortium, as quoted in Sense About Science, 2009)

Publishers have repurposed their manuscript submission systems (MSMs) to recruit referees, the tacit threat being that the submitting author can hardly expect his own submission to succeed without volunteering to referee. The largest publishers may pool their volunteers to find referees for their other journals, a misuse of data collection that is generally illegal. It is hardly surprising that an MSM designed for one purpose often fails in other tasks and many putative referees find themselves allocated submissions on subjects about which they know nothing. Automated systems (including artificial intelligence, judging from two papers in this issue) are exceptionally good at misunderstanding. Your editor rues the day he published ‘Milking the myth: innovation funding in theory and practice’, a paper on the exploitation of Silicon Valley’s reputation in other countries. It initiated a stream of requests to referee submissions on the dairy industry.

Academics now find themselves on publishers’ mailing lists, receiving unsolicited requests to referee submissions which bear no relation to their interests. Is it possible that some of these bulk-mailed invitations are accepted by academics unwilling to let their ignorance stand in the way of adding another line to their cv? To be sure, predatory publishers may be more anxious that submissions be refereed than that they be refereed by someone competent. Now that established publishers are adopting the ways of the predatory, especially in demanding an article processing charge (APC) direct from authors, there is diminishing interest in the niceties of peer review. It is not unknown for editors never to see referees’ reports (Rhodes and Linnenlueke, 2025). Publishers need paying customers and attract them by advertising how very few days their peer review takes – sometimes with the tacit understanding that peer review will not be an obstacle to publication. They are quick to replace editors and editorial boards that cannot achieve rapid turnaround. Even formerly respectable publishers are quite open in their resort to ‘peer review lite’, referees instructed to consider whether a submission is correct and to ignore whether it is of any importance (Spezi et al., 2017).

History has been reinvented to provide evidence that peer review has been fundamental since the earliest academic journals of the seventeenth century. Well, a seventeenth-century editor may have occasionally consulted a friend, but peer review was never institutionalised, and anything even remotely like peer review disappeared in succeeding centuries. An endless procession of experiments (post-publication review, perpetual peer review on social media, non-stop training of academics in how to referee, etc.) is being carried out with the express aim of improving the peer review system, and with the underlying reassurance that the system is actually worth improving. ‘Not yet peer reviewed’ has become a universal caution attached to papers in online repositories or supporting media stories, as if peer review could be relied upon to guarantee the quality of papers.

Despite the assurance of publishers that peer review is ever the stalwart of academic publishing, academic publication has changed and peer review with it. The value of an academic paper no longer lies in its content, but in the paper’s very existence and the APC paid for its publication. Citations to a paper attest to its quality as much as peer reviewing does, and citations can be gamed with no call to alter paper content.

I’ve been on a number of search committees. I don’t remember anybody looking at anybody’s papers. Number of IF [impact factor] of pubs are what counts (Terry McGlynn as quoted in Smaldino and McElreath, 2016)

Where does this leave the referee? At least some referees feel that they should be practical and judge a submission’s quality in terms of its potential to be cited:

We … [used] … to make our acceptance criterion those articles that we felt would make a contribution to the international literature. Now our basis for rejection is often ‘I don’t think this paper is going to be cited’. (editor quoted in Chew et al., 2007, p.146)

By 2006, even Eugene Garfield, who devised the journal impact factor, was admitting that citation was a poor measure of anything. Academics were also coming to the conclusion that there was no relationship between citation and paper quality (e.g., Nieminen et al., 2006).

I’ve found little correlation between articles that changed the world and number of citations to them … (editor quoted in Chew et al., 2007, p.147)

Too late; a vast higher education industry had come to depend on publication metrics to measure and reward the performance of academics. And a massive academic publishing industry had arisen, based on the collection of publication data and the calculation of metrics rather than just the publication of journals. The Great Citation Fraud was a consequence of the ease with which citation could be gamed and profits made. Peer review and citations could be mixed in a single mindless mess:

The journal below [Human Relations], with which you have been involved as a Reviewer … increased [sic] in Impact Factor. This increase is testament to the commitment and expertise of our … Reviewers and highlights the increasing quality of the articles the journal publishes. (email to author from Sage, August 2013)

The most highly cited academics could sell their services in this new unregulated market and enable even obscure universities to be ranked best in the world.

Without me, the University of Cordoba is going to drop 300 places [in the Shanghai ranking]. They have shot themselves in the foot. (Rafael Luque as quoted in Ansede, 2023)

Of the 6849 authors on its Highly Cited Researchers list in 2023, Clarivate – the company that calculates these things – deemed over 1000 to be fraudulent (see Jack, 2022).

Both the higher education and academic publishing industries depend upon a basic assertion remaining unchallenged – that citations, even though gamed, actually reveal academic quality. As with every notorious swindle from Dutch tulips in the seventeenth century to the DotCom Bubble in the early twenty-first, greed exceeds scruples. Paper content became detached from paper quality. Even authors were not always familiar with the content of their papers (see Macdonald, forthcoming). A whole menagerie of animals was apparently capable of producing quality papers. Peer review has not been an obstacle to the publication of papers by Larry the Cat (Richardson, 2024). Ike Antkare (with an h-index of 94 in 2010) has never lived and yet still continues to publish papers of quality (Labbé, 2020).

To be sure, there are good souls who have enough faith in academic collegiality to labour on a submission for hours, advising author and editor how it might be improved. They swim against the tide: academic publishing is largely concerned with generating private rather than public benefit – and this includes peer review. Even the belief that peer review was once, and still is, collegial has been constructed for commercial ends. Peer review is part of a publishing production line, much like proofreading or copy-editing, demanding skill, but not expertise and certainly not loyalty to any higher end. Recognition of this stark reality would seem to be essential to effecting the change that academic publishing so desperately requires.

But enough rambling about peer review. Prometheus sticks to a traditional system of the editor selecting two referees for each submission based on their knowledge of the submission’s subject. In common with other journals, we find attracting suitable referees a huge problem and often search outside academe. And like others, we persevere with peer review, not just because it can be useful in itself, but also because it is a valuable reminder of just how vulnerable is academic publishing to exploitation by private greed.

Syed Arnan Hossain, Md. Ashraful Alam and Md. Mostafa Mushfiq Talukder look at the importance of the smartphone in Bangladesh. They are not particularly interested in smartphones as a means of communication, but rather their interest lies in the smartphone’s symbolic value, especially for young workers. They find that the smartphone is not just a status symbol, but rather essential evidence of the character and competence of its owner, especially for employers. This impression among phone owners may be misguided, but it can be strong enough for them to take out bank loans so that they may sport the very latest in smartphone technology. Hossain et al. find a new dimension to the role of technological innovation.

Kayoko Nohara, Betti Marenko, Giorgio Salani and Kohei Kanomata explore an Anglo-Japanese training programme conducted at the Institute of Science Tokyo in collaboration with Central Saint Martins, University of the Arts London. The programme is based on a translation studies approach to what the authors call ‘hybrid innovation’, a blend of design thinking and a variety of approaches to innovation found in art, science and the humanities. Where it is necessary to cope with foreign notions within a culture, translation studies helps participants engage with the unknown. Hybrid innovation functions as a mode of learning, cultivating creativity, critical thinking and reflective practice. It is a novel model for knowledge transfer and the development of innovative mindsets in both the corporate and academic worlds.

In their study of generative artificial intelligence, Jose Proenca and Carmen Ramos Vera compare the performance of five popular large language models (LLMs). Qualitative analysis finds significant variation in LLM performance. In general, LLMs excel at retrieving relevant literature, summarizing content and generating insights, but they are inconsistent in contextual comprehension, coding accuracy and depth of critical analysis. Recommendations emerge in the paper on how a solid foundation for responsible human–AI collaboration would allow both researchers and practitioners to make better use of LLMs.

Brian Martin is also interested in LLMs and particularly in their ability to plagiarise. But Martin has little time for the simplistic notion that plagiarism is merely the copying of the creative endeavours of others without permission or acknowledgement. He argues that LLMs actually commit a new sort of plagiarism quite different from typical human plagiarism in a range of significant characteristics. This he calls ‘automated plagiarism’. Martin outlines the ability of LLMs to plagiarise in a vast array of ways. He also discusses how woefully inadequate is our existing system of giving automatic rights over creative property. What incentive to be creative, asks Martin, is there in affording the creator protection for 70 years after his death? What sort of template can this possibly offer for regulating the automated plagiarism of LLMs?

Stuart Macdonald

General editor