Predatory publishing in management research: A call for open peer review

Predatory journals have emerged as an unintended consequence of the Open Access paradigm. Predatory journals only supposedly or very superficially conduct peer review and accept manuscripts within days to skim off publication fees. In this provocation piece, we first explain how predatory journals exploit deficiencies of the traditional peer review process in times of Open Access publishing. We then explain two ways in which predatory journals may harm the management discipline: as an infrastructure for the dissemination of pseudo-science and as a vehicle to portray management research as pseudo-scientific. Analyzing data from a journal blacklist, we show that without the ability to validate their claims to conduct peer review, most of the 639 predatory management journals are quite difficult to demarcate from serious journals. To address this problem, we propose open peer review as a new governance mechanism for management journals. By making parts of their peer review process more transparent and inclusive, reputable journals can differentiate themselves from predatory journals and additionally contribute to a more developmental reviewing culture. Finally, we discuss ways in which editors, reviewers, and authors can advocate reform of peer review.

available to anyone. Such Article Processing Charges (APCs) are paid by the authors of a manuscript, their institutions, or third-party funding bodies. Open Access publishing has the potential to make the dissemination of academic knowledge faster and more equal, since new results are immediately available to readers around the world, independent of their ability to financially contribute to the academic publishing system (Suber, 2016). However, the APC-based Open Access system also brings new problems for academic knowledge production. One of them is the rise of predatory Open Access journals (Harzing and Adler, 2016).
Predatory Open Access journals accept submitted manuscripts very quickly (sometimes within a few days) and only supposedly or very superficially conduct peer review, in order to skim off as many APCs as possible (Xia, 2015). The emergence of predatory Open Access journals is fueled by growing institutional Open Access funds and growing pressure on academics worldwide to publish in international and peer reviewed journals (Beall, 2013, 2018; Djuric, 2015; Omobowale et al., 2014). Predatory publishing surfaces problems that arise from path dependencies and market concentration in the academic publishing business. It also makes visible problems with the global institutionalization of peer review, which over the last decades has developed into "a unifying principle for a remarkably fragmented [academic] field" (Biagioli, 2002: 34). The more the ideal of international peer review gets exported into academic fields that have traditionally assessed academic quality through other means (e.g. in the Global South), the greater the demand for publishing outlets that are able to link local cultures of knowledge production with the ideal of international, blinded peer review.
Since the early 2000s, the number of Open Access journals has increased rapidly. In 2002, the Directory of Open Access Journals (DOAJ) listed 33 journals. In 2019, it listed about 12,000 reputable journals. With a temporal lag, this growth has been matched by the market for predatory journals. According to estimates, the "population" of predatory journals grew from 1800 in 2010 to more than 8000 in 2014. For the year 2014, Shen and Björk (2015) estimated the size of the predatory publishing market at around 74 million USD, compared to 244 million USD for reputable Open Access journals and 10.5 billion USD for the entire global subscription market for scholarly journals.
To date, most research on predatory publishing has focused on the natural and life sciences. With this provocation piece, we intend to fuel a nascent debate on predatory publishing in the large field of management research. Although reflexive debates about the methods, tools, and infrastructures of knowledge production are an integral part of management research, management scholars only recently began to examine the implications of Open Access publishing for their field (Beverungen et al., 2012; Thananusak and Ansari, 2019). While most of these accounts praise the potentials of Open Access outlined above, they also agree that in order to address its challenges, management scholarship needs "new governance mechanisms to control and guarantee the quality of such new publishing options" (Harzing and Adler, 2016: 156). In this context, predatory Open Access journals bring to the fore the more fundamental question of what makes a "qualitative" contribution, as well as the debate about the legitimacy of dominant, mostly Western ideals of knowledge production and evaluation vis-à-vis alternative epistemological ideals (e.g. Grey, 2009; Wedlin, 2011).
In this article, we respond to this call for new governance mechanisms in four steps. First, we outline the most pervasive threats that predatory publishing poses for management research. Second, we empirically assess the spread of "predators" in the field of management journals and find that it is about time to launch measures for population control. Third, we propose open peer review as a new mechanism to govern the production of management knowledge. Open peer review, we argue, comes with two benefits to our profession: increased transparency of "serious" peer review makes it easier to identify and de-legitimize predatory journals, which are unable to create transparency of their "fake" peer review. Increased dialogue between authors, editors, reviewers, and other interested parties can increase the rigor and relevance of management research. Finally, we conclude with a discussion of the more general question of how digital transformations (including the ones underlying Open Access publishing) change the dynamics of academic knowledge production.

Threats of predatory publishing for management research
Why should management research care about predatory publishing? And are there threats of predatory publishing that are specific to management research as a social science, in contrast to the natural and life sciences? Besides the obvious criticism that predatory journals hunt for resources especially in countries that lack sufficient academic funding anyway (Omobowale et al., 2014), we see two more ways in which predatory journals can harm our academic community. Both ways are directly linked to what the philosophy of science describes as the "problem of demarcation" (Popper, 1934): the question of how to distinguish legitimate from illegitimate scientific knowledge.
First, predatory journals are a threat to the field of management research because they can be used strategically to legitimize management ideologies, morally questionable business models, or discriminatory HR practices. When journals claim to perform peer review, but refrain from doing so, they provide an ideal infrastructure for the "sciencewashing" of idiosyncratic ideas. While members of the academic community can at least draw on the tacit reputation of a journal as an indicator of academic quality (which of course is also not without caveats, see Macdonald and Kam, 2007), non-academic actors such as journalists might lack this in-depth knowledge about a domain and trust the existence of the peer review label as a signifier for academic rigor. While it seems unlikely that members of a non-academic audience directly import concepts or practices from such journals, the proliferation of problematic ideas and terminology (e.g. racist or sexist) in ostensibly peer reviewed journals might shift normative baselines for what can and cannot be said in other social arenas including reputable journals.
Second, predatory journals are a threat because they can be used to de-legitimize the management discipline (or sub-disciplines) through bogus articles. In 1996, the US-American physicist Alan Sokal famously published a bogus article in the academic journal Social Text (Sokal, 1996b), which he subsequently publicly disclosed as a performative act to criticize the intellectual rigor of postmodern cultural studies ("Sokal affair"; Sokal, 1996a). In a similar but more recent case, the philosopher Peter Boghossian and the mathematician James Lindsay orchestrated an attack on the gender studies discipline with their bogus article "The conceptual penis as a social construct," which they published in the interdisciplinary and seemingly predatory journal Cogent Social Sciences (Boghossian and Lindsay, 2017). In their attempt to de-legitimize the discipline, they directed attention to an overly shallow and unscientific peer review, but failed to reflect on, or perhaps even deliberately obfuscated, the questionable nature of the journal they targeted. Due to editorial decisions, fields like gender studies are marginalized in reputable journals as well. For example, we find polemic attacks on certain fields in both reputable and predatory journals. However, the case of the "conceptual penis" shows that predatory journals can additionally be used as a stage on which strategic hoaxes can be performed quite easily. The subliminal skepticism toward certain fields, fostered by the exclusionist practices of reputable journals, creates a receptive audience for such hoaxes within and beyond academia.
Both threats, the legitimation of non-scientific ideas and the de-legitimation of the discipline as non-scientific, arise from the paradoxical situation that the more is known about predatory journals, the more difficult it seems to demarcate them from (some) reputable journals (Teixeira da Silva, 2017). To illustrate why new governance mechanisms for the production of management knowledge are needed, we propose a classification of Open Access journals based on their operating procedures (Table 1). While all three types in our classification show below-average quality, only junk and fake journals are predatory in nature. The classification hence points to the governance of the peer review process as a way to curb the rise of predatory journals.
Aspirant journals are of below-average academic quality, mainly because they are not able to build up a relevant community, attract respected editors and reviewers, and, therefore, high-quality manuscripts. Aspirant journals have a below-average peer review, but are not predatory. Even though they charge APCs, they do so not exclusively to maximize profits, but also pursue an academic agenda. An example of an aspirant journal could be a student-run journal with scarce resources (Yeates, 2016).
Junk journals charge APCs to publish a manuscript after a short turnaround time. Publication is preceded by a formal but superficial peer review. Short, generic and predominantly positive reports are presented to the authors. However, the manuscript is generally accepted without major changes. Junk journals are primarily profit-driven and generally lack an academic agenda. Prime examples of junk journals are many of those published by the Indian publishing house OMICS (Butler, 2013).
Fake journals do not conduct any peer review (although this may be claimed in the external presentation), but charge APCs to publish a manuscript. In 2013, the biologist John Bohannon submitted an error-ridden study on a new cancer drug he had ostensibly developed to 304 suspicious journals. A total of 157 of them either accepted the manuscript for further review or for immediate publication, the latter indicating that no peer review had taken place at all (Bohannon, 2013). Junk and fake journals conduct aggressive spamming to generate manuscripts and names for editorial boards. Sometimes, such journals even design a journal website that looks very similar to that of a respectable journal, a practice referred to as "hijacking" (Lukić et al., 2014). For junk and fake journals, the alleged peer review is only a necessary and useful façade for skimming off APCs. As Harzing and Adler (2016) put it, "the primary goal of the journals does not appear to be the advancement of science and scholarly discourse, at least not in ways that conventional scholars would recognize as valid" (p. 147).

Disciplinary numbers: an overview of predatory management journals
The significance of predatory publishing is contested. Although the volume of articles in predatory journals increased from 53,000 to 420,000 per year between 2010 and 2014 (Shen and Björk, 2015), some commentators still consider the issue of predatory journals a mere "storm in a teacup" that does not require broader professional attention (Leininger, 2018). We therefore use this section to present some disciplinary numbers: descriptive statistics on predatory journals in the management discipline, which substantiate our call for disciplining peer review anew.
In 2010, the librarian Jeffrey Beall, a fierce critic of the deficiencies of the Open Access paradigm, began to assemble a list of potentially predatory publishers and journals based on his own research and secondary data ("Beall's List"). After Beall ended his project under unclear circumstances in 2017, the scholarly analytics company Cabells launched a blacklist with initially 8300 academic journals that fail on basic quality criteria (Bisaccio, 2018). While we were working on this provocation piece, Cabells granted us access to a dataset of all journals on the blacklist that were categorized as relating to the field of "management" on 13 September 2018. A journal is added to the blacklist when it meets one of 66 blacklist criteria set by Cabells. The dataset we obtained consisted of 661 Open Access management journals (out of a total of 7790 Open Access journals on the list). After reducing the blacklist criteria to the 28 that relate directly to the quality of peer review, we were left with 639 entries, which we analyzed further.1

Table 1. Types of Open Access journals with below-average quality.

Journal type        Characteristics                               Orientation
Aspirant journal    Sometimes APCs, below-average peer review     Primarily science-driven
Junk journal        APCs, formal but superficial peer review      Primarily profit-driven
Fake journal        APCs, no peer review                          Purely profit-driven

From this pre-processed dataset, we first calculated the number of violations (i.e. criteria that were met) per journal (Figure 1). The average number of violations per journal is 2.49, with a median of two. Although we found one journal with eight violations and one with seven (0.15%), the larger part of the journals in our dataset had three (118 journals, 18.46%), two (293 journals, 37.4%), or only one violation (113 journals, 17.68%). Subsequently, we calculated the frequency of individual violations across our dataset (Figure 2). The most frequent violations among the blacklisted management journals are a missing peer review policy on the journal's website (344), the absence of an editor or editorial board on the journal's website (301), and prominently displayed announcements of rapid publication or unusually quick peer review (255).
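The two calculations just described (violations per journal, and the frequency of each individual violation across the dataset) can be sketched in a few lines of Python. The journal names and criterion labels below are invented for illustration; they are not Cabells' actual identifiers:

```python
from statistics import mean, median
from collections import Counter

# Hypothetical per-journal records: the set of blacklist criteria each
# journal violates (labels are illustrative, not Cabells' wording).
journals = {
    "journal_a": {"no_review_policy", "no_editorial_board"},
    "journal_b": {"no_review_policy", "no_editorial_board", "rapid_publication_promise"},
    "journal_c": {"rapid_publication_promise"},
}

# Number of violations per journal (cf. Figure 1)
counts = [len(criteria) for criteria in journals.values()]
print(mean(counts), median(counts))  # 2.0 2

# Frequency of each individual violation across the dataset (cf. Figure 2)
freq = Counter(c for criteria in journals.values() for c in criteria)
print(freq.most_common())
```

On the real dataset, the same two aggregations yield the distribution in Figure 1 and the violation ranking in Figure 2.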
Our analysis shows that predatory management journals make up an impressive 8.5 percent of all predatory Open Access journals on the blacklist. However, our analysis further suggests that the real number of predatory management journals might be even higher than the 639 we worked with. Predatory journals do their best to obscure their predatory nature. As the distribution of blacklist violations shows (Figure 1), only very few of them fail miserably at pretending to be a reputable outlet. The great majority only ended up on the blacklist due to one or two flaws in their façade. Of course, blacklists need to be consulted with great care. However, even if the list at hand contains a few false positives ("aspirant journals" that at least attempt to organize serious peer review), the skewed distribution suggests that there is a considerable number of false negatives ("junk" and "fake journals" that maintain the façade of a reputable peer review). Thus, services like Cabells or thinkchecksubmit.org are valuable defensive initiatives that can help academics to navigate around predatory Open Access journals. But their assessment of journals is limited to publicly available information and clues regarding credibility. In the following, we propose another solution to the problem of predatory publishing that puts reputable journals in a more offensive position by changing the rules of the peer review game.

Why we need open peer review
Ever since we started playing the "game of peer review" (Raelin, 2008), we have been debating its rules. In her essay Toward a bill of rights for manuscript submitters, Judith Clair (2015) argues that authors should be guaranteed a right to move through the review process without excessive delay, as well as a right to an evaluation based on objective criteria. As we have shown above, predatory publishing has bent the rules of peer review in such a way that the right to timeliness has completely forced aside the right to an objective evaluation. What we need is an adjustment to the rules of the peer review game that balances out both rights under the new paradigm of Open Access and APC-based business models. In this section, we therefore propose open peer review (OPR) as a set of practices that can curb predatory journals and at the same time increase the rigor and relevance of reputable management research.
Ideas for more open forms of peer review have spawned from the critique of double-blind peer review, which developed in the natural sciences after the Second World War and was adopted by many other fields, including management research, over the second half of the 20th century (Spier, 2002). In the field of management research, issues of recurrent concern include the unreliability, inconsistency, delay, unaccountability, or social biases of double-blind peer review (Osterloh and Kieser, 2015). Since the early 1990s, scholars across fields have experimented with ways in which the double-blindness could be opened up in order to mitigate these problems. From these experiments, first the label and then a number of definitions for OPR have emerged. Reviewing 122 of these definitions, Ross-Hellauer (2017) identified seven specific OPR practices (Table 2).
Against the backdrop of predatory publishing, we find that Ross-Hellauer's OPR practices can be grouped into two categories, each of them with its own implications for the dynamics of knowledge production. Some OPR practices provide outsiders with a vista into the peer review process. Other OPR practices modulate existing, or create new, ways of communicating between authors, editors, reviewers, and other interested parties. As a strategic response for reputable journals in opposition to predatory journals, practices of transparent peer review seem particularly apt. To foster developmental reviewing, and hence to improve the rigor and relevance of management scholarship, practices of dialogical peer review seem fruitful.

Transparent peer review: curbing predatory journals
Transparent peer review practices can be a means for reputable journals to differentiate themselves from predatory journals. When reputable journals decide to make visible the laborious work of authors, reviewers, and editors, predatory journals will not be able to match these efforts. It thus becomes easier to identify and de-legitimize journals that lack proper peer review. To figure out which form of visibility can best be introduced to their community, we propose that journals can experiment with open identities, open reports, and some types of open platforms.
Open identities are an alternative to the prevailing anonymity in the peer review process. In most fields, peer review is organized as a single-blind or double-blind process. To our knowledge, most management journals follow a double-blind policy, where reviewers and authors do not know each other's identities, but where editors are known to all parties. When practicing open identities in peer review, authors and reviewers know each other's identities. Once an article is published, it would indicate not only the names of its authors but also those of the reviewers who commented on the manuscript. An alternative would be to disclose not the names of the reviewers and authors, but the name of the reviewer's institution, department, or working group. We assume that fake journals will not be willing to disclose the identities of their reviewers, as this would reveal their fictionality, unsuitability, or overload. When the practice of open identities further requires naming an institutional website or an ORCID ID, faking reviewer identities can be rendered discouragingly "costly" for predatory publishers. As an additional effect beyond the demarcation of predatory journals, open identities can help marginalized scholars to mobilize and push for greater representation in the reviewer pools of reputable journals. In the field of management research, it is already good practice for many established journals to publish a list of participating reviewers at the end of the year. We therefore believe that the step toward open identities is smaller than it used to be. Moreover, although management journals still maintain the façade of double-blind review, given the growing specialization of subfields and the fact that the titles of many conference presentations (e.g. in the Proceedings of the Annual Meeting of the Academy of Management) and working papers are published on the Internet (e.g. on SocArXiv), reviewers can often infer the author of a manuscript they are reviewing anyway, while authors have little chance to infer who is reviewing their papers.
Open reports mean that full reviews or summaries are published alongside the final journal article. While this might be difficult for print versions of journal articles, it seems unproblematic to publish them online. Many journals in the field of management already invite authors to enhance their accepted manuscript with comprehensive online appendixes and multimedia content (e.g. Academy of Management Discoveries). We can assume that the more reputable journals publish reviews alongside full articles, the more pressure will be exerted on predatory journals, as showing their overly shallow reviews or admitting that none exist might not only discredit them but also have legal consequences. To mitigate some of the threats of predatory publishing (e.g. sciencewashing) through open reports, journalists and practitioners would need to internalize open reports as an indicator of trustworthiness. Open reports could be combined with open identities for published articles. This way, the quality of a review could directly enhance the reputation of a reviewer. However, to fight off predatory journals, open reports would also be effective when the identities of reviewers remain undisclosed. In management research, it is already common practice for research groups to discuss reviews that their members have received and need to respond to, for example, in PhD seminars on academic writing or at workshops of local research networks like the Organisation Theory Research Group (OTREG) in the United Kingdom.
Open platforms mean that the peer review is facilitated by an organizational entity other than the journal in which an article is to be published. In this case, authors submit their articles to independent platforms, which organize the review process. Journals then receive these reviews from the platform instead of soliciting them on their own. The link between journal and platform can be organized in different ways. On the one hand, it is possible for platforms to have partner journals that can browse the reviews and make publication offers. On the other hand, it is possible for the platform to forward the reviews to the author's preferred journal. With regard to predatory journals, we assume that a larger part of APCs in this model will be shifted to the review platforms, making the academic publishing market less attractive for predatory journals. Furthermore, we imagine that partnering with an independent platform can become a quality label for academic journals, shifting signaling power from the mere label of peer review toward the reputation of such platforms. To prevent platforms from simply reproducing power structures from the field of reputable journals (e.g. the composition of reviewer pools), their funding should be independent of such journals and scholarly associations. For example, OPR platforms could follow the model of the Open Library of Humanities, which is funded through grants and a membership model for libraries and other research institutions. In the field of management, independent platforms are increasingly used to incentivize reviewers by turning reviews into a measurable research output. The platform Publons already collects and verifies information on reviews and reviewers. Reviewer profiles can then be added to a CV and included as a criterion in formal selection and tenure processes.

Dialogical peer review: fostering rigor and relevance
So far, calls for more developmental and less punitive reviewing have focused on the need for individual "skills, roles, and techniques" (Ragins, 2018: 159). We think that dialogical peer review practices are an organizational means to foster developmental reviews and hence to create better and more interesting research articles. Open participation allows the wider academic community and other interested parties to contribute to the review process. These reviewers can contribute either full, structured reviews or shorter comments that complement rather than replace formal, invited review reports. Commenting can either be open to anyone, without the need to provide a name, or require verification such as a minimum number of published articles and a login with one's full name (e.g. through an ORCID ID). Research on open participation has shown, however, that such practices require some form of closure to avoid forms of "exclusionary openness" (Dobusch et al., 2019). To avoid "trolling" and to realize the positive potential of open participation in peer review, this practice would require editors or journal administrators to moderate incoming comments and reviews. Open participation has been described as an OPR practice that is particularly suited to the social sciences, as reviewing here puts great emphasis on "originality, creativity, depth and cogency of argument, and the ability to develop and communicate new connections across and additions to existing texts and ideas" (Ross-Hellauer, 2017). Since this description holds true for a large part of management research, we think that open participation could significantly enhance the quality and relevance of scholarship through broader dialogue (Bornmann et al., 2012; Fitzpatrick and Santo, 2012). Furthermore, management education has always been interested in strong and vivid links to practitioners.
Many papers in management research-also in high-reputation journals-are co-authored by theoretically inclined practitioners. Why should these community members be excluded from peer review?
Open interaction means that direct reciprocal discussion between reviewers, authors, and editors is encouraged. One way of interacting more openly is that reviewers can comment on each other's reviews before they get sent to the author(s). Alternatively, reviewers could be required to come to a joint decision, based on which the editor compiles a single peer review letter. This variant addresses the problems many authors face when receiving contradictory reviews of their manuscripts and has already been implemented by the journal eLife (Schekman et al., 2013). In its arguably most open variant, authors, reviewers, and one or more of the editors would come together in an interactive collaboration stage of the peer review process. Such a model has been tested by the publisher Frontiers (2016). In many management journals, it is already the case that authors can reach out to editors for soft guidance on how to approach issues in their reviews, or how to handle contradictory demands from the reviewers. Open interaction could formalize these informal processes, making them more accessible for all members of the peer review system, especially for junior scholars who might be insecure about whether contacting the editor is appropriate or not.
Open pre-review manuscripts means that authors make their manuscripts immediately accessible via the Internet, either in advance of or in synchrony with formal peer review procedures. Many preprint servers (SSRN or SocArXiv for management research), institutional repositories (LSE Research Online), catch-all repositories (Zenodo, Figshare), and some publisher-hosted repositories (PeerJ Preprints) allow authors to immediately make their work available. Open pre-review is a practice that is complementary to the traditional review process, while effectively turning double-blind into single-blind reviewing. By publishing their manuscripts online, the process of developing a final article can become more developmental, as scholars can collect broader feedback on their unpublished manuscript and use this feedback when redrafting the paper as part of the formal review process. Open pre-review manuscripts could be a research practice independent of the formal review process; however, journals or reviewers could make the publication of open pre-review manuscripts a formal requirement (Kriegeskorte, 2012; Kriegeskorte et al., 2012).
Open final-version commenting, finally, invites scholars and members of the broader public to review or comment on the final "version of record" publication. The Internet has fostered many new ways to communicate and provide feedback on research output. Today, many journals offer commenting sections on published articles, although they are not heavily used (Walker and Rocha da Silva, 2015). Academic social networks (Mendeley, ResearchGate, and Academia), Twitter, and personal or institutional research blogs, however, are very popular. In the field of management research, open final-version commenting could be a very helpful instrument to improve and develop review articles, for example, those published by the Academy of Management Annals. Through open final-version commenting, the authors could be pointed toward new and relevant publications that would amend certain sections of the review (using tools like the annotation software hypothes.is). Changes in the article would be noted in a "log file" at the end or beginning of the article indicating the date and nature of the change. In this way, the relevance of comprehensive reviews could be increased, resources for new reviews could be saved, and entirely new reviews would only become necessary when taking a radically different angle on a certain body of knowledge. An established platform that allows for post-publication commenting is PubPeer.
Changing some rules of knowledge production, for the better

Management research as an academic field is constantly concerned with the necessity and possibility of meaningful engagement with practitioners and forms of knowledge that originate in praxis. While many academics describe agonistic encounters with practitioner-outsiders as an opportunity for learning, the situation with predatory journals is different. Predatory journals challenge the established regime of academic knowledge production from the inside. As we have shown above, predatory journals unveil a major deficiency of the peer review game: it is the façade, not the process, that legitimates its outcome.
Kathy Dean and Jeanie Forray (2018), coeditors of the Journal of Management Education, have recently argued that "over a combined 15 years of experience as action editors and editors-in-chief, [they've] come to believe that peer review as it currently exists is unsustainable, and that this reality threatens the future of all academic scholarship" (p. 164). We agree with them and believe that the rise of predatory publishing should be a trigger to experiment with more open forms of peer review. OPR practices can not only curb predatory journals but also lead to more rigorous (through dialogue within the academic community) and relevant (through dialogue with other interested parties) management research. We call upon our discipline to experiment with some of the practices outlined above rather than prematurely converging on any of them. A major reason for this is that OPR is not without risk. While there is debate about the general advantages and disadvantages of each of the individual OPR practices (Ross-Hellauer, 2017), we see two challenges that are more specific to the management discipline.
On the one hand, OPR might evoke criticism of research methods from voices that have not previously been raised. We assume that some methods, despite being made transparent, will remain incomprehensible to audiences outside thematic academic communities. Authors, reviewers, and editors together will then need to meet accusations of pseudo-science with explanations of why some research questions demand methodologies that are less congruent with the public image of science than others. On the other hand, predatory journals could continue to exist even if serious journals adopt OPR practices. We should not assume that the adoption of OPR practices will magically eradicate predatory journals. Rather, we understand OPR as a resource that journals, professional academic associations, and funding institutions can use to de-legitimize predatory journals. While de-legitimation is already at work through tools such as blacklists, we think that OPR provides an even better tool because it does not involve the risk of erroneously classifying serious journals as predatory ones.
Calling out fake peer review, however, requires successful advocacy for OPR practices first. Different roles in the review process allow for different forms of advocacy work. Editors and editorial board members are in a favorable position to advocate for OPR, as they are the ones with the authority to set the rules of the peer review game. In the well-documented case of the subscription-based linguistics journal Lingua, the entire editorial board resigned simultaneously in order to collectively launch the new Open Access journal Glossa. Although we do not see OPR as a problematic issue for academic publishers, editors can leverage such stories when facing resistance from publishers. For more traditional journals, it seems unlikely that the entire peer review process will be radically opened without pilot projects. We therefore recommend that editors advocate for experiments with openness in a supplementary section of the journal (e.g. an essay or dialogue section). Reviewers are in a favorable position to advocate for OPR as well, especially when they work in a field in which senior experts are relatively scarce but highly attractive to editors. In such cases, reviewers can make their willingness to review conditional on the review reports and/or the original manuscript being made openly available, as described above in the case of Nikolaus Kriegeskorte. At a minimum, reviewers can individually set an example by publishing their reviews on OPR platforms such as Publons.
At first view, authors seem to be in an unfavorable position to advocate for OPR. Indeed, untenured early-career scholars especially oftentimes feel unable to submit to anything other than the few high-impact legacy journals in the field of management research. In the face of ever-increasing submission numbers at the leading management journals, requests for OPR might simply result in an instant rejection. However, we think that researchers can advocate for more open forms of peer review in at least two ways. On the one hand, they can serve as ad hoc reviewers for journals that publish management research and already experiment with greater openness, such as Business Research or Ephemera. On the other hand, they can support candidates who run for positions in our professional bodies and who have expressed an interest in greater openness in academic publishing. As formal representatives in these bodies, such candidates can substantially shape the course of the associated academic journals (e.g. Organization Studies, Academy of Management Journal).