The oracles of science: On grant peer review and competitive funding

From a purely epistemological point of view, evaluating and predicting the future success of new research projects is often considered very difficult. Is it possible to forecast important findings and breakthroughs in science, and if not, then what is the point of trying to do it anyway? Still, that is what funding agencies all over the world expect their reviewers to do, even though a number of previous studies have shown that this form of evaluation of innovation, promise and future impact is a fundamentally uncertain and arbitrary practice. This is the context that I will discuss in the present essay, and I will claim that there is a deeply irrational element embedded in today's heavy reliance on experts to screen, rank and select among the increasing numbers of good research projects, because they can, in principle, never discern the true potential behind the written proposals. Hence, I think it is justified to view grant peer review as an 'oracle of science'. My overall focus will be on the limits of competitive funding and on the claim that the writing and reviewing of proposals is a waste of researchers' precious time. And I will propose that we really need to develop new ways of thinking about how we organize research and distribute opportunities within academia.

Keywords: research evaluation, ex ante evaluation, peer review, competitive funding, scientific reflection

Nothing is more inimical to the progress of science than the belief that we know what we do not yet know.
– Georg Christoph Lichtenberg ([1789–1793] 2000)

In the present essay, I argue that the ambition of using grant peer review to identify and predict future innovation, promise and general impact in research proposals must be questioned. At the same time, there is a clear need for new ways of thinking about how to organize a more rational and equitable distribution of opportunities in academia. The problematic consequences of the excessive use of quantitative metrics and performance indicators have already been thoroughly examined by many scholars, including Olof Hallonsten (2021). Following David Jaclin and Peter Wagner's (2021) recent call for a debate about the increasing evaluation of science, my intention here is to focus on an issue that goes directly to the very core of scientific creativity and scholarly communication. My focus will be on one inescapable fact, namely that every organized effort to evaluate and rank new research projects is affected by a fundamental epistemological uncertainty and ambivalence. In other words, the ex ante ('before the event') assessment of project descriptions is deeply dependent on the professional 'gut feelings' of individual reviewers and their ability to guess wisely. In this respect, they embody the role of academic soothsayers, as they try to foretell the possible futures of different competing research ideas. One of the main reasons why I also find it justified to view the academic institution of peer review as a peculiar kind of oracle is that the reviewers collectively fulfill their own prophecies by supporting certain researchers, who are then likely to be more productive (and successful?) than those who do not get funding. In my view, there is no point in trying to convince ourselves that the evaluation of grant proposals is a rational, fair and reliable practice; there is ample empirical evidence showing exactly the opposite (e.g. Jerrim and De Vries, 2020; Pier et al., 2018; Van den Besselaar et al., 2018).
In several academic contexts, the process of peer review may nonetheless offer an important source of intellectual exchange between researchers: an ongoing 'organized skepticism' takes place, for example, during seminars, conferences and workshops, and in journal peer review (cf. Brezis and Birukou, 2020; Fyfe et al., 2020; Teplitskiy et al., 2018). However, when it comes to the allocation of research money, peer review has not worked well over the last couple of decades. The competitive funding system that dominates in most OECD countries today is highly inefficient, time-wasting, unreliable and demoralizing. Even the most privileged scientific elite must spend a great amount of their precious time preparing grant applications. But for the majority of researchers, formulating a 'fundable' project will often lead to disappointment and frustration. Not always, of course, because the 'luck of the reviewer draw' can also work in favor of the less privileged. Still, for the large proportion of researchers on flexible contracts, who do not belong to the research environments where funding clusters in large programs, the funding issue will eventually become a question of professional survival and personal integrity. What kinds of epistemological risks is a young and talented researcher prepared to take, and for how long? Without reasonable funding or a stable academic position, they will, sooner or later, have to make a hard decision. And many original and idiosyncratic scholars who do not fit into the 'entrepreneurial culture', and who have difficulty learning the art of 'grantsmanship' (Roumbanis, 2019a), will often struggle in vain.
With around 85–90 percent of submitted proposals being rejected, many of them of high quality, the whole system loses its original legitimacy as a basic quality control. A large body of evidence has demonstrated that the evaluation of proposals is a rather arbitrary and biased practice. And yet, instead of admitting the limits of reason, most professionals who promote the current funding regime and its evaluation processes adhere to what Jon Elster (1989) called 'the rituals of reason'. It is difficult not to see a strong impulse of irrationality at the very heart of the way contemporary science is organized. In fact, the whole situation fits rather well within Max Weber's pessimistic analysis of the rationalization processes that spread to every sphere of human life, with support from scientific reasoning and bureaucratic administration as key components. Today, the academic community has managed to create its own unique kind of 'iron cage' by institutionalizing the old collegial system of peer review for the purpose of distributing scarce resources. But this has distorted the original meaning of scholars scrutinizing each other's work. What has been a central dimension among philosophers and scientists since the ancient Greeks, with their characteristic intellectual disposition for critical dialogue and radical quest for truth, has slowly transformed into a routinized, strategic, and highly pragmatic 'evaluation culture'. Evaluating the future impact of new research projects is, needless to say, a fundamentally uncertain endeavor: predicting the next scientific breakthrough is extremely difficult, if not impossible. This is probably why John Ziman (1983) considered grant peer review to be 'a higher form of nonsense'. Still, national research councils and private funding agencies are doing their very best to mobilize committed experts to select what they believe to be the most excellent and innovative proposals, in the hope that these will generate important new knowledge.
I think it is time to return to the essential tension between creative thinking and scholarly communication. The content of a proposal must have something that attracts the reviewers intellectually; otherwise it will not stand a chance in this fierce competition. The exchange of meaning in research is a complex issue. As with everyday language, scientific concepts can sometimes be ambiguous and vague. It has been argued that ambiguity, and the uncertainty that follows from it, can sometimes play a distinct role in the creative development of science by creating zones of intellectual engagement (McMahan and Evans, 2018). However, when it comes to crucial decision-making situations, differences in how scholars communicate and interpret new research ideas can have rather big consequences. Even if one or two reviewers in a panel think that a proposal is brilliant, a third reviewer might strongly disagree, which can be disastrous for the applicant, as it lowers the proposal's final ranking. Writing a proposal is not easy, to be sure, and it fosters certain styles of reasoning and a peculiar 'academic genre'. But thinking creatively can be immensely difficult, especially if one is dealing with complicated problems or trying to come up with original new theories. Using novel methodologies or exploring unorthodox techniques also comes with uncertainties concerning feasibility. How do you convey your own vision in a convincing and enthusiastic way? In practice, it can take many years, and several unsuccessful efforts, to find out what you are really trying to understand; and still, behind all the written words lie hidden potentials that belong to an unknown future.
But even in retrospect, how can we possibly explain all the relevant details and different stages in our thinking processes? Albert Einstein (1982: 45) was rather clear about this difficulty when he was invited to give a talk about his theoretical journey: 'It is not easy to talk about how I reached the idea of the theory of relativity; there were so many hidden complexities to motivate my thought, and the impact of each thought was different at different stages in the development of the idea.' Just try to imagine how such complexities could be described in a grant proposal before the actual discovery. That is the real crux of the matter: one cannot formulate what one does not yet know at the moment of initiation; that is practically impossible. Einstein's intellectual testimony tells us something fundamental about the partly opaque nature of research itself, as well as the playful side of reason. Deep thinking is never a completely linear process; it is also cyclical, it may contain intuitive leaps, and it may lead down blind alleys. This holds for all sorts of basic research and intellectual work; only safe projects follow predetermined pathways.
I will conclude this essay by highlighting something that sociologists of science and science and technology studies (STS) scholars have been aware of for quite some time, namely that experts within a research field rely on their developed sense of taste (Merton, 1973) or 'skilled academic intuition'. This vital cognitive sensitivity is embedded in analytical and deliberative reasoning, and as Steven Shapin (2012: 178) wrote: 'Connoisseurship and scientific judgment aren't usually considered together, but they should be.' However, this is a double-edged sword, because it also underscores the striking ambivalence that often arises from the way peer review is usually organized around the world today. The use of panels of expert reviewers who evaluate and score different subsets of proposals, and are then eventually forced to reach consensus, puts the spotlight on individual sensitivity or 'gut feelings'. For who is right or wrong when conflicting judgments about the quality and promise of a proposal have to be settled in negotiations under time pressure? Who has the best sense of taste for predicting future success among many competing new ideas? This is what makes grant peer review look like an oracle of science. One can always argue in retrospect that the projects that were supported were indeed productive in terms of published articles, citation indexes, and further funding. Still, this leaves open the question of whether some of the rejected projects would have generated even more important outcomes, because this counterfactual dimension is impossible to measure accurately.
There is good reason to halt the unsustainable growth in the production and evaluation of research proposals altogether, and instead to find new creative solutions to the question of how research should be supported in the future. For example, if all public funding went directly to university departments and research centers, without extensive grant writing and peer review processes, then resources could be distributed locally and less time would be wasted. Other criteria that promote a more dynamic system could be tested and carefully established. A wise and experienced scholar like Aaron Sloman (2014) has, in my view, presented a rather compelling sketch of how the allocation of block funding could work in practice. I will not go into all the details here, but one thing he really underscores is the importance of giving all junior researchers strong support at the early stages of their careers, because that is a more just basis for future allocation of resources and for hiring decisions. It would give every young researcher the best chance to prove themselves and to show their independence as creative thinkers. The opportunity to continue doing research could later be based on previous merit among employed and already credible researchers and research groups. However, the continuous allocation of opportunities should not depend only on numbers of publications, citation indexes and previous rewards, because these are to a large extent pointless (see Hallonsten, 2021). What deserves support, according to Sloman, should instead rest on other academic values, in order to encourage scholars to investigate hard and deep scientific problems: 'more open-ended, less predictable research projects'. Furthermore, such research qualities as working on strange and original ideas, giving exciting presentations and taking epistemic risks should be taken seriously and rewarded.
The local distribution of resources at the university department could also draw on the impartiality of chance: 'A few times each year [. . .] a limited amount of funding [. . .] will be allocated on the basis of a lottery' (Sloman, 2014). A sensibly designed lottery system would be an excellent alternative to peer review; it would be cheaper and less wasteful for academic communities. In fact, a lottery would not only save time and resources, but also contribute to a more dynamic selection process and increase epistemic diversity, fairness, and impartiality within academia (Roumbanis, 2019b). A lottery could be designed and organized in many different ways, used in various situations, and combined with other selection criteria. I think there could be a plethora of novel methods for organizing opportunities and managing time in academia, including teaching duties, administration, and research activities. My suggestion is that we stop the destructive funding competitions that in many respects shape the very conditions of science today. And finally, breaking with the dominant funding regime also entails that we stop spending time writing proposals and relying so excessively on the oracular power of peer judgments to predict the fate of new research ideas.
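To make the idea of a 'sensibly designed lottery' concrete, here is a minimal illustrative sketch in Python. It is my own toy example, not Sloman's or anyone else's actual design: all names and parameters are assumptions. It keeps only a basic eligibility screen as the remaining peer judgment, and then draws winners at random from the eligible pool, which is one of the simplest ways a partial lottery could be combined with other selection criteria.

```python
import random

def funding_lottery(proposals, eligible, num_grants, seed=None):
    """Allocate num_grants among proposals by uniform random draw.

    proposals  -- list of proposal identifiers (hypothetical labels here)
    eligible   -- function applying a minimal quality threshold,
                  the only peer judgment retained in this sketch
    num_grants -- how many proposals can be funded
    seed       -- optional seed, so a draw can be audited/reproduced
    """
    rng = random.Random(seed)
    # Screen first: the lottery operates only on proposals that
    # pass the basic quality threshold.
    pool = [p for p in proposals if eligible(p)]
    if len(pool) <= num_grants:
        return pool  # enough funding for every eligible proposal
    return rng.sample(pool, num_grants)

# Hypothetical example: five proposals, one screened out, two grants.
proposals = ["P1", "P2", "P3", "P4", "P5"]
winners = funding_lottery(proposals,
                          eligible=lambda p: p != "P5",
                          num_grants=2,
                          seed=42)
```

Seeding the draw is one possible design choice: it makes the allocation reproducible for audit purposes while keeping the selection itself impartial among eligible proposals.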

Funding
The author received no financial support for the research, authorship, and/or publication of this article.