Zentrum für interdisziplinäre Forschung
More than Luck: Rethinking Research Funding
by Rima-Maria Rahal (Wien/Bonn)
In July, researchers from six countries, with disciplines ranging from law and statistics to neuroscience and bibliometrics, came together with funders from institutions such as the German Research Foundation (DFG) and the VolkswagenStiftung to talk about how research funding works – and how it should work. (Website of the workshop)
In Germany, nearly half of all research funding comes from external grants. Securing this funding, however, is often a time-consuming and competitive process that many criticize as inefficient and ill-suited to identifying the most innovative ideas. In the sweltering Bielefeld summer heat, the workshop “More than Luck: Rethinking Research Funding” at ZiF put plenty of provocative questions on the table: How can research funding processes become fairer and more effective?
The Problem with the Status Quo
Current funding mechanisms range from rigorous peer review and invitation-only calls to randomized lotteries. Funding formats also differ: some offer one-time grants, while others provide long-term support. Each system has its strengths and weaknesses – but how well do we really understand them?
The design of the funding process influences what type of research is pursued – and by whom. Yet the criteria for recognizing a strong research idea are often vague or inconsistently applied. This lack of clarity underscores the need for open, critical discussions about which funding mechanisms are effective, which are not, and how they might be improved.
One of the core criticisms addresses the assessment of research quality: How can we predict whether a proposal has the potential to be turned into excellent research? And how do we define what constitutes excellent research? Traditional selection criteria like publication counts or citation metrics fall short of adequately capturing these concepts. At the same time, these metrics may also disadvantage early-career researchers or individuals from underrepresented or marginalized groups.
The stakes are high: research funding involves billions of euros and years of researchers’ time, and it directly affects the quality of scientific progress. Creating conditions that allow the most promising ideas to flourish, regardless of the applicant’s resources, is essential for a fair and effective research landscape.
New Ideas, New Tools
Building on this critical assessment of the funding landscape, the workshop also dove into concrete alternatives. First attempts to include lotteries – whether as a means to reduce a large number of eligible people to a smaller pool of applicants, or to select winning projects among submissions rated highly in a prior assessment process – suggest that selection procedures could benefit from introducing randomness in carefully designed ways. With no detectable losses in quality, but enormous potential to reduce costs and boost the diversity of ideas and applicants, we may yet see more work on integrating lotteries into the funding process; a sketch of one such partial lottery follows below.
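To make the mechanism concrete, here is a minimal sketch of one possible “partial lottery” of the kind discussed at the workshop: proposals scoring above a high bar are funded outright, and any remaining budget slots are raffled among all proposals above an eligibility threshold. The function name, thresholds, and scores are illustrative assumptions, not any funder’s actual procedure.

```python
import random

def partial_lottery(proposals, scores, budget, fund_threshold, lottery_threshold, seed=None):
    """Hypothetical two-stage selection: fund clear top scorers outright,
    then raffle remaining slots among all other eligible proposals."""
    rng = random.Random(seed)
    # Stage 1: proposals whose review score clears the "clearly fundable" bar.
    funded = [p for p in proposals if scores[p] >= fund_threshold]
    # Stage 2: proposals above the eligibility bar but below the top bar enter the lottery.
    pool = [p for p in proposals if lottery_threshold <= scores[p] < fund_threshold]
    slots_left = budget - len(funded)
    if slots_left > 0 and pool:
        funded += rng.sample(pool, min(slots_left, len(pool)))
    return funded

# Made-up example: ten proposals with reviewer scores on a 0-10 scale.
scores = {f"P{i}": s for i, s in enumerate([9.1, 8.7, 7.9, 7.5, 7.4, 7.2, 6.8, 6.1, 5.0, 4.2])}
winners = partial_lottery(list(scores), scores, budget=5,
                          fund_threshold=8.5, lottery_threshold=7.0, seed=42)
print(sorted(winners))
```

In this sketch, peer review still does the work of separating fundable from unfundable proposals; randomness only breaks ties within the broad middle band, which is where reviewer judgments are known to be least reliable.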
The workshop participants also discussed whether hot-topic technologies such as Artificial Intelligence and large language models could play a role in improving funding decisions. If AI could assist in evaluating applications, the burden on reviewers could be reduced considerably. Preliminary results, however, dampened optimism about the cost-saving potential of automated reviews: the models tend to rate projects more favorably than human reviewers do, and would recommend many projects that humans would not. Participants also discussed the ethical issues involved: feeding proposal data to algorithms, perpetuating biases embedded in the historical data used to train them, and relying on energy-intensive computations.
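As a toy illustration of the leniency gap described above (with entirely made-up numbers, since the preliminary results presented at the workshop are not reproduced here), one could compare the funding recommendations implied by human and automated scores on the same proposals:

```python
# Hypothetical scores for five proposals on a 0-10 scale; all values are invented.
human_scores = {"P1": 6.0, "P2": 8.2, "P3": 5.1, "P4": 7.8, "P5": 4.9}
ai_scores    = {"P1": 7.5, "P2": 8.8, "P3": 7.1, "P4": 8.0, "P5": 6.6}
threshold = 7.0  # assumed funding cutoff

# Which proposals each evaluator would recommend for funding.
human_fund = {p for p, s in human_scores.items() if s >= threshold}
ai_fund = {p for p, s in ai_scores.items() if s >= threshold}

print("Recommended only by the AI:", sorted(ai_fund - human_fund))
print("Recommended by both:", sorted(ai_fund & human_fund))
```

If the automated scores run systematically higher, as in this invented example, the AI recommends a superset of the human picks, which limits its usefulness as a cost-saving filter.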
Following the workshop, some of the participants are now collaborating on a joint contribution synthesizing their discussions and key insights. The shared goal is to critically examine current research funding practices, to map insights from existing research onto the challenges identified, and to propose further research toward evidence-based alternatives.
Participants of the Workshop. Photo: Universität Bielefeld/ZiF