In a front-page article in yesterday’s New York Times, Gina Kolata argues that the system of awarding grants for cancer research unduly favors research projects that make incremental advances over projects that have a smaller probability of achieving more fundamental breakthroughs by challenging established dogmas. This particular problem is part of a broader problem: Decisions on grant funding are not made using cost-benefit analysis or any systematic methodology for assessing which projects are the most promising. And that in turn is part of a broader problem still: Granting agencies don’t have much incentive to identify the procedures that are likely to lead to the socially best allocation of research dollars.
At a bare minimum, grant-making institutions ought to generate probability distributions of different levels of benefit for alternative proposed projects. I suspect that resistance to such an approach stems from recognition that subjective estimates of probabilities and benefits alike are likely to be somewhat arbitrary. How can even an expert scientist know whether there is a 1% or a 5% chance that an experiment testing an unorthodox claim will be successful? And that difficulty pales in comparison to the challenge of assessing the benefits of experiments. We might be able to estimate the benefits of a cure for cancer by measuring its effect on quality-adjusted life years, but it is difficult to assess how far toward that goal any particular successful experiment will bring us. The task is made still more complicated by the fact that some experiments will be valuable not because they confirm either the experimenters’ or the skeptics’ views, but because they produce some entirely serendipitous discoveries.
My view is that grant decisions will be better if we force scientists making assessments to give their best subjective estimates, ultimately producing a probability distribution of different possible benefit levels, even if such numbers are inherently subjective. It seems unlikely that intuitive decisionmaking will produce better results than more rigorous approaches. Scientists may worry that quantification would discourage investments in basic research relative to more applied research. The reverse seems likely to be true. The more foundational the research, the greater the potential benefits to which it may contribute, and this factor seems likely to outweigh the fact that any single highly theoretical experiment may provide only a small bit of progress. Whether I’m right or wrong about this, allocation decisions ideally should be based on rigorous analysis of this question, or at least on moderately developed back-of-the-envelope calculations, rather than on pure intuition.
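To make the arithmetic behind this argument concrete, here is a toy sketch, with entirely hypothetical numbers, of how expected-benefit scoring could compare an incremental project against a long-shot one. The probabilities and QALY figures are invented for illustration; nothing here reflects real review data.

```python
# Toy illustration (all numbers hypothetical): compare the expected
# benefit of an incremental project with that of a long-shot project,
# given reviewers' subjective probability distributions over benefit
# levels, measured in quality-adjusted life years (QALYs).

def expected_benefit(distribution):
    """Expected value of a {benefit: probability} distribution."""
    return sum(benefit * p for benefit, p in distribution.items())

# Incremental project: very likely to deliver a small benefit.
incremental = {0: 0.10, 1_000: 0.90}

# Long-shot project: usually fails, occasionally delivers a breakthrough.
long_shot = {0: 0.95, 100_000: 0.05}

print(expected_benefit(incremental))  # expected QALYs, incremental project
print(expected_benefit(long_shot))    # expected QALYs, long-shot project
```

Even with a 95% chance of total failure, the long shot's large payoff can dominate in expectation, which is the sense in which forcing reviewers to write down numbers, however subjective, could counteract a bias toward safe projects.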
One objection is that any system that the government or indeed any bureaucracy develops for making more mathematically rigorous assessments of grants may be flawed because it ignores important criteria that scientists may take into account implicitly. But it need not be government that is charged with making these estimates. An alternative to the grant system would flip government’s role to ex post evaluation of benefits and costs. Twenty-five years from now, it should be much easier for scientists to assess the relative benefit of experiments conducted today. Instead of grants, the government could place grant money into a prize fund, let it accumulate interest, and distribute the money later. This approach would give private parties, akin to venture capitalists, incentives to anticipate the benefits of research. At the least, such parties should be less risk averse than the grant agencies that Kolata describes.
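A back-of-the-envelope sketch of the deferred prize fund, using an assumed interest rate and the twenty-five-year horizon mentioned above (the dollar amount and rate are hypothetical):

```python
# Toy sketch (principal and rate hypothetical): grant money placed in a
# prize fund today, compounding annually until it is paid out as prizes
# twenty-five years from now.

def future_value(principal, annual_rate, years):
    """Fund value after compounding annually at annual_rate for years."""
    return principal * (1 + annual_rate) ** years

# e.g. $100 million at an assumed 4% annual return over 25 years
fund = future_value(100_000_000, 0.04, 25)
print(round(fund))  # roughly 2.7x the original principal
```

The point is simply that deferral more than pays for itself: the same appropriation supports a substantially larger prize pool by the time the research can actually be evaluated.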
This may seem too radical a change from our existing system of scientific funding. But it is possible to integrate a modest version of this system within the existing grant system. For example, we might set aside just 10% of current grant money for a prize fund. Grant applicants would be required to auction their rights to any prize to independent third parties, conditional on the grants being approved. The grant agency might then consider the results of the auctions, in addition to any information it ordinarily would consider. At the least, this could give the grantors cover for approving low-probability, high-benefit projects. Moreover, the practices of the third parties (What kind of models do they use? What kind of disclosure do they expect from grant applicants?) might help us identify how we could improve the government’s own procedures. Whether or not the auction participants do a better job than the government (and with relatively small stakes, they might not), the types of projects they select with their own money on the line could help inform the government about what its decisionmakers’ biases might be.