Key Performance Indicators: Power as Knowledge

There is an excellent review essay by Simon Head on the future of British universities in the NYRB. It discusses the Strategic Plan of the Higher Education Funding Council for England (HEFCE), including the “Research Assessment Exercise (RAE) held every six or seven years.” In the 2008 exercise, panels of 10 to 20 specialists in 67 fields evaluated the submitted work. As the author explains,

The panels must award each submitted work one of four grades, ranging from 4*, the top grade, for work whose “quality is world leading in terms of originality, significance and rigour,” to the humble 1*, “recognized nationally in terms of originality, significance and rigour.” The anthropologist John Davis . . . has written of exercises such as the RAE that their “rituals are shallow because they do not penetrate to the core.”

I have yet to meet anyone who seriously believes that the RAE panels—underpaid, under pressure of time, and needing to sift through thousands of scholarly works—can possibly do justice to the tiny minority of work that really is “world leading in terms of originality, significance and rigour.” But to expect the panels to do this is to miss the point of the RAE. Its roots are in the corporate, not the academic, world. It is really a “quality control” exercise imposed on academics by politicians; and the RAE grades are simply the raw material for Key Performance Indicators [KPIs], which politicians and bureaucrats can then manipulate in order to show that academics are (or are not) providing value for taxpayers’ money.

Imagine “needing to sift through thousands of scholarly works” in short order; what a bizarre process. There are many critics of the RAE; this essay is particularly worth reading because it connects the dots between corporate-speak and the new academic order:

Of all the management practices that have become central in US business schools and consulting firms in the past twenty years—among them are “Business Process Reengineering,” “Total Quality Management,” “Benchmarking,” and “Management by Objectives”—the one that has had the greatest impact on British academic life is among the most obscure, the “Balanced Scorecard” (BSC). On the seventy-fifth anniversary of the Harvard Business Review in 1997, its editors judged the BSC to be among the most influential management concepts of the journal’s lifetime. . . .

[T]he methodologies of the Balanced Scorecard focus heavily on the setting up, targeting, and measurement of statistical Key Performance Indicators (KPIs). Kaplan and Norton’s central insight has been that with the IT revolution and the coming of networked computer systems, it is now possible to expand the number and variety of KPIs well beyond the traditional corporate concern with quarterly financial indicators such as gross revenues, net profits, and return on investment. . . . Writing in January 2010, the British biochemist John Allen of the University of London told of how “I have had to learn a new and strange vocabulary of ‘performance indicators,’ ‘metrics,’ ‘indicators of esteem,’ ‘units of assessment,’ ‘impact’ and ‘impact factors.’”

Head notes that the “academic control regime with its KPIs will continue to apply as much to philosophy, ancient Greek, and Chinese history as it does to physics, chemistry, and academic medicine.” It’s easy to project the types of biases the system will create: don’t offend people who might be on the selection committee; focus work on what they can recognize as “world leading” (one wonders how many languages the assessors know); and, of course, avoid long-form books in favor of high-impact journal articles that any panelist can recognize as an advance in the field.

An Australian business school professor, Dennis Tourish, has criticized similar efforts in his country. In business schools, “world leading” appears to be what plays well in America:

The most lauded journals are based in the US, since this is the biggest market for management education and has first-mover advantage. These reflect the positivist and functionalist orthodoxy that dominates the discipline there. They pay relatively little attention to such problems in management theory and practice as exploitative working conditions, race or ethics. Non-US academics who wish to publish in such outlets – and few succeed – overwhelmingly have to adapt to their norms, practices and theoretical priorities to do so. . . . [emphasis added]

Elite journals also have a rejection rate, typically, of over 90 per cent. For some reason, this is used to justify the quality of a journal. It, therefore, becomes a target for others, convinced that competitive advantage can be obtained by copying the behaviours of their rivals. To achieve this, desk rejection is increasingly common. Editors have become judge, jury and, mostly, executioner. So much for the safeguards of peer review. If you dodge the bullet of desk rejection, an arduous obstacle course remains.

One British academic complains that “whether my article is any good, or advances scholarship in the field, are quickly becoming secondary issues.”

Head provocatively asks, “Might the scale of the global financial crisis, driven by the targeting mania of the Balanced Scorecard and by automated management systems, shake the confidence of those who think that these very same methods should be applied throughout the academy?” It’s a great question, for, as Amar Bhide has argued in A Call for Judgment, the “balanced scorecard” approach has wrought havoc in finance: decision-makers increasingly distant from actual borrowers put in place manipulable numerical standards that did little more than increase the volume of transactions (and, thereby, high-ranking managers’ bonuses). RAEs will drive a similar boom in back-scratching citation networks and point-scoring articles. But no matter how flawed a business method or personality may be, it seems to hold an irresistible lure for academic managerialists: witness the ascension of Lord Browne to lead a British education review after his smashing job at BP.

There will always be a tension between the autonomy of the academic enterprise and the need to subject it to the demands of markets and states for measurable benchmarks of productivity and efficiency. But the recent British and Australian efforts to rationalize the research enterprise risk turning society into a monoculture, where everyone is striving for more points (be they measured in money, esteem, or power). They remind me of Hobbes’s Leviathan, where he observes, of “A Restlesse Desire Of Power”:

I put for a generall inclination of all mankind, a perpetuall and restlesse desire of Power after power, that ceaseth onely in Death. And the cause of this, is not alwayes that a man hopes for a more intensive delight, than he has already attained to; or that he cannot be content with a moderate power: but because he cannot assure the power and means to live well, which he hath present, without the acquisition of more.

The assessment systems are ever-unstable rankings, not stable scores; one has to keep playing the game or get left behind. The bureaucratization of research “excellence” is becoming as much a power game as a knowledge game: those who define what counts as a KPI can, in turn, define what counts as a contribution to knowledge. Perhaps this process always went on, sotto voce, as individual scholars in individual studies decided what research programs to pursue, and what to let die; whom to cite, and whom to ignore. That old, quiet process had many biases and problems of its own. But it now appears charmingly decentralized and humane in comparison with the assembly line of assessment mobilized by RAEs and journal rankings.