The antidote to cranial rectal insertion in the academic world
On Twitter I recently came across this petition against the University of Warwick.
Apparently the University of Warwick is “undertaking a series of redundancy exercises predicated on the notion that successful grant applications are the key measure of performance and value for research staff”. If this is true (I’ve not found any corroboration of this yet, but please comment below if you can provide more information), it shows a remarkable level of stupidity by the staff responsible, and sets a dangerous precedent that other naive managers might follow.
They are being stupid because if they had looked at the literature on the subject, or asked people with experience of writing or reviewing grants, they would have discovered that grant success is not a measure of the quality (or “value”) of an academic, and it is a measure of “performance” only in the same sense that winning the lottery is a measure of one’s ability to predict the future. For some reason the academic community (in the UK at least) has created a mythology around our system of grant selection, so that we believe it is fundamentally meritocratic — that it is the best ideas that get funded, and that they are selected by an “expert panel” (another myth I discuss elsewhere).
In fact, most grant success is random. If not random, then it is at least correlated with factors unrelated to the quality of the proposed research, such as previous grant success or the track record of the team (“nothing succeeds like success”, after all). This is not to say that poor quality research gets funded, but rather that competition is now so great that all the applications being considered are of good quality. We convince ourselves that the system works because we see people we know to be real leaders in their field get funded, and we see some people we consider rather weak who do not. Reassuringly, when the UK research councils ran a test in which different panels ranked the same grant applications, most of those ranked at the very top or the very bottom were consistent across panels. But for the majority ranked in the middle, where the funding line is usually drawn… no consistency… essentially random.
What’s the danger of such a system? Well, the only danger is if you maintain a belief that the grant selection system is purely meritocratic — then you naturally see someone’s grant success as a metric of their quality as a researcher. When they win grants, you promote them. When they fail to win grants, you berate them. Worse, you judge their “value” on this basis, despite the outcomes being mostly random. In a lottery system, some people will win often, but most will lose, simply by chance (see my WinMoreGrants blog for a numeric example). Not surprisingly, senior staff don’t like this argument, because they achieved their senior status through winning grants. It is a self-perpetuating mythology. No senior academic would like to admit that their grant success came down to luck.
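The lottery argument is easy to make concrete with a quick simulation (the numbers here are purely illustrative, not real funding data): suppose 100 equally capable researchers each submit one proposal per round for 10 rounds, and every proposal has a flat 20% chance of success. Chance alone then produces a few apparent “serial winners” alongside a sizeable group who never win at all.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_lottery(n_applicants=100, n_rounds=10, success_rate=0.2):
    """Pure-chance grant 'lottery': every applicant is equally good,
    and each submission succeeds with the same fixed probability."""
    wins = [0] * n_applicants
    for _ in range(n_rounds):
        for i in range(n_applicants):
            if random.random() < success_rate:
                wins[i] += 1
    return wins

wins = simulate_lottery()
for n_wins, n_people in sorted(Counter(wins).items()):
    print(f"{n_wins} wins: {n_people} applicants")
```

Even though every applicant is identical by construction, the spread is wide: with these parameters, roughly one applicant in ten wins nothing in a decade, while a lucky few rack up four or five grants. Judge “value” by win count and you are rewarding noise.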
What seems to be happening at Warwick looks like something I have seen before. The senior academic-managers are unwilling, or unable, to make a judgement call on the overall contribution and quality of staff (possibly for fear of legal action), so instead they apply metrics that absolve them of any responsibility for the decisions. “Look, it wasn’t my decision,” the academic-managers can then say, “it was just what the numbers said.” It is an easy, but cowardly, route, in my opinion. Better to look at someone’s research outputs and quality of teaching, seek opinions from others, and assess their contribution holistically. It is a difficult task, but it is the more appropriate approach if redundancies ever need to be made (which should always, always be a last resort, of course).
Let’s hope Warwick is not a sign of a growing trend of stupidity.