The antidote to cranial rectal insertion in the academic world
There are 10 types of people: those who understand binary and those who don’t.
It’s an old one, but it’s worth retelling. 🙂
There are also two types of academic: those who like metrics and those who don’t. The former group think academic metrics are important quality indicators that allow objective comparison between individuals and groups. They also like to emphasise the importance of “excellence”, yet are rarely able to define it when asked; when they can, it is usually by reference back to the metrics. They are not afraid of “impact” in the REF, because they simply see it as another metric.
The second group reject all academic metrics as a matter of principle, and consider quality to be something beyond definition and certainly unquantifiable. They positively object to the use of platitudinous terms such as “excellence” or “world leading”. They typically reject the impact agenda wholeheartedly, perhaps for fear that it exposes their inability to explain their own value in straightforward terms (they do have value, just a lack of ability, or willingness, to say what it is).
Being conflict averse, I see merit in both sides.
I believe that metrics are valuable indicators when well chosen and properly understood – they can monitor progress, highlight problems and support decision-making. But when badly chosen and unwisely used, they become dangerous distractions and tools of managerial torture.
Metrics are a hot topic for discussion in UK university coffee rooms just now, because of the impending REF (Research Excellence Framework), but they are also a day-to-day factor in academic management. Journal impact factors, grant success rates and income attribution are all examples of common metrics that are, in my experience at least, generally poorly understood and badly used.
And then there is the h-index, which is especially interesting because it is to be considered by some REF panels. The h-index seems like such a good idea: a simple, single metric that captures what would otherwise be something rather complex. But that’s exactly the problem. It’s merely an index that captures a particular trend in citation rates. It doesn’t capture quality, nor does it capture creativity, inventiveness or independent thinking – all qualities I think most academics would consider important. A simple h-index doesn’t even normalise for number of authors.
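For the uninitiated, the calculation behind the h-index is trivially simple, which rather underlines the point. Here is a sketch in Python (the citation counts are invented purely for illustration):

```python
def h_index(citations):
    """Largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # rank papers so far each have >= rank citations
        else:
            break
    return h

# Invented citation counts for one hypothetical author:
# three papers have at least 3 citations each, so h = 3
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # prints 3
```

Notice that the function takes a bare list of citation counts and nothing else – the number of co-authors on each paper, or anything about what the papers actually say, never enters the calculation.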
So, for the benefit of the pro-metric academics, and to emphasise my point, let me propose an additional metric to be used in the REF, staff appointments and promotions (the first always influences the other two, after all). Why not use an index that quantifies intelligence, is well correlated with academic achievement and has a substantially longer evaluation history than the h-index? It’s called Intelligence Quotient, or IQ.
Yes, that’s what I propose: simply use IQ scores of academic staff as a means of distributing national Higher Education resources. The funding then goes to the HE Institutions with the cleverest staff. Fair, transparent, and appropriate for institutes of learning, don’t you think? We could also use it as a metric for appointing and promoting staff.
You may think the idea absurd, but how much more absurd is it than using h-indices (or impact factors, or grant success rates, etc, etc)? IQ scores are not correlated with creativity or problem-solving skills, but neither is the h-index. And IQ tests don’t test for emotional intelligence – but when has that ever been an issue in academia anyway? Universities are meant to be institutes of academic cleverness, and IQ tests do test for that.
Metrics are useful tools. I wouldn’t drive without a dashboard. I wouldn’t use a cash machine without being able to check my balance. I calculate my average grant success rate because it helps me plan ahead. I regularly check to see how many visitors I have to this blog.
The problem with metrics arises when people attach value to the metrics in themselves. When we mistake the indicators for a direct measure of some elusive property, rather than treating them as heuristics to help us monitor our own progress, we risk losing sight of the important things – the things we are really trying to achieve.
You will call me “diplomatic” or “a fence-sitter” depending on whether or not you share my aversion to conflict.
I once saw a paper with so many authors that the average contribution of each was about 12 words. I’m sure they were all vital contributors, of course.
To be fair, I’ve not seen any studies on the correlation, so it is more correct to say that there is “no evidence that the h-index is correlated with creativity”.