The h-index – what does it really measure?

OK, I’ll admit it, I look at people’s h-indices. It’s a secret, shameful habit. I do it even though I know I shouldn’t; I do it even when I’m trying not to. I sometimes do it to myself, and then get depressed. You probably do it to yourself too. But here is why you shouldn’t (or not too much).

The h-index, for those who really haven’t heard of it, is the latest in the panoply of torture instruments that are wheeled out from time to time to make the lives of scientists ever more miserable. It was invented by the physicist Jorge E. Hirsch to provide a better method of measuring scientific productivity. Scientists produce papers, and papers get cited by other scientists, but it is hard to judge someone on bald statistics like how many papers they publish or how many citations they have accrued, because these quantities are so fickle – one person might publish vast amounts of rubbish, and another might have one really notable paper that garnered quite a few citations, but that doesn’t make either an all-round good scientist. Maybe not one you’d want to hire, or promote. So Hirsch came up with the idea of a measure that takes into account both the number and the citedness (impact) of a person’s papers, and provides an index that more fairly reflects their sustained productivity over a period of time.

It is very simple to calculate – you line up a person’s publications in order of how many times each has been cited and then work your way along the list, starting with the most-cited paper and counting as you go. Eventually you reach the last paper whose position n in the list is matched by at least n citations – that n is your h-index. A quicker way of thinking about it is: with each additional paper in your list the rank goes up and the citations go down, and the point where those two curves cross is the h-index. Just about every practising scientist can tell you their h-index* more quickly than their shoe size**.
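If you prefer code to prose, the whole calculation fits in a few lines. Here is a minimal Python sketch (the function name and the example citation counts are invented for illustration):

```python
def h_index(citations):
    """h-index from a list of per-paper citation counts."""
    # Rank papers from most-cited to least-cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    # h is the largest rank n at which the nth paper still has
    # at least n citations.
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Someone with papers cited 100, 60, 8, 5, 4 and 3 times has h = 4:
print(h_index([100, 60, 8, 5, 4, 3]))  # prints 4
```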

The h-index is highly quantitative and can be generated easily, so of course it is a great way of quickly getting a handle on someone’s productivity. Or so it seems on the surface. A closer look reveals a slightly different story. I was given pause for thought when I discovered that someone in my department had an h-index of 49, and had to wipe tears from my eyes when I found someone else’s standing at 60-something. Think about it: that person had so many papers that even their 60th paper had been cited 60 times! That’s quite an output.

Do people like that really work that much harder, or more productively, than the rest of us? I was a little bit sceptical, but I noticed h-indices were cropping up ever more frequently on people’s CVs, so I thought I ought to look a little more closely at how this thing behaves. It’s a strange beast, the h-index. For one thing, it inevitably gets bigger as you age (unless your work is really ineffectual), so people who are just starting out in their scientific lives have small, erratic and uninterpretable h-indices. I quickly learned not to use it to evaluate early-career researchers. It also turns out, because of its dependence on citations, to vary a lot across disciplines. Someone working in an area of clinical relevance may find that a rather modest paper (scientifically speaking) is cited by hundreds or thousands of clinicians, while someone working on (for example) social communication in Apis mellifera may struggle to get their citations into the tens or hundreds. Is the latter person less productive or less useful? To some people perhaps yes, but tell that to the Nobel Committee that awarded the 1973 prize to Karl von Frisch for his work on the waggle dance in the, you guessed it, honey bee Apis mellifera.

The h-index also turns out to work very much in favour of people who collaborate a lot. So, someone who is a minor author on a bunch of other people’s papers can rapidly accrue a big h-index even though their own output, from their own labs funded by grants they have obtained themselves, is mediocre. This can be a particular problem for women, who may have fewer networking opportunities in their child-rearing years, or for researchers in small far-flung countries. The association of the h-index with collaborativeness also produces rather big inequalities across disciplines. Some disciplines are inherently collaborative – if you are a molecular biologist who has made a mouse with a useful gene mutation, or a computational biologist with analytic skills that many different scientists find useful, then you may find yourself on a great many papers. If the h-index is the only measure of output, then molecular and computational biologists are doing a lot more work than some of the rest of us. Which of course we know isn’t true.

Why does this matter? Well, it shouldn’t, because much of the above is obvious, but it does, because even though we can see the h-index has limitations, we still can’t help being influenced by it. We’re just hairless apes, barely out of the forest, and size matters to our little brains. If someone has a big h-index then we can tell ourselves until we are blue in the face that it’s just because they collaborate a lot, and have fingers in many pies, but we are still impressed. And conversely for someone whose h-index is a little… well, shrivelled. This matters because hiring and promotion committees are also populated by these little hairless apes, and are affected by such things whether they think they are or not. But the h-index is information of a sort – it’s better than nothing, and better than a pure publication count or a pure citation count – and so it’s not going to go away, at least not tomorrow. We just have to keep reminding ourselves, over and over: it’s not so much how big it is, but rather what you do with it, that matters.

*I’m not telling you mine, you can look it up yourself!

** 39


Responses to The h-index – what does it really measure?

  1. Dennis says:

    You are right. Overall, the h-index is too simple a measure. As always, a metric that is supposed to give a quick read on a complex issue needs a complex formula, with many carefully weighted factors applied to each citation before it is fed into the index.

    Take, for example, the problem of the age of a paper. In my opinion the citation count should be leaky, to counteract the accumulation of citations that happens simply because a paper has been out longer – maybe by applying a shrinking multiplication factor for each year the paper is out. You could add another shrinking factor for each time a paper is cited by the same authors, to reduce the effect of buddy-citations (a sketch of this follows the comment).

    Then there is the effect of the size of the field, which you touched on. How can you correct for the number of scientists working on the same topic? You could, for example, cluster all published papers by keywords. The cited paper will fall into one cluster, and you can calculate the number of citations within this cluster relative to the total number of papers in this cluster. With this method you can further analyze the citations outside of the paper’s own cluster to estimate its impact on other fields – and then you can take into account how close the clusters are. The further away the clusters, the higher the value of the paper for general science. This can be made as complex as you wish (see the second sketch below).

    The collaboration effect, where middle authors get too much credit: author lists are crap. They are a terrible ‘solution’ for what should be a contributor list. I suggested somewhere else that the author list should consist only of the people who actually wrote the thing. In addition, there should be a list of people who contributed to the study, with a formalized addendum for the kind of contribution. That list can replace the current author list. You can then weight a paper’s value for the index by that contribution (third sketch below).
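To make Dennis’s first suggestion concrete, here is a minimal Python sketch of a ‘leaky’ citation count, assuming invented decay rates and discounting each successive self-citation a little more (none of this is a standard metric):

```python
def leaky_citations(n_cites, n_self_cites, years_out,
                    yearly_decay=0.9, self_decay=0.5):
    # Every citation leaks value for each year the paper has been out.
    age_weight = yearly_decay ** years_out
    # Citations by other groups count at the age-adjusted weight...
    others = (n_cites - n_self_cites) * age_weight
    # ...while each successive self-citation is discounted further,
    # to damp buddy-citations.
    selfish = sum(age_weight * self_decay ** k
                  for k in range(1, n_self_cites + 1))
    return others + selfish

# A ten-year-old paper with 50 citations, 10 of them self-citations,
# keeps roughly 14 "effective" citations under these made-up rates.
print(round(leaky_citations(50, 10, 10), 1))
```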
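The field-size idea can be sketched in the same spirit. The clustering step itself is left out, and the cluster-distance function is assumed to exist; every name and weight below is an illustrative assumption:

```python
def cluster_adjusted_impact(cites_by_cluster, home, cluster_size, distance):
    """cites_by_cluster: {cluster: citations of this paper from that cluster}
    home: the paper's own keyword cluster
    cluster_size: {cluster: number of papers in that cluster}
    distance: assumed function giving how far apart two clusters are
    """
    # Within-field citations are judged relative to how crowded the field is.
    within = cites_by_cluster.get(home, 0) / cluster_size[home]
    # Citations from other clusters count more the further away they come from.
    across = sum(n * (1.0 + distance(home, c))
                 for c, n in cites_by_cluster.items() if c != home)
    return within + across
```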
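And the contributor-list proposal could feed the index along these lines; the roles and their weights are entirely invented, since no such standard exists:

```python
# Hypothetical contribution roles and weights, for illustration only.
ROLE_WEIGHT = {"wrote": 1.0, "designed": 0.8, "analyzed": 0.6,
               "provided-materials": 0.2}

def credited_citations(citations, my_roles):
    # Credit a paper's citations in proportion to the strongest
    # formally declared contribution.
    share = max((ROLE_WEIGHT.get(role, 0.0) for role in my_roles),
                default=0.0)
    return citations * share

# A middle author who only provided materials gets a fifth of the credit:
print(credited_citations(100, ["provided-materials"]))  # 20.0
```

Credited counts like these could then be dropped straight into the h_index function sketched earlier in the post.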


  2. H. says:

    Nice piece! Just one thing that I thought about when reading the following:

    “The h-index also turns out to work very much in favour of people who collaborate a lot. (…)This can be a particular problem for women, who may have fewer networking opportunities in their child-rearing years, or for researchers in small far-flung countries.”

    Or perhaps:

    -This can be a particular problem for men, who have a lesser desire and/or skill to collaborate/network.

    -This can be a particular problem for introverts, who have a lesser desire to collaborate/network.

    -This can be a particular problem for ‘non-slick-manager-type’ researchers, who have a lesser desire and/or skill to engage in the superficial contacts upon which networking might especially be based (any research on this?)
