Assessing scientists – a crowdsourcing approach?

As the REF draws closer, there are ongoing debates about how we assess scientists and their merits. Such assessments are critical for hiring and promotion, and we are all in agreement that the current system, by which scientists are assessed by the impact factor of the journals they publish in, is not suitable, for reasons outlined by Stephen Curry and picked up again recently by Athene Donald.

Since I’ve been involved quite a bit recently in promotions and hiring, I’ve thought about this a lot. The problem is that we are trying to measure something intangible, and impact factor is the only number we have available – so naturally we use it, even though we all know it’s rubbish.

OK, so here’s an alternative idea. It’s totally off the top of my head and thus doubtless deeply flawed, but any ideas are better than none, right? And the impact factor discussion, at least in the last few months, has been remarkably devoid of new ideas.

The reason we don’t like impact factors is that we feel they are missing the essence of what makes a good scientist. We all know people who we think are excellent scientists but who, for whatever reason, don’t publish in high impact journals, or don’t publish frequently, or aren’t very highly cited, or whatever. But we trust our judgement because we believe (rightly or wrongly) that our complex minds can assimilate far more information, including factors like how careful the person is in their experiments, how novel their ideas are, how elegant their experimental designs are, how technically challenging their methods are, and so on – none of which is reflected in the number or impact factor of their publications.

My idea is that we crowdsource this wisdom and engage in a mass rating exercise where scientists simply judge each other based on their own subjective assessment of merit. Let’s say every two years, or every five years, every scientist in the land is sent the names of (say) 30 scientists in a related area, and is simply asked to rank them, in two lists: (1) how good a scientist you think this person is, and (2) how important a contribution you think they have made to the field to date. Each scientist is thus ranked 30 times by their peers, and they get a score equal to the sum or average of those 30 ranks. A scientist who made the top of all 30 lists would be truly outstanding, and one who was at the bottom of all 30 probably unemployable. Institutions could then decide where to draw the boundaries for hiring, promotion etc.
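To make the arithmetic concrete, here is a minimal sketch of how the scores could be tallied. The names, the tiny three-person lists, and the choice of averaging (rather than summing) the ranks are illustrative assumptions on my part, not part of the proposal itself.

```python
# Minimal sketch: compute each scientist's average rank across all the
# peer-submitted lists that include them (1 = top of a list; lower is better).
from collections import defaultdict
from statistics import mean

def average_ranks(rankings):
    """Each ranking is an ordered list of names, best first."""
    positions = defaultdict(list)
    for ranking in rankings:
        for position, name in enumerate(ranking, start=1):
            positions[name].append(position)
    return {name: mean(pos) for name, pos in positions.items()}

# Hypothetical example: three rankers each order the same three scientists.
rankings = [
    ["Alice", "Bob", "Carol"],
    ["Alice", "Carol", "Bob"],
    ["Bob", "Alice", "Carol"],
]
print(average_ranks(rankings))
# {'Alice': 1.33, 'Bob': 2.0, 'Carol': 2.67} -- Alice tops the aggregate list
```

In the full scheme each person would appear on roughly 30 lists rather than three, but the tallying is the same.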

This would all be done in confidence, of course. And a scientist’s own rank wouldn’t be released until they had submitted their ranking of the others. It would be relatively low cost in terms of time, and because specialists would be ranking people in a related area, they would be better placed to make judgements than, say, a hiring committee. In a way it is like turning every scientist into a mini REF panel.

Comments on this idea, or indeed better ideas, welcome.


3 Responses to Assessing scientists – a crowdsourcing approach?

  1. Just found this post now via your more recent ‘Letter to the Editor’ post. Apologies for being a month late to the party!

    It’s an interesting idea, but it strikes me as one that’ll be hard to get off the ground. Also, I imagine many will be uncomfortable with the idea of rating the scientist rather than the work of the scientist. It’s a little bit like attacking the person vs attacking the argument in a debate.

    We are actually doing something similar to this idea with Publons.com. Our focus is on evaluating the work of the scientist, but the general idea is the same — crowdsourcing the evaluation of academic research in order to, among other things, provide superior metrics to the impact factor.

    Our R&D team are currently pushing out a series of blog posts on this topic — the impact factor and alternative methods of evaluating research — that you might be interested in. Here’s the first in the series:
    http://blog.publons.com/post/47244688262/why-would-we-want-another-way-of-evaluating-scientific

    Cheers,
    Daniel Johnston
    Co-founder of Publons.com

    • Jesse Czekanski-Moir says:

      Re: ranking the scientist vs. ranking the ideas:
      While the object of a debate is to present the best argument, the object of a hiring or tenure decision is to hire/retain the most desirable scientist. Thus, it makes sense to have a ranking system that uses the scientist as the “unit of selection.” While systems like publons.com or F1000.com might be useful for evaluating individual contributions, they still don’t provide a holistic measure of the scientist. Some would argue that scientists are more than just the sum of their pubs.

  2. Dean says:

    I love this idea!
    But it would only work in fields that are so small that everyone knows each other well. Off the top of my head, I can think of a few brilliant scientists: some are well-known, but others are young, working in small labs on long-term, finely detailed projects that take forever to code/analyse… I imagine that few people would know who all the young stars in their field are. So it would probably be better if young scientists could be judged on the quality (not the quantity) of their research. But I like the idea…
