How to structure a presentation

Once you have done some research and have some results, it is time to present these to the outside world. This might be as an abstract, a poster, a data-blitz talk, a full talk, a journal article, a grant proposal … it doesn’t matter: the principle of good scientific presentation is always the same, which is to follow the zoom-in/zoom-out rule:
(1) The broad question your research is addressing
(2) The specific hypothesis behind your experiment
(3) The prediction(s) made by the hypothesis
(4) An explanation of your experiment and how it tests these predictions
(5) Whether the results do or don’t conform to the prediction(s)
(6) Whether the hypothesis is therefore supported
(7) Implications for the broad question your research is addressing
When students first start presenting, they often assume that everyone else in the entire scientific world is more knowledgeable than they are and doesn’t need to be told the basic background. In fact, the reality is usually the opposite: most of your audience will know far less about your topic than you do. So you need to make it easy on them, as they will mostly be grappling with unfamiliar concepts while they try to wade through all your material (and it does unfortunately feel like wading, sometimes through treacle).
Question: Your audience may not know much about your research area, so it is important to start with some broad background – what is the general question behind your research? Is it “How is memory formed?”, “How do we navigate?”, or “How do we recognise objects?”
Hypothesis: You then need to isolate a part of your broad question in the form of a specific hypothesis. A hypothesis starts with a body of knowledge, and addresses a gap in it. Scientists think they know what ought to go in that gap but they aren’t sure – that guess is the hypothesis. Note that a hypothesis is usually not the same as a prediction. A hypothesis might be something like “Memory is formed by changes in connection strength between neurons”. It is relatively general, but less general than the overarching question.
Prediction: Research is about testing hypotheses, and to do this the slightly vague hypothesis needs to be concretized in the form of a specific prediction: if my hypothesis is true, then x should happen. If memory is formed by changes in connection strength, then blocking these changes should impair memory. It is possible to make many predictions from a hypothesis – the ones you want are (a) testable, and (b) bi-directional. By that, I mean that if the prediction occurs then the hypothesis is supported BUT ALSO if it doesn’t occur then the hypothesis is refuted. Many proposals I see satisfy the first of those but not the second – if the prediction occurs, great, but if it doesn’t then we aren’t any further forward. Hopefully this didn’t happen to you.
Experiment: Then you need to describe your experiment, with the methodology, and explain how it is testing for the prediction.
Results: Do they or don’t they conform to the prediction?
Conclusion about the hypothesis: Is the hypothesis therefore supported or refuted?
Consequence for the big question: Often forgotten, but very important
This all seems rather simple and obvious, but it is amazing how many of these steps are often omitted. It is helpful to the reader or listener to have them made very explicit – as headings on your poster, in bold font, outlined, in separate paragraphs, or in some other way made visually distinct – to save them mental effort and help them concentrate on your actual results.


Here’s the take-home message: zoom in from the broad question to your specific experiment, then zoom back out again to the implications.


How to write a winning grant proposal

I have been doing a fair bit of grant-reviewing lately, as well as having won a few grants of my own in the past, and have been slowly building up a template of what a winning grant looks like. Here are some thoughts about what does and doesn’t work.

First – think about who you are writing for

Your grant is written for two audiences: a panel of non-experts who have a large number of grants to read on a wide variety of topics, and a handful of highly specialist and critical reviewers who know all the ins and outs of the field and can instantly see right to the heart of any conceptual or technical problems with your proposal. These two groups require an entirely different approach, but they are EQUALLY IMPORTANT. Far too many scientists write only for the referees, but it is the non-specialist panel member assigned to the grant – the “proposer” or “introducer” – who has to be really won over. Furthermore, this winning-over has to happen in the first few lines. Right from the get-go they have to think “wow, this seems like it’s going to be cool”, and the last thing they have to be thinking as they read the final line is “wow, that was cool!” I not-infrequently see panels overturn grants that all the reviewers loved. That’s because the panel have the Big Picture and the reviewers don’t, so you really need to excite the panel (as well as the reviewers).

A lot of scientists really have trouble communicating with non-specialists. I guess the very qualities that make them good scientists – outstanding memory, high attention to detail – make it difficult for them to step back and see their project in soft focus and in the context of all the other scientists and all the other projects. Think to yourself about why Josephine Bloggs should want to fund your proposal when in the same pile of grants is one offering to save the bees, another offering to cure cancer… etc. In her pile of ten grants, she will probably only want to argue for one or two. How are you going to make it yours? It is a huge advertising problem. You have to step back from your myopic highly specialist interests and try and think about how to hook the casual and as-yet-un-committed reader.

ACTION: Go back to the grant you are currently writing and make sure a clueless non-specialist would be able to see why your project is more exciting than saving the bees.


Keep your proposal simple. Introducers, with their pile of a dozen or more grants to present, will only be able to keep one main thing about your proposal in the forefront of their mind at the panel meeting. Make that the deliverable (i.e. the answer to the burning question), and make it a cool one. That deliverable should appear right at the start of your proposal, and again at the end, with the middle simply there to explain it.

It’s not all hopeless – granting bodies really want to fund proposals. They are relatively easily pleased if you try hard!

ACTION: Go back to your proposal and simplify it. Halve the number of words, add some diagrams, replace dense paragraphs with bulleted lists. Then repeat.

What is your question?

A good, catchy proposal offers to answer a question. Humans are naturally curious, so if you pique the interest of your assessor with a question and then outline how your project will answer it, hopefully they will think “Yes! I want to know the answer to this question too!” Also, make sure you present your question by the END OF THE FIRST PARAGRAPH. You have to hook them early. That is challenging, because they don’t have much info yet, but it can be done – must be done, or their attention will wander and their excitement level will plummet. Once it has plummeted, it is practically impossible to raise it again.

On a related note, at some point early on you need to state your hypothesis. It’s amazing how many proposals don’t have a hypothesis. Often what they do instead is “aim to characterize”. That is my most hated phrase in proposal-space. Please don’t aim to characterize something! That approach falls into the category of “stamp collecting”, which is to say, worthy but dull. It may well be that you aim to characterize something, but try and phrase it as though you are hunting down the answer to a burning question. And have some expectation of what your characterization (which you have called something else) will produce. Otherwise, you have committed another great sin: organising a “fishing expedition”. Your proposal must not be seen as either stamp collecting or a fishing expedition. It is aiming to test a hypothesis in order to answer a burning question, which the reader is now desperate to know the answer to.

ACTION 1: Go back to your proposal and make sure the question is stated by the end of the first paragraph.

ACTION 2: Find the place in your grant that says “test the hypothesis that” and highlight it. Can’t find that phrase? Then put it in! And highlight it.

Your proposal needs to represent a significant step forward

There is a triumvirate of deadly phrases which, if they occur to your assessor, will kill your proposal. We have seen “stamp collecting” and “fishing expedition”. The third one is “incremental”.

An incremental study is one that edges knowledge forward but not in a big way. It is closely related to stamp collecting, which is just accumulating variations on a theme (e.g., we have shown that protein x is involved in process a and now we want to test protein y). An incremental study just slightly stretches an already-defined problem-space rather than pushing the envelope and expanding the space into new problem domains.

Of course, most science is incremental, because you have to start from a sound base of knowledge. The trick is to make it seem like a huge leap forward – it’s all in how you paint the picture. Make it seem like there is a qualitative as well as quantitative difference between what you are proposing to do and what has been done before. You are opening new doors, which will lead to huge new discoveries. What door is your proposal going to open?

ACTION: Go back to your proposal and identify the Great Leap Forward. Put that at both the beginning and the end of your proposal.

Your proposal needs to be win-win

Another fatal flaw I see in many proposals is a forking path of possible outcomes, one or more of which lead to dead ends. For example, you aim to test whether your hypothesis is true or false – you think it’s true (or it wouldn’t be your hypothesis) but it might be false, in which case we are no further forward. Of course, you as the applicant think such dead ends are unlikely – but if they are even possible then your grant is dead in the water. Because in that pile of competing grants are others that all have a win-win structure – the actual outcome is uncertain, but that there will be an outcome is certain. Nothing terrifies a funder more than the possibility that they could take a quarter of a million pounds of taxpayers’ money (and remember, some of these taxpayers can barely afford hot water – public money is a precious commodity) and essentially flush it down the toilet.

You need to try and structure your project so that it distinguishes between two alternative hypotheses, both of which would tell us something interesting. It can be hard to do this but it’s usually possible. If it’s really not possible then your only recourse is to provide so much pilot data that the assessor is reassured that you will get the result you hope for, but it is still a weaker grant (because there is still doubt – if there were no doubt then the experiment wouldn’t need doing at all).

ACTION: Go through your grant and make sure it is (at least mostly) win-win.

Avoid self-aggrandizement

There is usually a place where you have to outline your past achievements and why you would be the best person to do this work. It is really important to get the flavour of this right. “My lab has experience with xxx and was one of the first to yyy” is a graceful way to indicate world-leading status. “My lab is renowned for its world-leading xxx and has, importantly, done yyy, which revolutionized thinking…” is irritating. The difference is that in the one case the reader is presented with facts and makes their own judgement about excellence, while in the other they are told what to think, which is likely to induce them to think the opposite. In a nutshell – avoid describing your own work as prestigious, groundbreaking, world-leading, influential, renowned, etc., and just say what you did.

Your proposal needs a conclusion

It’s amazing how many proposals end in mid-air, as it were, having finished outlining some boring and minor experimental detail. When your assessor closes the page you want them to have that “wow, that was cool!” thought foremost in their mind, not “meh”. So you need a conclusion, in which you reiterate why your project is cool and answers the burning question you planted into the reader’s mind at the start.

Don’t make it too long – space is precious and you already said all that stuff. You just want a reminder, one last time, so that it is the thing they walk away with.

ACTION: Write “Conclusion” at the end of your proposal, followed by a (few lines of) conclusion.

Choose your referees carefully

It is usual to be invited to suggest potential referees for your proposal. Choose your referees carefully. Of course, you want to recommend people who will be sympathetic to your work, but if you choose people with an obvious partiality – a former PhD or postdoc supervisor for example – this can lead the committee to discount their reference, or at least downweight it, and thus deprive you of an important source of validation.

Bear in mind that people who seem to like your work when you meet them face to face may actually provide highly critical references under their cloak of anonymity, so avoid always using the same small pool of suggestions (unless you tend to always be successful!). Also, do not necessarily hold back from recommending someone you know to be insightful but critical, if their expertise is highly relevant – their opinion will hold greater sway with a committee, and they may also have some useful suggestions for how to improve your project. If they shoot it down, then perhaps it deserved it (an alternative is to get these people to critique your grant before submission!).

It’s worth remembering that a grant is one way of promoting your ideas to the world – a few people in your field are forced to read about your ideas and that helps (if the ideas are good ones) to promote you. Even if the grant is not funded, you can console yourself with the thought that at least it achieved a small piece of communication – a grant proposal is never entirely wasted effort. So think about who in the referee world you want to inject your ideas into.

Give your grant to a non-specialist to critique

Once you have honed your grant to your satisfaction you probably think it’s now perfect. It probably isn’t. Really. Remember that myopia that scientists have, and that you almost certainly have too?

Give your proposal to someone you respect but who works in a different area. Ask them to be highly critical, and take their comments very seriously. If they have critical comments, don’t brush these off on the basis that they are non-experts, because the funding panel are mostly non-experts too. Remember you need to impress both audiences. I have completely rewritten a grant on the basis of a single almost throwaway comment from an insightful non-expert, and I’m convinced that that re-write was what made the final product successful.

ACTION: Give your (now-simplified) proposal to people outside the field for critical review. Take their comments very seriously.

That’s it for now – good luck!


How to make high-res figures for a paper

So you’ve written your paper and made beautiful PowerPoint graphs of your data, and some explanatory schematic diagrams, and it’s all good to go. You log onto the submission website, upload your manuscript and then start on the figures.

At this point you encounter the requirements for the figures, which, admittedly, you should have checked earlier, but you didn’t. So now you discover that the figure has to be 600 dpi (dots per inch) and exactly 8 inches wide, and in bitmap form: JPEG, TIFF, etc. (a bitmap is an image encoded as coloured pixels, as opposed to vector graphics, which are a set of rules for drawing lines and shapes).

OK, well, no problem, your figure is in PowerPoint so you copy the figure and paste it into your bitmap editor (such as Photoshop, or – my favourite – paint.NET) and re-size it to 600 dpi with the image sizing tool. But now it becomes only two inches wide! So you change the image size to 8 inches but now it looks awful – blurred and fuzzy.

For a given image, there is an unavoidable trade-off between resolution and size – the number of pixels is fixed, so if you increase the resolution you decrease the physical size, and vice versa. This is obvious when you think about it, because to make an image bigger the program has to insert extra pixels to maintain the dpi, and it has to “guess” (interpolate) what colour these should be. To enhance both resolution and size elegantly you need more pixels to start with – a bigger image. OK, so you go back to PowerPoint and enlarge everything. But now the graphs look awful – the lines and text didn’t scale accordingly and now they look tiny and spidery. So you spend half an hour re-sizing all the lines and enlarging all the text, having to re-position quite a few things, and then try again. Now it works.
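To see the arithmetic concretely: pixels are the fixed currency, and physical size in inches times dpi equals pixels. Here is a minimal sketch in Python (the 1200-pixel paste width is an invented illustration):

```python
# Pixels = inches x dpi, so pixel count is what really limits you.
target_dpi = 600       # journal requirement
target_width_in = 8    # journal requirement

# Pixels needed for a clean 8-inch, 600-dpi figure:
print(target_width_in * target_dpi)   # 4800 pixels

# Suppose the bitmap pasted from PowerPoint is only 1200 pixels wide.
# At 600 dpi those pixels cover just two inches:
pasted_px = 1200
print(pasted_px / target_dpi)         # 2.0 inches
```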

Dear god, you have twelve figures… is there a better and faster way to achieve the same thing? Yes there is! That is to make use of an image format in PowerPoint called Enhanced Metafile (EMF). EMF has the advantages of both a bitmap and vector graphics – it lets you scale everything up without altering the relative sizes of lines and text, but it “knows” that lines and text should remain sharp so the re-scaling preserves the nice crisp clarity of your graphs. So here’s a quick way to make a nice, high-res. picture:

1. Select everything that’s to be part of it (if it is the whole page, ctrl-a is a quick way to select-all)  and copy (ctrl-c).

2. Create a new blank slide and paste the copied figure into it as an Enhanced Metafile. You can do this with paste-special or (quicker) ctrl-alt-v, then select EMF. Now your figure will be pasted in as a single image.

3. Now you need to rescale it. You generally need to do this quite a bit, so slide the zoom slider all the way down to the left so the slide shrinks relative to the surrounding workspace. Move your image up to the top left-hand corner of the workspace, and then rescale it by dragging the bottom right corner until the image takes up most of the workspace (the exact amount of rescaling doesn’t matter at this stage so long as it is bigger than you will ultimately need). Then select the image and copy it.

4. Open a new file on paint.NET (or whatever), set the resolution you want and paste your figure onto the new page it makes for you. You might run into a problem if the rescaled image is too big (in megabyte terms) for your clipboard (the thing that holds the image in “working memory” between the copying and pasting). If this happens and you get a clipboard error, you may need to save the image as a bitmap straight from PowerPoint onto disk, then insert from the saved file into paint.NET. PowerPoint will only save that part of the image that is within the slide boundary, so you need to enlarge the slide (maybe create a new file so as not to mess around with the one your graphs are in) and set a custom slide size as big as you can, making sure your image fits within its boundaries.

5.  Now scale the image down to the size you want (if it was already smaller, you need to go back to PowerPoint and enlarge the EMF image a bit further). Then save in the format you need. Voila! A nice, high-res picture.
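If you have a dozen figures and would rather script that last step, the final resize-and-save can also be done with Python’s Pillow library. This is only a sketch, assuming you have already exported an oversized bitmap from PowerPoint; the filenames are placeholders:

```python
from PIL import Image

TARGET_DPI = 600
TARGET_WIDTH_IN = 8  # the journal's required width

img = Image.open("figure1_oversized.png")  # hypothetical exported file

# Pixel width the journal needs; scale the height to keep the aspect ratio.
target_w = TARGET_WIDTH_IN * TARGET_DPI    # 4800 pixels
target_h = round(img.height * target_w / img.width)

# Downscaling with a high-quality filter keeps lines and text crisp
# (this only works well if the source is larger than the target).
resized = img.resize((target_w, target_h), Image.LANCZOS)

# Save as TIFF with the dpi tag set, so the file reads as 8 inches wide.
resized.save("figure1_600dpi.tif", dpi=(TARGET_DPI, TARGET_DPI))
```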


Redefining “professor” won’t fix the leaky pipeline

The leaky pipeline is a well-described phenomenon in which women selectively fail to progress in their careers. In scientific academia, the outcome is that while women make up more than 50% of undergraduates across science as a whole, they make up fewer than 20% of professors.

The reasons are many and varied, but three days ago, a group of Cambridge academics proposed a possible solution. The problem, they said, is this:

“Conventional success in academia, for example a promotion from reader to professor, can often seem as if it is framed by quite rigid outcomes – a paper published in a leading journal, or the size and frequency of research grants – at the expense of other skill sets and attributes. Those engaged in teaching, administration and public engagement, to name just three vital activities, can be pushed to the margins when specific, quantifiable outcomes take all….Women value a broader spectrum of work-based competencies that do not flourish easily under the current system.”

And the solution, they ventured, might be this:

“…we think there are opportunities to build into assessment processes – for example, academic promotions – additional factors that reward contribution from a much wider range of personality and achievement types….A broader definition of success within the sector will bring benefits not only to women – and indeed men – working in universities, but also to society as a whole.”

It’s hard to argue with the notion that success can take many different forms. However, a jaundiced reading of the dons’ proposal looks like this: “To become a professor you need to do a lot of research. Women don’t do as much research because they take on other, service roles within the academic community. Therefore we should redefine “professor” to include not just research, but also service roles”.

Now, I am all in favour of fixing the leaky pipeline, but this proposal in its simplest reading seems problematic to me. It seems – to stretch the pipeline analogy to breaking point – a bit like re-defining “pipe” to include the trench in which it sits. The obvious outcome of the proposal, were it to be implemented, is that the title “professor” comes to lose its meaning of “someone who has attained heights of academic excellence” and will start to mean “someone who has achieved a lot in their university job”. How, then, will we signify the heights-of-academic-excellence person? Well, two scenarios present themselves. One is that the community finds a new word. The other is that the old word – “professor” – acquires contextual modifiers, the most obvious being “male professor” (someone who has attained the heights of academic excellence) and “female professor” (someone who has done a lot of community service and is generally a good egg). Those female professors who have excelled in research will go unrecognised. That can’t be good for female science careers.

If women are failing to become professors because they are not meeting research-based criteria (and I would dispute that there is much evidence for this – certainly at UCL we see no differences in male vs female research productivity except at the very very top – the super-profs) then we need to look at why this might be. If it’s the case that (some) women like to mix non-research activities into what they do, then we should definitely find ways to reward that. We could invent a new title that encompasses academic-related but non-research activities, and – more importantly – we should attach a hefty pay increment to the position. Money is money and it doesn’t have gendered connotations, and won’t be devalued if given to women. But the coveted title of professor should continue to mean what it has always meant – research excellence.


Is boycotting “luxury journals” really the answer?

Recently, the newly crowned Nobel laureate Randy Schekman caused a controversy by advising scientists to boycott what he called the “luxury journals” Science, Nature and Cell, which he claims create distorting incentives in the field. He said “Just as Wall Street needs to break the hold of the bonus culture, which drives risk-taking that is rational for individuals but damaging to the financial system, so science must break the tyranny of the luxury journals.” His argument is that pressure to publish in this small number of highly selective journals creates fashionable “bubbles” of research and also sometimes causes scientists to cut corners, leading to a higher retraction rate for these journals. He said “I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.”

This comment has had a predictable well-he-can-say-that-now reaction from commentators, who have not been slow to observe that Schekman himself has published dozens of papers in those same luxury journals; one outright called him a hypocrite. It might also be argued that, as editor of a rival journal – the Wellcome Trust’s highly selective (some might say elitist, perhaps even “luxury”) open access journal eLife – using a high-profile Nobel platform to disparage competitors is somewhat, well, questionable. The editor of Nature issued a defence of their editorial policy, saying they are driven by scientific significance and not impact or media coverage, while Science and Cell have so far maintained a dignified silence.

Undeterred by the controversy, Schekman subsequently issued an email invitation to scientists: “How do you think scientific journals should help advance science and careers?” Leaving aside questions of moral high ground with regard to the context of this discussion, he has a good point. The question of how journals influence the process of science is a valid one, and it seems that since this is now in the open, and receiving high profile attention, we should give it some thought.

So, folks, what do we think, as a collective? Are the Big Three really damaging science? Will boycotting them solve any problems?

Myself, I think not. Journals are business enterprises; they do what they do, and while they perhaps create incentives for us, it’s even more the case that we create incentives for them. We have decided, as a profession, to bow down to the Impact Factor god, and treat it as a proxy for scientific quality. Thus, quality journals that aim to market the best research will try and accumulate impact factor, and the most successful journals will get the best IF. Of course they will, and you would too if you were running a journal business. eLife does too, despite its lofty claims to be above all that. So does the UK’s national science rating enterprise, the REF, despite its claims to the contrary. We all bow down to the IF god, because right now there is nothing else.
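For reference, the single parameter in question is easy to state: a journal’s two-year impact factor for year $y$ is

$$\mathrm{IF}_y = \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}$$

– one number per journal, which is much of what makes it so seductive.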

This is a problem, because overweighting a single parameter is bad for complexity in a system. If you were trying to create a forest and selected only the fastest-growing trees, you would pretty soon end up with only pine. Science right now resembles a pine forest, academically speaking – it is overwhelmingly white, male, western and reductionist. Many of the planet’s best brains are locked out of science because we have failed to create an ecosystem in which they can flourish. If we want science to have the intellectual biodiversity of a rainforest we need to judge it by more parameters than just that single one.

OK so we need more parameters. Like what? Well here’s one more, and it offers something I think journals – to go back to Schekman’s email question – could usefully do. That is, to publish, alongside all the other details of the research, the one possibly most important detail – how much did that research cost the funder?

Currently, we assess only the output from a lab, without taking account of the input. We measure a scientist by how many S/N/C papers they publish, not by how many they published per taxpayer’s dollar (or pound, or whatever). I don’t know why we don’t consider this – possibly from the misguided belief that science should be above all that – but science is a closed-system, zero-sum game and there is only so much money available to fund it. The way things are at present, if a young scientist is successful early on in gaining a large amount of money – let’s say they win a prestigious fellowship – then they produce a large output (of course, because they had a lot of money) and are then confirmed as being a good scientist. Then people give them more money, and on it goes, and their output accelerates. If someone has less money – let’s say they got off to a slow start due to experimental misfortune, or are from a developing country, or they spent time having babies and accrued fewer grants meanwhile – then their outputs are fewer and lower impact (it takes time and money to produce a Big Three after all) and funding, and therefore output, slowly tails off.

If we rated scientists not just by raw output but by efficiency then we would get a very different picture, in which large, well-funded labs were no longer the only ones to look good. A small lab that was producing relatively few papers but spending hardly any money in doing so would look as good as a large, well-funded lab that – while it produced more papers – spent a lot more money in the process. If we rewarded efficiency rather than raw output we would remove the incentive to grow a group larger than the scientist felt comfortable managing. It would encourage thrift and careful use of taxpayer-funded resources. It would put low-budget, low-impact science such as psychophysics on the same esteem footing as high-budget, high-impact science like fMRI. The science ecosystem would start to diversify – we would become more than just pine trees. We would, I believe, ultimately get more science from our limited planetary resources.
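To make the idea concrete, here is a toy sketch of what such an efficiency measure might look like. Everything in it is hypothetical – the labs, the numbers and the papers-per-£100k formula are invented purely to illustrate the principle:

```python
# Hypothetical labs: (name, papers published, funding spent in pounds)
labs = [
    ("Large well-funded lab", 40, 2_000_000),
    ("Small frugal lab", 8, 150_000),
]

for name, papers, funding in labs:
    # A crude efficiency measure: papers per £100k of funding spent.
    efficiency = papers / (funding / 100_000)
    print(f"{name}: {papers} papers, {efficiency:.1f} papers per £100k")

# The small lab wins on efficiency (5.3 vs 2.0) despite its modest raw output.
```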

So, Dr Schekman, that’s my answer to what journals can do – publish not just the citations but also the efficiency of their articles. Let’s not boycott anything – let’s just change our assessment of what we think is the important measure here.


The h-index – what does it really measure?

OK, I’ll admit it, I look at people’s h-indices. It’s a secret, shameful habit. I do it even though I know I shouldn’t. I do it even when I’m trying not to. I sometimes do it to myself, and then get depressed. You probably do it to yourself too. But here is why you shouldn’t (or not too much).

The h-index, for those who really haven’t heard of it, is the latest in the panoply of torture instruments that are wheeled out from time to time to make the lives of scientists ever more miserable. It was invented by the physicist Jorge E. Hirsch to provide a better method of measuring scientific productivity. Scientists produce papers, and those papers are cited by other scientists, but it is hard to judge someone on bald statistics like how many papers they publish or how many citations they have accrued, because these quantities are so fickle – one person might publish vast amounts of rubbish, and another might have one really notable paper that garnered quite a few citations, but that doesn’t make either an all-round good scientist. Maybe not one you’d want to hire, or promote. So Hirsch came up with a measure that takes into account both the number and the citedness (impact) of someone’s papers, and provides an index that more fairly reflects their sustained productivity over a period of time. It is very simple to calculate – you line up a person’s publications in order of how many times each has been cited and then work your way along the list, starting with the most-cited paper and counting as you go. Eventually you reach the last paper whose citation count is at least as big as its position in the list – that position, n, is your h-index. A quicker way of thinking about it is: with each additional paper in your list the count goes up and the citations go down, and the point where those lines cross is the h-index. Just about every practising scientist can tell you their h-index* more quickly than their shoe size**.
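For the computationally minded, here is a minimal sketch of that calculation in Python (the citation counts are invented):

```python
def h_index(citations):
    """Largest n such that the nth most-cited paper has at least n citations."""
    cites = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for position, count in enumerate(cites, start=1):
        if count >= position:
            h = position  # this paper still has as many citations as its rank
        else:
            break         # citations have dropped below the rank; stop counting
    return h

print(h_index([100, 50, 12, 6, 5, 5, 1]))  # -> 5
```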

The h-index is highly quantitative and can be generated easily, so of course it is a great way of quickly getting a handle on someone’s productivity. Or so it seems on the surface. A closer look reveals a slightly different story. I was given pause for thought when I discovered someone in my department had an h-index of 49, and had to wipe tears from my eyes when I discovered someone else’s was 60-something. Think about it: that person had so many papers that even their 60th paper had been cited 60 times! That’s quite an output.

Do people like that really work that much harder, or more productively, than the rest of us? I was a little bit sceptical, but I noticed h-indices were cropping up ever more-frequently on people’s CVs so I thought I ought to look a little more closely at how this thing behaves. It’s a strange beast, the h-index. For one thing, it inevitably gets bigger as you age (unless your work is really ineffectual) so people who are just starting out in their scientific lives have small, erratic and uninterpretable h-indices. I quickly learned not to use it to evaluate early-career researchers. It also turns out, because of its dependence on citations, to vary a lot across disciplines. Someone working in an area of clinical relevance may find that a rather modest paper (scientifically speaking) is cited by hundreds or thousands of clinicians, while someone working on (for example) social communication in Apis mellifera may struggle to get their citations into the tens or hundreds. Is the latter person less productive or less useful? To some people perhaps yes, but tell that to the Nobel Committee who awarded the 1973 prize to Karl von Frisch for his work on the waggle dance in the, you guessed it, honey bee Apis mellifera.

The h-index also turns out to work very much in favour of people who collaborate a lot. So, someone who is a minor author on a bunch of other people’s papers can rapidly accrue a big h-index even though their own output, from their own labs funded by grants they have obtained themselves, is mediocre. This can be a particular problem for women, who may have fewer networking opportunities in their child-rearing years, or for researchers in small far-flung countries. The association of the h-index with collaborativeness also produces rather big inequalities across disciplines. Some disciplines are inherently collaborative – if you are a molecular biologist who has made a mouse with a useful gene mutation, or a computational biologist with analytic skills that many different scientists find useful, then you may find yourself on a great many papers. If the h-index is the only measure of output, then molecular and computational biologists are doing a lot more work than some of the rest of us. Which of course we know isn’t true.

Why does this matter? Well it shouldn’t, because much of the above is obvious, but it does because even though we can see the h-index has limitations, we still can’t help being influenced by it. We’re just hairless apes, barely out of the forest, and size matters to our little brains. If someone has a big h-index then we can tell ourselves until we are blue in the face that it’s just because they collaborate a lot, and have fingers in many pies, but we are still impressed. And conversely for someone whose h-index is a little… well, shrivelled. This matters because hiring and promotions committees are also populated by these little hairless apes, and are affected by such things whether they think they are or not. But the h-index is information of a sort – it’s better than nothing, and better than a pure publications count or a pure citations count – and so it’s not going to go away, at least not tomorrow. We just have to keep reminding ourselves, over and over:  it’s not so much how big it is, but rather what you do with it, that matters.

*I’m not telling you mine, you can look it up yourself!

** 39


Why scientists should tweet

Yesterday I attended a Twitter tutorial organised by Jenny Rohn and Uta Frith for the UCL Women group. It reminded me afresh how much I value Twitter, and also how much persuasion I had needed to join it in the first place. I thought I’d jot down a few thoughts about why I think scientists should tweet.

I first tried Twitter out 3 or 4 years ago and resigned shortly thereafter in disgust. It seemed full of the most banal trivia. I am, I thought to myself loftily, much too busy and intellectually deep for this mindless drivel, clearly designed for the short-of-attention-span with not much to think about. With which arrogance I thus deserved to miss out on a couple of years of useful fun.

I rejoined it last year on Jenny Rohn’s advice, and this time I had her online tutorial to guide me. The trick, I discovered, is to follow the right people. While the celebs of this world really do tweet about what they had for breakfast, it turns out that scientists tweet about, prepare to be surprised, science. Lots of it. In short, snappy chunks that can be grasped in a single saccade.

Furthermore – and this is the insight I hadn’t made the first time around – useful Twitter is not so much the comments as the links. It turns out that Twitter for the busy scientist is a vast, seething ocean of pointers – links to this paper or that paper or this blog or that review or this commentary or that scathing take-down or… it’s really a vast roadmap of where the cool stuff is happening right now. The immediacy is the best thing about it – I find out about new findings far faster on Twitter than I do through the usual routes, and I also find out what people think about what’s happening – and by “people” I mean “experts”.

Twitter, like science, is about information, and information is power. Out there in Twitter world, the young and cyber-savvy are busy exchanging knowledge and ideas at an incredible rate, and any scientist who doesn’t join in is, frankly, missing out. When I got angry at a paper I was reviewing one day, I tweeted about it.

This generated a flurry of sympathetic conversation; I responded with a blogpost that set out my frustration in more detail, and within three days I had been contacted by several journal editors saying they were responding to our concerns and taking them to higher management discussions. One editor, of a new online journal, came to my office to collect my views in person. Such is the power of Twitter to spread information and facilitate change.

So I urge all you recalcitrant scientists who are too busy for this mindless drivel – think again, and do give it a try. And don’t just “lurk” (read but say nothing) – tweet. Be part of the knowledge marketplace. Tell us about your work. Read and comment on those papers so we don’t all have to! Tell us what you think. And every now and then, if you have something *really* great for breakfast, tell us that too…
