How to make high-res figures for a paper

So you’ve written your paper and made beautiful PowerPoint graphs of your data, and some explanatory schematic diagrams, and it’s all good to go. You log onto the submission website, upload your manuscript and then start on the figures.

At this point you encounter the requirements for the figures, which, admittedly, you should have checked earlier, but you didn't. So now you discover that the figure has to be 600 dpi (dots per inch) and exactly 8 inches wide, and in bitmap form: JPEG, TIFF, etc. (a bitmap is an image encoded as coloured pixels, as opposed to vector graphics, which is a set of rules for drawing lines and shapes).

OK, well, no problem, your figure is in PowerPoint, so you copy the figure and paste it into your bitmap editor (such as Photoshop, or – my favourite – paint.NET) and set it to 600 dpi with the image-sizing tool. But now it is only two inches wide! So you change the image size to 8 inches, but now it looks awful – blurred and fuzzy.

For a given image, there is an unavoidable trade-off between resolution and size – if you increase the resolution you decrease the printed size, and vice versa. This is obvious when you think about it: the number of pixels is fixed, so to make the image print bigger at the same dpi the program has to insert extra pixels, and it has to "guess" (interpolate) what colour these should be. To get both high resolution and a large size you need an image with more pixels to start with. OK, so you go back to PowerPoint and enlarge everything. But now the graphs look awful – the lines and text didn't scale proportionally and now they look tiny and spidery. So you spend half an hour thickening all the lines and enlarging all the text, having to re-position quite a few things, and then try again. Now it works.
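For the arithmetically inclined, here is the trade-off in a few lines of Python (a sketch; the 1200-pixel width is an assumption implied by the two-inch example above):

```python
# An image has a fixed number of pixels; dpi just says how densely they
# are printed. Printed width (inches) = pixels / dpi, so raising the dpi
# shrinks the printed size unless you add pixels.
pixels_wide = 1200          # hypothetical width of a figure pasted from PowerPoint
print(pixels_wide / 600)    # at 600 dpi it prints only 2.0 inches wide
print(600 * 8)              # an 8-inch-wide figure at 600 dpi needs 4800 pixels
```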

Dear god, you have twelve figures… is there a better and faster way to achieve the same thing? Yes there is: make use of an image format in PowerPoint called Enhanced Metafile (EMF). EMF has the advantages of both a bitmap and vector graphics – it lets you scale everything up without altering the relative sizes of lines and text, but it "knows" that lines and text should remain sharp, so the re-scaling preserves the nice crisp clarity of your graphs. So here's a quick way to make a nice, high-res picture:

1. Select everything that's to be part of the figure (if it is the whole page, Ctrl-A is a quick way to select all) and copy (Ctrl-C).

2. Create a new blank slide and paste the copied figure into it as an Enhanced Metafile. You can do this with Paste Special or (quicker) Ctrl-Alt-V, then select EMF. Now your figure will be pasted in as a single image.

3. Now you need to rescale it. You generally need to do this quite a bit, so slide the zoom slider all the way down to the left so the slide shrinks relative to the surrounding workspace. Move your image up to the top left-hand corner of the workspace, and then rescale it by dragging the bottom right corner until the image takes up most of the workspace (the exact amount of rescaling doesn’t matter at this stage so long as it is bigger than you will ultimately need). Then select the image and copy it.

4. Open a new file in paint.NET (or whatever), set the resolution you want, and paste your figure onto the new page it makes for you. You might run into a problem if the rescaled image is too big (in megabyte terms) for your clipboard (the thing that holds the image in "working memory" between the copying and the pasting). If this happens and you get a clipboard error, you may need to save the image as a bitmap straight from PowerPoint onto disk, then insert it from the saved file into paint.NET. PowerPoint will only save the part of the image that lies within the slide boundary, so you need to enlarge the slide (maybe in a new file, so as not to mess around with the one your graphs are in) and set a custom slide size as big as you can, making sure your image fits within its boundaries.

5. Now scale the image down to the size you want (if it was already smaller, you need to go back to PowerPoint and enlarge the EMF image a bit further). Then save in the format you need. Voilà! A nice, high-res picture.
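If you would rather script that final resize-and-save step, here is a minimal sketch using the Python imaging library Pillow – the file names are placeholders, and it assumes the bitmap you exported from PowerPoint is already wider than the 4800 pixels required:

```python
from PIL import Image  # pip install Pillow

TARGET_DPI = 600
TARGET_WIDTH_INCHES = 8                      # the journal's requirement

img = Image.open("figure_export.png")        # the big bitmap saved from PowerPoint
target_w = TARGET_DPI * TARGET_WIDTH_INCHES  # 4800 pixels
# Shrinking keeps things sharp; enlarging would blur, which is why the
# export needs to be oversized to begin with.
target_h = round(img.height * target_w / img.width)
img = img.resize((target_w, target_h), Image.LANCZOS)
img.save("figure_final.tiff", dpi=(TARGET_DPI, TARGET_DPI))
```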


Redefining “professor” won’t fix the leaky pipeline

The leaky pipeline is a well-described phenomenon in which women selectively fail to progress in their careers. In scientific academia, the outcome is that while women make up more than 50% of undergraduates across science as a whole, they make up fewer than 20% of professors.

The reasons are many and varied, but three days ago, a group of Cambridge academics proposed a possible solution. The problem, they said, is this:

“Conventional success in academia, for example a promotion from reader to professor, can often seem as if it is framed by quite rigid outcomes – a paper published in a leading journal, or the size and frequency of research grants – at the expense of other skill sets and attributes. Those engaged in teaching, administration and public engagement, to name just three vital activities, can be pushed to the margins when specific, quantifiable outcomes take all….Women value a broader spectrum of work-based competencies that do not flourish easily under the current system.”

And the solution, they ventured, might be this:

“…we think there are opportunities to build into assessment processes – for example, academic promotions – additional factors that reward contribution from a much wider range of personality and achievement types….A broader definition of success within the sector will bring benefits not only to women – and indeed men – working in universities, but also to society as a whole.”

It’s hard to argue with the notion that success can take many different forms. However, a jaundiced reading of the dons’ proposal looks like this: “To become a professor you need to do a lot of research. Women don’t do as much research because they take on other, service roles within the academic community. Therefore we should redefine “professor” to include not just research, but also service roles”.

Now, I am all in favour of fixing the leaky pipeline, but this proposal in its simplest reading seems problematic to me. It seems – to stretch the pipeline analogy to breaking point – a bit like re-defining “pipe” to include the trench in which it sits. The obvious outcome of the proposal, were it to be implemented, is that the title “professor” comes to lose its meaning of “someone who has attained heights of academic excellence” and will start to mean “someone who has achieved a lot in their university job”. How, then, will we signify the heights-of-academic-excellence person? Well, two scenarios present themselves. One is that the community finds a new word. The other is that the old word – “professor” – acquires contextual modifiers, the most obvious being “male professor” (someone who has attained the heights of academic excellence) and “female professor” (someone who has done a lot of community service and is generally a good egg). Those female professors who have excelled in research will go unrecognised. That can’t be good for female science careers.

If women are failing to become professors because they are not meeting research-based criteria (and I would dispute that there is much evidence for this – certainly at UCL we see no differences in male vs female research productivity except at the very very top – the super-profs) then we need to look at why this might be. If it’s the case that (some) women like to mix non-research activities into what they do, then we should definitely find ways to reward that. We could invent a new title that encompasses academic-related but non-research activities, and – more importantly – we should attach a hefty pay increment to the position. Money is money and it doesn’t have gendered connotations, and won’t be devalued if given to women. But the coveted title of professor should continue to mean what it has always meant – research excellence.


Is boycotting “luxury journals” really the answer?

Recently, the newly crowned Nobel laureate Randy Schekman caused a controversy by advising scientists to boycott what he called the “luxury journals” Science, Nature and Cell, which he claims create distorting incentives in the field. He said “Just as Wall Street needs to break the hold of the bonus culture, which drives risk-taking that is rational for individuals but damaging to the financial system, so science must break the tyranny of the luxury journals.” His argument is that pressure to publish in this small number of highly selective journals creates fashionable “bubbles” of research and also sometimes causes scientists to cut corners, leading to a higher retraction rate for these journals. He said “I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.”

This comment has had a predictable well-he-can-say-that-now reaction from commentators, who have not been slow to observe that Schekman himself has published dozens of papers in those same luxury journals; one of them outright called him a hypocrite. It might also be argued that, as editor of a rival journal – the Wellcome Trust's highly selective (some might say elitist, perhaps even "luxury") open-access journal eLife – using a high-profile Nobel platform to disparage competitors is somewhat, well, questionable. The editor of Nature issued a defence of their editorial policy, saying they are driven by scientific significance and not impact or media coverage, while Science and Cell have so far maintained a dignified silence.

Undeterred by the controversy, Schekman subsequently issued an email invitation to scientists, asking: "How do you think scientific journals should help advance science and careers?" Leaving aside questions of moral high ground in the context of this discussion, he has a good point. The question of how journals influence the process of science is a valid one, and since it is now in the open and receiving high-profile attention, we should give it some thought.

So, folks, what do we think, as a collective? Are the Big Three really damaging science? Will boycotting them solve any problems?

Myself, I think not. Journals are business enterprises; they do what they do, and while they perhaps create incentives for us, it's even more the case that we create incentives for them. We have decided, as a profession, to bow down to the Impact Factor god and treat it as a proxy for scientific quality. Thus, quality journals that aim to market the best research will try to accumulate impact factor, and the most successful journals will get the best IF. Of course they will, and you would too if you were running a journal business. eLife does too, despite its lofty claims to be above all that. So does the UK's national science-rating enterprise, the REF, despite its claims to the contrary. We all bow down to the IF god, because right now there is nothing else.

This is a problem, because overweighting a single parameter is bad for complexity in a system. If you were trying to create a forest and selected only the fastest-growing trees, you would pretty soon end up with only pine. Science right now resembles a pine forest, academically speaking – it is overwhelmingly white, male, western and reductionist. Many of the planet's best brains are locked out of science because we have failed to create an ecosystem in which they can flourish. If we want science to have the intellectual biodiversity of a rainforest we need to judge it by more parameters than that single one.

OK, so we need more parameters. Like what? Well, here's one more, and it offers something I think journals – to go back to Schekman's email question – could usefully do. That is, to publish, alongside all the other details of the research, possibly the most important detail of all: how much did that research cost the funder?

Currently, we assess only the output from a lab, without taking account of the input. We measure a scientist by how many S/N/C papers they publish, not by how many they publish per taxpayer's dollar (or pound, or whatever). I don't know why we don't consider this – possibly from the misguided belief that science should be above all that – but science is a closed-system, zero-sum game and there is only so much money available to fund it. The way things are at present, if a young scientist is successful early on in gaining a large amount of money – let's say they win a prestigious fellowship – then they produce a large output (of course, because they had a lot of money) and are then confirmed as being a good scientist. Then people give them more money, and on it goes, and their output accelerates. If someone has less money – let's say they got off to a slow start due to experimental misfortune, or they are from a developing country, or they spent time having babies and accrued fewer grants meanwhile – then their outputs are fewer and lower-impact (it takes time and money to produce a Big Three paper, after all) and funding, and therefore output, slowly tails off.

If we rated scientists not just by raw output but by efficiency, then we would get a very different picture. Then it would not only be large, well-funded labs that looked good. A small lab that was producing relatively few papers but spending hardly any money in doing so would look as good as a large, well-funded lab that – while it produced more papers – spent a lot more money in the process. If we rewarded efficiency rather than raw output we would remove the incentive to grow a group larger than the scientist felt comfortable managing. It would encourage thrift and careful use of taxpayer-funded resources. It would put low-budget, low-impact science such as psychophysics on the same esteem footing as high-budget, high-impact science like fMRI. The science ecosystem would start to diversify – we would become more than just pine trees. We would, I believe, ultimately get more science from our limited planetary resources.
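To make the arithmetic concrete, here is a toy sketch (all numbers invented) of how an efficiency measure levels the field:

```python
# Invented numbers: raw output versus output per unit of funding.
labs = {
    "big_lab":   (30, 3000),   # (papers, funding in £k)
    "small_lab": (6, 600),
}
for name, (papers, funding_k) in labs.items():
    print(f"{name}: {papers} papers, {papers / (funding_k / 100):.1f} papers per £100k")
# Both labs come out at 1.0 paper per £100k – equally efficient –
# even though raw output makes the big lab look five times better.
```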

So, Dr Schekman, that’s my answer to what journals can do – publish not just the citations but also the efficiency of their articles. Let’s not boycott anything – let’s just change our assessment of what we think is the important measure here.


The h-index – what does it really measure?

OK, I'll admit it: I look at people's h-indices. It's a secret, shameful habit. I do it even though I know I shouldn't. I do it even when I'm trying not to. I sometimes do it to myself, and then get depressed. You probably do it to yourself too. But here is why you shouldn't (or not too much).

The h-index, for those who really haven't heard of it, is the latest in the panoply of torture instruments that are wheeled out from time to time to make the lives of scientists ever more miserable. It was invented by the physicist Jorge E. Hirsch to provide a better method of measuring scientific productivity. Although scientists produce papers, and papers that are cited by other scientists, it is hard to judge someone based on bald statistics like how many papers they publish or how many citations they have accrued, because these quantities are so fickle – one person might publish vast amounts of rubbish, and another might have one really notable paper that garnered quite a few citations, but that doesn't make either an all-round good scientist. Maybe not one you'd want to hire, or promote. So Hirsch came up with a measure that takes into account both the number and the citedness (impact) of someone's papers, and provides an index that more fairly reflects their sustained productivity over a period of time. It is very simple to calculate – you line up a person's publications in order of how many times each has been cited and then work your way along the list, starting with the most-cited paper and counting as you go. Eventually you reach the last paper whose citation count is at least as large as its position in the list – that position, n, is your h-index. A quicker way of thinking about it: with each additional paper in your list the count goes up and the citations go down, and the point where those lines cross is the h-index. Just about every practising scientist can tell you their h-index* more quickly than their shoe size**.
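For the curious, the calculation is a one-liner-and-a-bit in any language; here is a minimal Python sketch with invented citation counts:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # Rank papers from most- to least-cited; the h-index is the largest
    # rank n whose paper still has at least n citations.
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented example: papers cited 10, 8, 5, 4 and 3 times give h = 4,
# because four papers have at least 4 citations each, but not five with >= 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```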

The h-index is highly quantitative and can be generated easily, so of course it is a great way of quickly getting a handle on someone's productivity. Or so it seems on the surface. A closer look reveals a slightly different story. I was given pause for thought when I discovered that someone in my department had an h-index of 49, and had to wipe tears from my eyes when I discovered someone else's was 60-something. Think about it: that person had so many papers that even their 60th paper had been cited 60 times! That's quite an output.

Do people like that really work that much harder, or more productively, than the rest of us? I was a little bit sceptical, but I noticed h-indices were cropping up ever more frequently on people's CVs, so I thought I ought to look a little more closely at how this thing behaves. It's a strange beast, the h-index. For one thing, it inevitably gets bigger as you age (unless your work is really ineffectual), so people who are just starting out in their scientific lives have small, erratic and uninterpretable h-indices. I quickly learned not to use it to evaluate early-career researchers. It also turns out, because of its dependence on citations, to vary a lot across disciplines. Someone working in an area of clinical relevance may find that a rather modest paper (scientifically speaking) is cited by hundreds or thousands of clinicians, while someone working on (for example) social communication in Apis mellifera may struggle to get their citations into the tens or hundreds. Is the latter person less productive or less useful? To some people perhaps yes, but tell that to the Nobel Committee, who awarded the 1973 prize to Karl von Frisch for his work on the waggle dance in the, you guessed it, honey bee Apis mellifera.

The h-index also turns out to work very much in favour of people who collaborate a lot. So, someone who is a minor author on a bunch of other people’s papers can rapidly accrue a big h-index even though their own output, from their own labs funded by grants they have obtained themselves, is mediocre. This can be a particular problem for women, who may have fewer networking opportunities in their child-rearing years, or for researchers in small far-flung countries. The association of the h-index with collaborativeness also produces rather big inequalities across disciplines. Some disciplines are inherently collaborative – if you are a molecular biologist who has made a mouse with a useful gene mutation, or a computational biologist with analytic skills that many different scientists find useful, then you may find yourself on a great many papers. If the h-index is the only measure of output, then molecular and computational biologists are doing a lot more work than some of the rest of us. Which of course we know isn’t true.

Why does this matter? Well, it shouldn't, because much of the above is obvious, but it does, because even though we can see the h-index has limitations, we still can't help being influenced by it. We're just hairless apes, barely out of the forest, and size matters to our little brains. If someone has a big h-index then we can tell ourselves until we are blue in the face that it's just because they collaborate a lot, and have fingers in many pies, but we are still impressed. And conversely for someone whose h-index is a little… well, shrivelled. This matters because hiring and promotions committees are also populated by these little hairless apes, and are affected by such things whether they think they are or not. But the h-index is information of a sort – it's better than nothing, and better than a pure publications count or a pure citations count – and so it's not going to go away, at least not tomorrow. We just have to keep reminding ourselves, over and over: it's not so much how big it is, but rather what you do with it, that matters.

*I’m not telling you mine, you can look it up yourself!

** 39


Why scientists should tweet

Yesterday I attended a Twitter tutorial organised by Jenny Rohn and Uta Frith for the UCL Women group. It reminded me afresh how much I value Twitter, and also how much persuasion I had needed to join it in the first place. I thought I’d jot down a few thoughts about why I think scientists should tweet.

I first tried Twitter out three or four years ago and resigned shortly thereafter in disgust. It seemed full of the most banal trivia. I am, I thought to myself loftily, much too busy and intellectually deep for this mindless drivel, clearly designed for the short-of-attention-span with not much to think about. With which arrogance I thus deserved to miss out on a couple of years of useful fun.

I rejoined it last year on Jenny Rohn’s advice, and this time I had her online tutorial to guide me. The trick, I discovered, is to follow the right people. While the celebs of this world really do tweet about what they had for breakfast, it turns out that scientists tweet about, prepare to be surprised, science. Lots of it. In short, snappy chunks that can be grasped in a single saccade.

Furthermore – and this is the insight I hadn't had the first time around – useful Twitter is not so much the comments as the links. It turns out that Twitter for the busy scientist is a vast, seething ocean of pointers – links to this paper or that paper or this blog or that review or this commentary or that scathing take-down or… it's really a vast roadmap of where the cool stuff is happening right now. The immediacy is the best thing about it – I find out about new findings far faster on Twitter than I do through the usual routes, and I also find out what people think about what's happening – and by "people" I mean "experts".

Twitter, like science, is about information, and information is power. Out there in Twitter world, the young and cyber-savvy are busy exchanging knowledge and ideas at an incredible rate and any scientist who doesn’t join in is, frankly, missing out. When I got angry at a paper I was reviewing one day, I tweeted this:

[Embedded tweet]

This generated a flurry of sympathetic conversation; I responded with a blog post that set out my frustration in more detail, and within three days I had been contacted by several journal editors saying they were responding to our concerns and taking them to higher management discussions. One editor, of a new online journal, came to my office to collect my views in person. Such is the power of Twitter to spread information and facilitate change.

So I urge all you recalcitrant scientists who are too busy for this mindless drivel – think again, and do give it a try. And don’t just “lurk” (read but say nothing) – tweet. Be part of the knowledge marketplace. Tell us about your work. Read and comment on those papers so we don’t all have to! Tell us what you think. And every now and then, if you have something *really* great for breakfast, tell us that too…


Letter to Editor

Dear Journal Editor,

Thank you for the invitation to review a paper for your publication. I would be happy to do this. I have just one condition.

I love reviewing, I really do. I love the feeling of being among the first to see new results, and I relish the opportunity to help improve and refine a paper so that it makes the best contribution possible. I also think I'm quite good at it. I have amassed 30-odd years of scientific experience and I have an instinct for results that are real, findings that are really (as opposed to merely statistically) significant, hypotheses that have been validly tested, and conclusions that follow logically and plausibly from the findings. I have a nose for good writing and a distaste for waffle. I think I add value to a paper, and thus to your journal, by suggesting clarifications, or new analyses, or even (in some cases) new experiments and computational models. I take my time reviewing and I do it carefully.

Here’s the thing, though. I am really, really busy. Eye-bleedingly busy – running a lab, a department, and (I almost forgot) a family. I read the papers that I’m reviewing in snatched moments between other tasks – on trains, planes and automobiles, in cafes and pubs, in bed when the kids are asleep, in meetings. I almost always read them electronically and on the move, not sitting in my office with my feet propped on my desk, smoking a pipe and idly browsing a pile of paper pages. This is not 1950.

So here’s my condition. I will review your paper for you if you do one thing for me, and that is NUMBER THE FIGURES AND PUT THE LEGENDS BESIDE THEM, SO THAT I DON’T HAVE TO KEEP SCROLLING BACK AND FORTH THROUGH THE PDF!!!

We have had electronic manuscripts for 20 years and more, and yet your manuscripts are still formatted as they were in 1950. You have done many things to make publishing easier but nothing, as far as I can see, to make reviewing easier. It would take you only five minutes to organise the PDF so it reads comfortably on a screen, and it would save me much time and hassle, for which I will reward you, I promise, with a thoughtful and considered review drawn from the deepest well of my insights and years of experience.

Thank you.

With best wishes,

Kate Jeffery

PS It would be really really great if you could also actually put the figures in the text where they belong – but baby steps…


Assessing scientists – a crowdsourcing approach?

As the REF draws closer, there are ongoing debates about how we assess scientists and their merits. Such assessments are critical for hiring and promotion, and we are all in agreement that the current system, by which scientists are assessed by the impact factor of the journals they publish in, is not suitable, for reasons outlined by Stephen Curry and picked up again recently by Athene Donald.

Since I’ve been involved quite a bit recently in promotions and hiring, I’ve thought about this a lot. The problem is that we are trying to measure something intangible, and impact factor is the only number we have available – so naturally we use it, even though we all know it’s rubbish.

OK so here’s an alternative idea. It’s totally off the top of my head and thus doubtless deeply flawed but any ideas are better than none, right? And the impact factor discussion, at least in the last few months, has been remarkably devoid of new ideas.

The reason we don't like impact factors is that we feel they are missing the essence of what makes a good scientist. We all know people who we think are excellent scientists but who, for whatever reason, don't publish in high-impact journals, or don't publish frequently, or aren't very highly cited, or whatever. But we trust our judgement because we believe (rightly or wrongly) that our complex minds can assimilate far more information, including factors like how careful the person is in their experiments, how novel their ideas are, how elegant their experimental designs are, and how technically challenging their methods are – none of which is reflected in the number or impact factor of their publications.

My idea is that we crowdsource this wisdom and engage in a mass rating exercise where scientists simply judge each other based on their own subjective assessment of merit. Let's say every two years, or every five years, every scientist in the land is sent the names of (say) 30 scientists in a related area, and is simply asked to rank them in two lists: (1) how good a scientist you think this person is, and (2) how important a contribution you think they have made to the field to date. Each scientist is thus ranked 30 times by their peers, and they get a score equal to the sum or average of those 30 ranks. A scientist who made the top of all 30 lists would be truly outstanding, and one who was at the bottom of all 30 probably unemployable. Institutions could then decide where to draw the boundaries for hiring, promotion and so on.
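For concreteness, here is a toy sketch (in Python, with invented names and rankings) of the aggregation step; the real exercise would of course involve around 30 names per reviewer:

```python
from collections import defaultdict
from statistics import mean

# Each reviewer returns an ordered list (best first) of the scientists
# they were sent; three toy reviewers and three scientists shown here.
rankings = [
    ["ana", "ben", "chu"],   # reviewer 1: ana ranked 1st, ben 2nd, chu 3rd
    ["ben", "ana", "chu"],   # reviewer 2
    ["ana", "chu", "ben"],   # reviewer 3
]

received = defaultdict(list)
for ranking in rankings:
    for rank, scientist in enumerate(ranking, start=1):
        received[scientist].append(rank)

# A scientist's score is the mean of the ranks they received;
# lower means more highly regarded by their peers.
for scientist, ranks in sorted(received.items(), key=lambda kv: mean(kv[1])):
    print(f"{scientist}: {mean(ranks):.2f}")
# ana: 1.33, ben: 2.00, chu: 2.67
```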

This would all be done in confidence, of course. And a scientist’s own rank wouldn’t be released until they had submitted their ranking of the others. It would be relatively low cost in terms of time, and because specialists would be ranking people in a related area, they would be better placed to make judgements than, say, a hiring committee. In a way it is like turning every scientist into a mini REF panel.

Comments on this idea, or indeed better ideas, welcome.
