Recently, the newly crowned Nobel laureate Randy Schekman caused a controversy by advising scientists to boycott what he called the “luxury journals” Science, Nature and Cell, which he claims create distorting incentives in the field. He said “Just as Wall Street needs to break the hold of the bonus culture, which drives risk-taking that is rational for individuals but damaging to the financial system, so science must break the tyranny of the luxury journals.” His argument is that pressure to publish in this small number of highly selective journals creates fashionable “bubbles” of research and also sometimes causes scientists to cut corners, leading to a higher retraction rate for these journals. He said “I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.”
This comment has drawn a predictable well-he-can-say-that-now reaction from commentators, who have not been slow to observe that Schekman himself has published dozens of papers in those same luxury journals; one outright called him a hypocrite. It might also be argued that, as editor of a rival journal – the Wellcome Trust’s highly selective (some might say elitist, perhaps even “luxury”) open access journal eLife – his use of a high-profile Nobel platform to disparage competitors is somewhat, well, questionable. The editor of Nature has issued a defence of the journal’s editorial policy, saying it is driven by scientific significance rather than impact or media coverage, while Science and Cell have so far maintained a dignified silence.
Undeterred by the controversy, Schekman subsequently issued an email invitation to scientists: “How do you think scientific journals should help advance science and careers?” Leaving aside questions of moral high ground, he has a good point. How journals influence the process of science is a valid question, and now that it is out in the open and receiving high-profile attention, we should give it some thought.
So, folks, what do we think, as a collective? Are the Big Three really damaging science? Will boycotting them solve any problems?
Myself, I think not. Journals are business enterprises: they do what they do, and while they perhaps create incentives for us, it is even more the case that we create incentives for them. We have decided, as a profession, to bow down to the Impact Factor god and treat it as a proxy for scientific quality. Naturally, then, journals that aim to market the best research will try to accumulate impact factor, and the most successful will earn the highest IF. Of course they will, and you would too if you were running a journal business. eLife does too, despite its lofty claims to be above all that. So does the UK’s national research assessment exercise, the REF, despite its claims to the contrary. We all bow down to the IF god, because right now there is nothing else.
This is a problem, because overweighting a single parameter is bad for complexity in a system. If you were trying to create a forest and selected only the fastest-growing trees, you would pretty soon end up with nothing but pine. Science right now resembles a pine forest, academically speaking – it is overwhelmingly white, male, western and reductionist. Many of the planet’s best brains are locked out of science because we have failed to create an ecosystem in which they can flourish. If we want science to have the intellectual biodiversity of a rainforest, we need to judge it by more parameters than that single one.
OK, so we need more parameters. Like what? Well, here’s one, and it offers something I think journals – to go back to Schekman’s email question – could usefully do: publish, alongside all the other details of the research, possibly the most important detail of all – how much did that research cost the funder?
Currently, we assess only the output from a lab, without taking account of the input. We measure a scientist by how many S/N/C papers they publish, not by how many they published per taxpayer dollar (or pound, or whatever). I don’t know why we don’t consider this – possibly from the misguided belief that science should be above all that – but science is a closed-system, zero-sum game, and there is only so much money available to fund it. As things stand, if a young scientist is successful early on in gaining a large amount of money – say they win a prestigious fellowship – they produce a large output (of course, because they had a lot of money) and are then confirmed as being a good scientist. People then give them more money, and on it goes, and their output accelerates. If someone has less money – say they got off to a slow start through experimental misfortune, or are from a developing country, or spent time having babies and accrued fewer grants meanwhile – then their outputs are fewer and lower-impact (it takes time and money to produce a Big Three paper, after all) and funding, and therefore output, slowly tails off.
If we rated scientists not just by raw output but by efficiency, we would get a very different picture. Large, well-funded labs would no longer be the only ones to look good. A small lab producing relatively few papers while spending hardly any money would look as good as a large, well-funded lab that – while it produced more papers – spent far more money in the process. Rewarding efficiency rather than raw output would remove the incentive to grow a group larger than the scientist felt comfortable managing. It would encourage thrift and careful use of taxpayer-funded resources. It would put low-budget, low-impact science such as psychophysics on the same esteem footing as high-budget, high-impact science like fMRI. The science ecosystem would start to diversify – we would become more than just pine trees. We would, I believe, ultimately get more science from our limited planetary resources.
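To make the comparison concrete, here is a toy sketch of what such an efficiency figure might look like. The labs, the numbers, and the naive papers-per-million metric are all invented for illustration; any real scheme would need field-specific weighting and a better output measure than a raw paper count.

```python
# Toy illustration of an "efficiency" metric: output per unit of funding.
# All lab names, figures and the metric itself are hypothetical examples.

def efficiency(papers: int, funding: float) -> float:
    """Papers published per million (dollars, pounds, whatever) of funding."""
    return papers / (funding / 1_000_000)

labs = {
    "large well-funded lab": {"papers": 30, "funding": 6_000_000},
    "small frugal lab":      {"papers": 6,  "funding": 900_000},
}

for name, d in labs.items():
    print(f"{name}: {d['papers']} papers, "
          f"{efficiency(d['papers'], d['funding']):.1f} papers per million")
```

On this crude measure the large lab scores 5.0 papers per million while the small lab scores 6.7 – the frugal lab comes out ahead, even though its raw output is a fifth of its rival’s.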
So, Dr Schekman, that’s my answer to what journals can do – publish not just the citations but also the efficiency of their articles. Let’s not boycott anything – let’s just change our assessment of what we think is the important measure here.