It is a very Darwinian concept: publish or perish. But that is the maxim that governs the world of public scientific research these days. In the scientific community, the main way in which scientists gain recognition, whether among colleagues or their bosses, is through the number of their papers that are published in quality scientific journals.
Indeed, this trend has reached the point where a scientist's main aim is often not to write a paper that is faithful to their work but rather one that heads off the likely objections of the reviewers, or 'referees', usually anonymous, to whom the journal sends every submission. This is the process known as 'peer review'. It is a major issue facing science: what can be done to combat this overwhelming pressure to publish, a trend generally seen as one of the most powerful factors in the decline of the quality of science?

A radical attempt to tackle the issue was made on May 13th 2013 with the San Francisco Declaration on Research Assessment (DORA). This proposes that scientists and scientific institutions should no longer be evaluated according to what is known as the 'Journal Impact Factor'. This 'impact factor' is a ratio: the number of times that the papers a given journal published over the previous two years have been cited in the current year, divided by the number of papers the journal published in that period. This may seem like a very arcane point, but the 'impact factor' is, crucially, widely used to decide whether scientists get hired or receive research grants. And behind this technical initiative lies a broader attempt by members of the international scientific community to tackle the problem of the declining quality of papers in scientific journals. This is a particular concern in the life sciences; the main driver behind the DORA initiative is the American Society for Cell Biology (ASCB).
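As a simplified illustration (the official calculation also distinguishes 'citable items' such as research articles and reviews from other content, a nuance omitted here), a journal's impact factor for, say, 2013 can be written as:

\[
\text{IF}_{2013} = \frac{C_{2011} + C_{2012}}{N_{2011} + N_{2012}}
\]

where \(C_{2011}\) and \(C_{2012}\) are the citations received in 2013 by the papers the journal published in 2011 and 2012, and \(N_{2011}\) and \(N_{2012}\) are the numbers of papers it published in those two years.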
The list of those people and organisations that have signed this unprecedented declaration is long; some 6,000 scientists from around the world and publishers of highly prestigious scientific journals (Science, Journal of Cell Biology, The EMBO Journal, Development, to name just a few) have lent their names to it, as have a host of scholarly institutions. All of them back what one author of the appeal describes as an “insurrection” against the use of journals' 'impact factor' in the evaluation of scientists.
Criticism of the impact factor is nothing new, and the reasons why it may not be a good indicator of the quality of a scientist's work have been known for a long time. Here are five of them:
- The 'impact factor' measures the impact of a journal, not a paper.
- It measures an average, which can easily be distorted by one or two very frequently cited papers; so, for example, the landmark paper describing the human genome, published in Nature in 2001 and since cited more than 10,000 times, gave a huge boost to the impact factor of this British journal in the years that followed (a worked example follows this list).
- It is incapable of measuring the lasting influence of the works published, as it is based on the previous two years.
- It does not distinguish between citations that endorse a study and those that criticise its data and conclusions.
- Finally, it can easily be manipulated by publishers.
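To see how sensitive such an average is to a single outlier, take a purely hypothetical journal (the figures here are invented for illustration) that publishes 200 papers over two years, 199 of them cited 5 times each and one cited 10,000 times:

\[
\text{IF} = \frac{199 \times 5 + 10\,000}{200} = \frac{10\,995}{200} \approx 55,
\]

whereas without that single outlier the same journal would score \(995 / 199 = 5\).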
On the last of these points, manipulation by publishers, Hervé Maisonneuve, a doctor who runs the blog Rédaction médicale et scientifique, aimed at health professionals who want to keep up with the latest news from biomedical journals such as The Lancet and the BMJ and to understand how they operate, says that publishing so-called “hot papers” is a strategy aimed at boosting the number of citations. “This means a preference for publishing articles that have a high probability of being cited and/or being picked up by the general press, thus increasing the journal's reputation,” he says.
It should be recalled that the Journal Impact Factor was originally conceived as a tool to help librarians ensure they subscribed only to the most influential journals. But over the last two decades it has become the single most important criterion for evaluating scientists. “Publication in a high-impact journal is absolutely no guarantee of the quality of the research carried out, especially as the selection of papers is not solely the result of peer review but is often the choice of the editor, dictated by current fashions,” says physicist Michèle Leduc, president of the ethics committee of the French national research centre, the Centre national de la recherche scientifique (CNRS). “However, getting published just once or twice in these star journals is always noted in an evaluation and invariably gives a boost to one's career,” she adds.
Her counterpart at the Institut Pasteur, François Rougeon, makes a similar point. “You just have to publish a paper in Nature to be certain of having all you need for the next four or five years,” he says.
The advent of 'slow science'?
In a certain number of emerging countries, and in particular China, the pay of a research scientist can even be directly indexed to the impact factor of the journals in which he or she is published. This amounts to a standing financial incentive to arrange the data so as to ensure publication in the best-known journals. Or, in some cases, to plagiarise articles that have already been published.
A US study published last year in the Proceedings of the National Academy of Sciences examined the reasons why articles published in scientific journals have been retracted. It found that just 21% of the 2,047 papers withdrawn since 1973 were retracted because the author or authors had themselves noticed an error that they did not want other researchers to rely on. In contrast, 43% were withdrawn for admitted or presumed fraud, 14% because of duplication (the results of an experiment can generally only be published once), 9% for plagiarism and the rest because of conflicts between authors. The study also shows that the scientists who retracted papers because of duplication or plagiarism came mainly from China and India.
So how is it that the world of science is so in thrall to the 'impact factor' if everyone knows that it is flawed and that it has helped encourage what some term 'junk science'? “It's part of a more general trend to rely, in evaluations, on indicators with figures, because they are deemed to be objective,” says the Institut Pasteur's François Rougeon.
The marketing policy of the large scientific publishers has played a role in this. This extremely profitable industry, which produces journals under the academic supervision of scientists on editorial committees, has not held back from using the impact factor as a selling point in promoting its publications. It is no coincidence that the companies that publish the highest-impact journals, such as the British publisher Macmillan, which publishes Nature, or the Dutch publisher Elsevier, which produces Cell, declined to sign the DORA declaration, unlike the scholarly institutions that publish their own journals. For these latter organisations, whose own impact factors are far from being the highest, turning away from the metric is also one way to distinguish themselves from the larger commercial operations.
But the way that scientific research has evolved, notably the growing competition within it, has also increased the importance of the notion of 'publish or perish'. The bodies that award grants, as well as the organisations in charge of recruitment, often hand the applications they receive to other scientists for evaluation, scientists who are themselves overwhelmed by other work. These evaluators often do not have the time to properly study the scientific qualities of a candidate or a research project, so they frequently give way to the temptation of relying on the impact factor of the journals in which the applicant has been published. In private, many scientists admit that this happens.
However, will ending the dictatorship of the impact factor, as DORA proposes, be enough on its own to ensure an improvement in the quality of scientific articles? Almost certainly not, given that this lack of quality has various deep-seated causes, linked to increased competition in the scientific community and to globalisation. But many observers believe it would be a first step towards evaluations that are about quality rather than quantity, that appreciate the lasting influence of a piece of research, and that recognise that being able to replicate a study's results is, for the scientific community, every bit as important as publishing it for the first time, since it is replication that gives a result its credibility.

There have been recent initiatives seeking to encourage the replication of results by allowing researchers to publish their raw, unanalysed data. As these results are available to everyone, colleagues and competitors alike, they can be scrutinised and checked more effectively. This is the approach taken by projects such as Dryad and Runmycode, which allow scientists to deposit on the site all the data and methods of analysis used in a paper, which in turn allows others to recreate and thus verify their calculations or analysis. For their part, several journal publishers have got together to launch the 'Reproducibility Initiative', which provides for the all-too-often neglected publication of replications of experiments that have already been published.

A question remains, though: will scientists in the spheres of biology and medicine make use of these new tools? The example of theoretical physics, mathematics, statistics and astronomy, disciplines which since 1991 have encouraged researchers to post their papers online at the highly respected ArXiv preprint site, might suggest grounds for optimism. But in these fields there are relatively few financial interests at stake. In the biomedical world, on the other hand, everyone jealously guards data that, through patents and private contracts, could one day become an enormous source of income. This leads to a reluctance to share information, with the multiplicity of individual interests working against the collective interest.
However, in the same way that the prevalence of 'junk food' led to the creation of the 'slow food' movement, some would argue that in the face of 'junk science' what is needed now is a 'slow science' movement, as suggested in 2010 by French anthropologist Joël Candau from the University of Nice-Sophia Antipolis in the south of France. “Fast science, just like fast food, puts quantity ahead of quality,” Candau declared. “To research, reflect, read, write and teach, it all requires time.”
-----------------------------------------------------
English version by Michael Streeter