The Competition for Publications in Academic Journals: The Peer-Review Process
In almost every academic discipline, publications are the most important, and often the only, measurable output. True, in some natural sciences and in engineering, inventions or patents also play a certain role, yet these concern mostly applied science. Basic research, however, always manifests itself in publications. What could be more obvious than measuring the output or productivity of a scientist or institute on the basis of publications? For is it not the case that many publications result from a great deal of research, which consequently increases our relevant knowledge? Should not every scientist therefore be driven to publish as much as possible in order to achieve maximum ''scientific productivity''? Anyone with even a little knowledge of universities and academic life can immediately answer these questions with an overwhelming ''no''. More publications certainly increase the number of printed pages, but this number says no more about the significance of a scientist's or institute's research activity than the number of notes played says about the quality of a piece of music.
Of course, measurements of scientific output are not so primitive as to count every written page of scientific content as scientific activity. Relevant publications appear in professional journals, where submitted work is subjected to a ''rigorous'' and ''objective'' selection method: the so-called ''peer-review process''. This is meant to ensure that only ''qualitatively superior'' work is published, which is then regarded as a ''real scientific publication''. Strictly speaking, then, the aim of the artificially staged competition among scientists is to publish as many articles as possible in peer-reviewed scientific journals.
However, among scientific journals there also exist strict hierarchies, which are supposed to represent the average ''quality'' of the accepted papers. In almost every scientific discipline there are a few awe-inspiring top journals (A-journals), followed by various groups of less highly respected journals (B- and C-journals), where it is easier to place an article but where a publication does not carry the same significance as an A-journal article. Publishing one's work in an A-journal is therefore the most important, and often also the only, aim of modern scientists, as it allows them to ascend to the ''Champions League'' of their discipline. Belonging to this illustrious club makes it easier to publish further articles in A-journals, to secure more research funds, to conduct even more expensive experiments and, therefore, to become even more excellent. The ''taste for science'' described by Merton (1973), which is based on intrinsic motivation and is supposed to guide scientists, has been replaced by the extrinsically motivated ''taste for publications''.
But what exactly is meant by the peer-review process? When a scientist wants to publish a paper in a recognized scientific journal, the paper has to be submitted to the journal's editors, who have established themselves as champions within their discipline. These editors usually do not have the time to deal with the day-to-day business of ''their journal'', so a less accomplished Managing Editor is responsible for administrative tasks: he or she receives the manuscripts from publication-hungry scientists and sets the peer-review process in motion. The Managing Editor passes each submitted manuscript to one or several professors or other distinguished scientists (the so-called peers), who ideally work in the same field as the author and should therefore be able to assess the quality of the work.
To ensure the ''objectivity'' of the expert judgments, the assessment is usually performed as a double-blind procedure. This means that the reviewers do not know who the authors of the article under review are, and the authors are not told by whom their paper is being assessed. At the end of the peer-review process, the reviewers inform the editor in writing whether they recommend acceptance (very rare), revision, or rejection (most common) of the submitted article. Quite a few top journals pride themselves on high rejection rates, which supposedly reflect the high quality of these journals (Fröhlich 2007). For such journals the rejection rates amount to approximately 95 %, which encourages reviewers to reject manuscripts in almost all cases in order to defend this important ''quality measure''. Only manuscripts that find favor with their reviewers get published, for although the final decision concerning publication rests with the editors, they generally follow the reviewers' recommendations.
The peer-review process is thus a kind of insider procedure (also known as clan control, Ouchi 1980), which is not transparent to scientists outside the established circle of champions. The already-established scientists of a discipline evaluate each other, and especially newcomers, and decide what is worthy of publication. Although the claim is made that scientific publications ultimately serve the general public, and thereby also people who are not active in research, the general public, which is actually supposed to stand behind the demand for scientific achievement, has no influence on the publication process. The peers decide on behalf of the rest of mankind, since the public can hardly assess the scientific quality of a work.[1] Outside the academic system, most people know neither what modern research is about nor how to interpret its results and their potential importance to mankind. Although scientists often do not know the latter either, they are, in contrast to the layman, trained to conceal this lack of knowledge behind important-sounding scientific jargon and formal models. In this way, even banalities and absurdities can be presented as A-journal-worthy scientific excellence, a process of which laymen and politicians alike are unaware. They are kept in the blissful belief that more competition in scientific publication leads to ever-increasing top performance and excellence.
Considering the development of the number of scientific publications, it seems that scientists are actually accomplishing more and more. Worldwide, the number of scientific articles has increased enormously, according to a count conducted by the Centre for Science and Technology Studies at the University of Leiden (SBF 2007). The number of scientific publications in professional journals worldwide rose from approximately 686,000 in 1990 to about 1,260,000 in 2006, which corresponds to an increase of 84 %. The annual growth rate calculated on this basis was more than 5 %. The number of scientific publications thus grows faster than the global economy and significantly faster than the production of goods and services in industrial countries, from where the largest number of publications originates (OECD 2008).
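As a sanity check, the figures above can be recomputed from the two totals alone. A minimal sketch; note that the source does not say whether its annual rate is a simple average or a compound rate, so both are shown:

```python
# Worldwide journal-article counts reported by SBF (2007)
articles_1990 = 686_000
articles_2006 = 1_260_000
years = 2006 - 1990  # 16 years

# Total increase over the whole period
total_increase = articles_2006 / articles_1990 - 1  # ~0.837, i.e. roughly 84 %

# Simple (linear) average annual growth: total increase spread evenly over the years
linear_rate = total_increase / years  # ~0.052, i.e. just over 5 % per year

# Compound annual growth rate, for comparison
compound_rate = (articles_2006 / articles_1990) ** (1 / years) - 1  # ~0.039

print(f"total increase: {total_increase:.1%}")
print(f"linear average per year: {linear_rate:.1%}")
print(f"compound rate per year: {compound_rate:.1%}")
```

The ''more than 5 %'' quoted above corresponds to the simple average; the compound rate over the same period comes out closer to 4 %.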
By far the largest share of the world production of scientific articles comes from the U.S. (25 %), followed by Britain with 6.9 %. Germany produces 6.3 %, Switzerland 1.5 %, and Austria 0.7 % (SBF 2007). However, when published articles are calculated per capita, Switzerland becomes the world's leading country, with 2.5 published scientific articles per 1,000 inhabitants, compared with 1.2 articles in the U.S. and only one article in Germany (SBF 2007).[2] The same picture emerges if one relates the number of publications to the number of researchers. In this case, there are 725 publications per 1,000 researchers in Switzerland, compared with 295 in Germany and 240 in the United States. Thus, in no other country in the world are more research publications squeezed out of the average researcher than in Switzerland.
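The per-capita comparison can be reconstructed from the country shares in the same way. A rough sketch: the population figures (in millions, ca. 2006) are approximations assumed here for illustration and are not taken from SBF (2007), so the results deviate slightly from the reported ones (the source may also use different reference years or fractional article counts for co-authored papers):

```python
# Country shares of world article production (SBF 2007) applied to the
# 2006 world total, divided by assumed ca.-2006 populations.
total_articles = 1_260_000

shares = {"Switzerland": 0.015, "Germany": 0.063, "U.S.": 0.25}
population_millions = {"Switzerland": 7.5, "Germany": 82.4, "U.S.": 299.0}  # assumption

# Articles per 1,000 inhabitants
per_1000 = {
    country: shares[country] * total_articles / (population_millions[country] * 1_000)
    for country in shares
}

for country, rate in per_1000.items():
    print(f"{country}: {rate:.1f} articles per 1,000 inhabitants")
```

With these assumptions, Switzerland indeed comes out at roughly 2.5 articles per 1,000 inhabitants and Germany at roughly one.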
Once we begin to examine the background of this increasing flood of publications, it quickly loses its appeal. This is to a large extent inherent in the peer-review process itself. This supposedly objective system for assessing the quality of articles in reality resembles, for many authors, a random process (Osterloh and Frey 2008). A critical investigation reveals a number of facts that fundamentally call into question the peer-review process as a quality-assurance instrument (cf. Atkinson 2001; Osterloh and Frey 2008; Starbuck 2006). Expert judgments generally appear to be highly subjective, since the consensus among several expert judgments is usually low. One reason is that by no means all peers, who are mostly preoccupied with their own publications, actually read, let alone understand, the articles to be evaluated. Time is far too short for this, and usually it is not even worth it because there are much more interesting things to do. Hence, time after time, reviewers pass the articles on to their assistants who, in the manner of their boss, draft the actual review as ghostwriters (Frey et al. 2009). No wonder that under such conditions important scientific contributions are frequently rejected, as becomes clear in hindsight. Top journals have repeatedly rejected articles that later turned out to be scientific breakthroughs and even won the Nobel Prize. Conversely, plagiarism, fraud and deception are hardly ever discovered in the peer-review process (Fröhlich 2007). In addition, and unsurprisingly, reviewers assess articles that are in accordance with their own work more favorably and, vice versa, reject articles that contradict it (Lawrence 2003).
Due to the just-described peer-review process, the competition for publication in scientific journals creates a number of perverse incentives. To please the reviewers, a potential author will do everything conceivably possible. To describe this behavior, Frey (2003) rightly coined the term ''academic prostitution'', which, in contrast to traditional prostitution, does not spring from natural demand but is induced by artificially staged competition (cf. Giusta et al. 2007). In particular, the following perverse effects can be observed:
Modes of perverse behavior caused by the peer-review process:
• Strategic citing and praising[3]
When submitting an article to a journal, the peer-review process induces authors to think about possible reviewers who have already published articles on the same or similar topics. To flatter these reviewers, the author will preferably quote all of them or praise their work (as a seminal contribution, an ingenious idea, etc.). An additional citation is useful for the potential reviewer because it in turn improves his or her own standing as a scientist. Furthermore, editors often consult the bibliography at the end of an article when looking for possible reviewers, which makes strategic citing even more attractive.
Conversely, an author will avoid criticizing the work of possible reviewers, as this is a sure road to rejection. This attitude thus prevents the criticism and questioning of existing approaches. Instead, the replication of established knowledge is promoted: preexisting approaches are elaborated through further model variations or additional empirical investigations.
• No deviation from established theories
In any scientific discipline there are some eminent authorities who dominate their field and who often, at the same time, are the editors of top journals. This in turn allows them to prevent the appearance of approaches or theories that question their own research. Usually this is not difficult, since most authors try to adapt to the prevailing mainstream theories in their own interest. The majority of authors simply want to publish articles in top journals, and this makes them flexible in terms of content. They present traditional or fashionable approaches that evoke little protest (Osterloh and Frey 2008). In this way, some disciplines (e.g. economics) have degenerated into a kind of theology in which heresy is no longer tolerated in established journals. Heresy takes place only in a few marginal journals specializing in divergent theories, but these publications rarely contribute to the reputation of a scientist. As Gerhard Fröhlich aptly writes: ''In science as in the Catholic Church similar conditions prevail: censorship, opportunism and adaptation to the mainstream of research. As a result, a highly stylized technocratic rating- and hierarchy-system develops, which hinders real scientific progress.''
In empirical studies, the adherence to established theories can also be seen in the results of statistical tests. Falsifying an existing theory is associated with low chances of publication, and thus there is an incentive to publish only successful tests and to conceal negative results (Osterloh and Frey 2008).
• Form is more important than content
Since presenting original content usually lowers the chances of publication, novelty has shifted to the form in which content is presented. Simple ideas are blown up into highly complex formal models which demonstrate the technical and mathematical expertise of the authors and signal importance to the reader. In many cases, the reviewers are not able to evaluate these models because they have neither the time nor the inclination to engage with them for several days. Since they cannot admit this, in case of doubt formal brilliance is assessed positively, because it usually supports prevailing theories. It helps to immunize the prevailing theories against criticism from outside, and all colleagues who are not working within the same research field simply have to believe what was ''proven to be right'' in the existing model or experiment.
With this formalization, the sciences increasingly move away from reality, as false precision becomes more important than actual relevance. The biologist Körner (2007, p. 171) writes: ''The more precise the statement [of a model], the less it usually reflects the scale of the real conditions which are of interest to or available for the general public and which leads to scientific progress.''
The predominance of form over content (let us call this the 'crowding-out' of content by form) also attracts a different kind of person to science. The old type of the often highly unconventional, intrinsically motivated scientist is increasingly being replaced by formally gifted, streamlined men and women,[4] who in spite of their formal brilliance have hardly anything important to say.
• Undermining of anonymity by expert networks
In theory, the peer-review process should work in such a way that publication opportunities are the same for all authors. The anonymity of both the authors and the reviewers is guaranteed thanks to the double-blind principle. For many established scientists at top universities, however, ''real'' competition under these conditions would be a nuisance. After all, why did one work hard for a lifetime only to be subject to the same conditions as any newcomer? The critical debate on the peer-review process conducted in the journal Nature in 2007, however, clearly showed that in practice anonymity is rare where established scientists are concerned. They know each other and know in advance which papers by colleagues, or by scientists associated with them, will be submitted. Within expert networks, new papers are presented to one another in research seminars, which successfully undermines the anonymity of the peer-review process.
This fact can clearly be seen by looking at the origin of the scientists who publish in top journals. For example, a study of the top five journals in economics (Frey et al. 2009, p. 153) shows that of the 275 articles published in 2007, 43 % originated from scientists working at just a few top American universities (Harvard, Yale, Princeton, MIT, Chicago, Berkeley, Stanford). The professors of these universities are practically set as authors, and the rest must then fight through an arduous competition for the few remaining publication slots. To paraphrase what George Orwell noted in his book ''Animal Farm'': all authors are equal, but some are more equal than others.
• Revenge of frustrated experts
Ultimately, the entire publication process is a tedious and humiliating experience for many researchers. Submitted papers are constantly rejected, often for reasons that are not comprehensible. One has to be pleased if the reviewers have the grace to make recommendations for a revision of the article. In that case, in order to finally get the article published, one needs to (or in fact ''must'') change it according to the wishes of the reviewers. This is hardly a pleasant task, as it is not uncommon for a revision to be done ''contre coeur''. It is therefore no wonder that many reviewers are at the same time frustrated authors, who can now pay back to innocent third-party authors the humiliation they once went through themselves (Frey et al. 2009, p. 153). ''They should not have it easier than we did, and they should not think that getting a publication is easy'' is the tenor. For this reason, articles are often rejected out of personal grudges, and the supposedly objective competition for publication becomes a subjective affair. This is particularly the case when it comes to approaches the reviewers hate (in reality it is often the professor behind the publication who is hated), and they will not forgo the chance to make the life of such an author a little more miserable.
The perverse incentives created by the peer-review process ensure that the steadily increasing number of articles published in scientific journals often does not lead to new or original insights; consequently, many new ideas do not show up in established journals. They are rather to be found in books and working papers, where there is no pseudo-quality control to hinder innovative ideas. While the peer-review process prevents the publication of obvious platitudes and nonsense on the one hand, on the other it promotes the publication of formally and verbally dressed-up nonsense. The increasing irrelevance of content is the result of the artificially staged competition for publication in professional journals. The next section deals with the use of publications and citations as indicators for the assessment of individual scientists and scientific institutions, and explains why we have more and more irrelevant publications.
- [1] In the language of economics, this means that the information asymmetry between scientists and lay people is so large that ''monitoring'' by outsiders is no longer possible (Partha and David 1994, p. 505)
- [2] Nevertheless, the Neue Zürcher Zeitung already worried in a 2004 article that the growth of publications in Switzerland was below the average of the OECD countries. This thinking once again reveals a naive tonnage ideology, in which more scientific output is equated with more well-being
- [3] In the meantime, there are now so-called guides along the lines of ''How to publish successfully?'', which provide strategic advice to young scientists in the manner described herein
- [4] Just look at today's photos of highly praised young talents in sciences. In this case, images often say more than 1,000 words