The Disaggregation of the Journal Model

Historically, a journal has performed several functions, such as 'archiving', 'registration', 'dissemination', and 'certification' (there are others, but these four have traditionally defined the journal). However, it is increasingly evident that these functions do not all need to take place within a single journal 'container'; instead, each function can be broken off and handled by a separate entity.

Several interesting companies are emerging to take advantage of this way of thinking. Although each service can be thought of as an attempt to 'disaggregate' the journal, these services are also being used by existing journals to enhance and extend their features and functionality. As with preprint servers, none of these products is really thought of as a journal, but it is certainly useful to be aware of them when considering the future development of the journal model:

Rubriq is a company which attempts to provide 'third-party peer review': authors submit to Rubriq, who then solicit (and pay) appropriate peer reviewers to provide structured reports back to the author. Another example of this kind of thinking is Peerage of Science (Fig. 3). By using services like these, authors can either improve their articles before submitting to a journal, or attempt to supply the solicited peer reviews to the journal of their choice in the hope that this will shorten the review process.[1]

'Alt-metrics'[2] or 'Article-Level Metrics' (ALM) are tools which aim to provide third-party, 'crowdsourced' evaluation of published articles (or other scholarly content). If we envision a future state of the publishing industry in which most content is Open Access, and a large proportion of it has been published in a megajournal, then there are clearly considerable opportunities for services which can direct readers to the most relevant, impactful, or valuable content for their specific needs. Pioneered by the likes of PLOS[3] (who made their ALM program entirely open source), the field is being pushed forward by newly founded companies. The major players in this space are Impact Story (previously Total Impact), Altmetric, and Plum Analytics; all of them attempt to collate a variety of alt-metrics from many different sources and present them back at the level of the individual article (Fig. 4).
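
To make the collation step concrete, the short sketch below shows how article-level metrics for a single article might be retrieved programmatically. It is a minimal illustration only, assuming the public Altmetric v1 endpoint (api.altmetric.com/v1/doi/<doi>); the DOI shown is a placeholder, and the response field names used here are assumptions that should be checked against the service's current documentation.

  # Minimal sketch: fetch article-level metrics for one DOI from the public
  # Altmetric v1 API. The endpoint and field names are assumptions based on
  # the service's free tier, not a definitive integration.
  import json
  import urllib.error
  import urllib.request

  def fetch_altmetrics(doi: str) -> dict:
      """Return the Altmetric JSON record for a DOI, or {} if not tracked."""
      url = f"https://api.altmetric.com/v1/doi/{doi}"
      try:
          with urllib.request.urlopen(url) as resp:
              return json.load(resp)
      except urllib.error.HTTPError as err:
          if err.code == 404:  # the article is not tracked by the service
              return {}
          raise

  record = fetch_altmetrics("10.1038/news.2011.490")  # placeholder example DOI
  if record:
      # 'title' and 'score' (a composite attention score) are typical fields.
      print(record.get("title"), record.get("score"))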

One particularly interesting provider of 'alt-metrics' is F1000Prime[4] (from the 'Faculty of 1000'). Previously known simply as 'Faculty of 1000', F1000Prime makes use of a very large board of expert academics who are asked to evaluate and rate published articles regardless of publisher. In this way they form an independent review board which attempts to direct readers towards the articles of most significance. Although F1000Prime evaluates perhaps only 2% of the literature (and hence is not comprehensive), it is clear that an approach like this fits well with the concept of disaggregating the role of 'content evaluation' (or 'content filtering') from the journal process.

Fig. 3 The Peerage of Science homepage

Fig. 4 The Impact Story homepage, highlighting which sources they index and how they present metrics for a single article

Conclusion

As can be seen from this chapter, there is a great deal of experimentation happening in the journal space today. It remains to be seen which experiments will succeed and which will fail, but it is certainly the case that the last decade has seen more experimentation around the concept of the journal than the entire 350-year history that preceded it. We live in interesting times.

  • [1] Disclosure: the author of this chapter is on the (unpaid) Advisory Panel of Rubriq.
  • [2] Altmetrics Manifesto: altmetrics.org/manifesto/
  • [3] PLOS Article-Level Metrics: article-level-metrics.plos.org/
  • [4] F1000Prime: f1000.com/prime
 