Highly cited researchers as an indicator of staff quality

The ARWU utilises a list of highly cited researchers compiled by Thomson Reuters (ShanghaiRanking Consultancy, 2003). The list contains the names of the top-cited researchers in 21 sub-categories (ShanghaiRanking Consultancy, 2003). Van Raan (2005) stresses the reliance of ARWU on the choices made by Thomson Scientific, which compiles the list for a different purpose. The list is criticised as a ranking indicator because it seems to favour medicine and biology (Billaut et al., 2010). Billaut et al. (2010) also remark that the 21 categories are not uniform in size: the number of journals used in each category varies, as does the physical size of the journals. Similar to the criticism of the prize-based staff quality indicator (Ioannidis et al., 2007; Huang, 2011), using the Highly Cited Researchers list may also lead to staff being recruited in an attempt to gain ranking advantage. For example, Bhattacharjee (2011) reported that more than 60 ‘highly cited’ researchers from various disciplines signed part-time employment arrangements with a university that offered financial incentives in exchange for the researchers adding its name to their affiliations.

During 2014 Thomson Reuters announced a revision to the process used to identify Highly Cited Researchers, to make the methodology consistent with the Essential Science Indicators process and to respond to community feedback about the output of the highly cited researcher process vetted and published in 2012 (Cram & Docampo, 2014). The revised list identifies researchers field by field (Thomson Reuters, 2014). Thomson Reuters (2014) explain their motivation for updating the list:

Table 4.2 New highly cited researchers list methodology

First, to focus on more contemporary research achievement, only articles and reviews in science and social sciences journals indexed in the Web of Science Core Collection during the 11-year period 2002-2012 were surveyed. Second, rather than using total citations as a measure of influence or ‘impact’, only Highly Cited Papers were considered. Highly Cited Papers are defined as those that rank in the top 1% by citations for field and year indexed in the Web of Science, which is generally but not always the year of publication. These data derive from Essential Science Indicators (ESI). The fields are also those employed in ESI - 21 broad fields defined by sets of journals and exceptionally, in the case of multidisciplinary journals such as Nature and Science, by a paper-by-paper assignment to a field. This percentile-based selection method removes the citation disadvantage of recently published papers relative to older ones, since papers are weighed against others in the same annual cohort.

Those researchers who, within an ESI-defined field, published Highly Cited Papers were judged to be influential, so the production of multiple top 1% papers was interpreted as a mark of exceptional impact. Relatively younger researchers are more apt to emerge in such an analysis than in one dependent on total citations over many years. To be able to recognise early and mid-career, as well as senior, researchers was one goal for generating the new list. The determination of how many researchers to include in the list for each field was based on the population of each field, as represented by the number of author names appearing on all Highly Cited Papers in that field, 2002-2012. The ESI fields vary greatly in size, with Clinical Medicine being the largest and Space Science (Astronomy and Astrophysics) the smallest. The square root of the number of author names indicated how many individuals should be selected.

The first criterion for selection was that the researcher needed enough citations to his or her Highly Cited Papers to rank in the top 1% by total citations in the ESI field in which they were considered. Authors of Highly Cited Papers who met the first criterion in a field were ranked by number of such papers, and the threshold for inclusion was determined using the number derived through calculation of the square root of the population. All who published Highly Cited Papers at the threshold level were admitted to the list, even if the final list then exceeded the number given by the square root calculation. In addition, and as a concession to the somewhat arbitrary cut-off, any researcher with one fewer Highly Cited Paper than the threshold number was also admitted to the list if total citations to his or her Highly Cited Papers were sufficient to rank that individual in the top 50% by total citations of those at the threshold level or higher. The justification for this adjustment at the margin is that it seemed to work well in identifying influential researchers, in the judgment of Thomson Reuters citation analysts.
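The selection procedure described above is concrete enough to sketch in code. The following Python sketch illustrates one reading of it for a single ESI field: the square-root sizing rule, the threshold cut, and the margin adjustment. The author data, the field population, and the reading of ‘top 50% by total citations’ as at-or-above-the-median are illustrative assumptions, not Thomson Reuters' actual implementation.

```python
import math

# Hypothetical records: each author's count of Highly Cited Papers (HCPs)
# in the field, 2002-2012, and total citations to those papers. The input
# is assumed to be pre-filtered to authors who already meet the first
# criterion (top 1% by total citations to HCPs in the field).
authors = [
    {"name": "A", "hcp_count": 14, "hcp_citations": 9200},
    {"name": "B", "hcp_count": 11, "hcp_citations": 7400},
    {"name": "C", "hcp_count": 11, "hcp_citations": 5100},
    {"name": "D", "hcp_count": 10, "hcp_citations": 8800},
    {"name": "E", "hcp_count": 10, "hcp_citations": 2300},
]

def select_highly_cited(eligible, author_name_population):
    """Sketch of the per-field selection. `author_name_population` is the
    number of author names on all Highly Cited Papers in the field."""
    # List size for the field: square root of the author-name population.
    target = round(math.sqrt(author_name_population))

    # Rank by number of Highly Cited Papers; the threshold is the HCP
    # count of the author at the target position.
    ranked = sorted(eligible, key=lambda a: a["hcp_count"], reverse=True)
    threshold = ranked[min(target, len(ranked)) - 1]["hcp_count"]

    # Everyone at or above the threshold is admitted, even if this pushes
    # the list past the square-root target.
    selected = [a for a in ranked if a["hcp_count"] >= threshold]

    # Margin adjustment: an author with one fewer HCP is also admitted if
    # their HCP citations rank in the top 50% (read here as at or above
    # the median) of citations among those already at the threshold level
    # or higher.
    median = sorted(a["hcp_citations"] for a in selected)[len(selected) // 2]
    selected += [a for a in ranked
                 if a["hcp_count"] == threshold - 1
                 and a["hcp_citations"] >= median]
    return selected

print([a["name"] for a in select_highly_cited(authors, author_name_population=9)])
```

On this toy data the square-root target of three admits the three most prolific authors (A, B, C), and the margin adjustment then adds D, whose citations exceed the median of the initial group, printing ['A', 'B', 'C', 'D'].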

 