Technical issues with citation databases

The Big Three ranking systems (ARWU, THE WUR and QS WUR) make extensive use of citation databases. Citation impact is still determined more reliably through indicators that measure the proportion of articles in intensively cited journals, and this favours the fields in which such articles are concentrated, namely medicine, the natural sciences and engineering (Waltman et al., 2011). Another criticism of citation impact concerns the measurement time frame: a specific period must be chosen in order to count and compare citations between institutions, and choosing too long a period can produce results that do not reflect an institution’s current state (Anowar et al., 2015). The most central technical process on which citation analysis rests is the matching of citing publications with cited publications. This ‘identification by matching’ process is done by referees (Van Raan, 2005). A considerable amount of error occurs in the citing-cited matching process, sometimes leading to a loss of citations for a specific publication (Van Raan, 2005). These ‘non-matching’ citations are distributed highly unevenly, and in certain situations the proportion of lost citations can rise to 30% (Moed, 2002). Van Raan (2005) also points to a further problem: the name of the associated organisation or university is sometimes incorrectly attributed to a publication, especially where several variant names for an institution exist. This occurs frequently when medical schools, graduate schools or research organisations are named instead of the university where the research actually takes place (Van Raan, 2005).
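To make the matching problem concrete, the following minimal sketch (with invented reference strings and an arbitrary similarity threshold, neither of which reflects the actual procedures of any citation database) shows how small variations in a cited-reference string can cause a citation to go unmatched and therefore be lost.

```python
# Minimal sketch of the citing-cited matching step described above.
# All records and the matching rule are illustrative assumptions, not
# the procedure used by any actual citation database.
from difflib import SequenceMatcher

# A cited-reference string as it appears in a citing paper's reference list.
cited_reference = "Smith J, J Appl Phys 92 (2002) 1154"

# Candidate source records held by the citation database.
source_records = [
    "Smith J, J Appl Phys 92 (2002) 1154",                # identical record
    "Smith J, Journal of Applied Physics 92 (2002) 1154", # journal title written in full
    "Smith J, J Appl Phys 92 (2002) 1145",                # page-number typo
    "Smyth J, J Appl Phys 92 (2002) 1154",                # misspelled author name
]

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# A strict threshold mimics near-exact matching: variant journal abbreviations,
# page-number typos and misspelled author names fall below the cut-off and the
# citation is simply lost (a 'non-matching' citation).
THRESHOLD = 0.98
for record in source_records:
    score = similarity(cited_reference, record)
    status = "matched" if score >= THRESHOLD else "lost citation"
    print(f"{score:.2f}  {status}: {record}")
```

Real matching rules are far more sophisticated than this sketch, but it illustrates why minor variations in reference strings translate directly into ‘non-matching’ citations.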

Larger institutions have the advantage of being able to rely on their strong citation background should changes occur, but Anowar et al. (2015) mention another methodological shortcoming related to citation impact. They suggest that credit allocation has thus far not been adequately distributed across ranking parameters. Some researchers suggest that an equally cited paper authored by several institutions should be given more credit than a similarly cited paper authored by one institution (Ioannidis et al., 2007).
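The credit-allocation issue can be illustrated with a brief hypothetical sketch contrasting ‘full counting’ and ‘fractional counting’ of a multi-institution paper; the figures and scheme names are illustrative assumptions rather than the methodology of any particular ranking.

```python
# Hypothetical sketch of two common credit-allocation schemes for a paper
# co-authored by several institutions; the numbers are invented.

citations = 100                       # citations received by the paper
institutions = ["Univ A", "Univ B", "Univ C"]

# Full counting: every institution receives the paper's full citation count.
full_counting = {inst: citations for inst in institutions}

# Fractional counting: the credit is split evenly across the institutions.
fractional_counting = {inst: citations / len(institutions) for inst in institutions}

print("full counting:      ", full_counting)        # 100 each
print("fractional counting:", fractional_counting)  # ~33.3 each
```

Which scheme a ranking adopts therefore determines how much a multi-institution paper contributes to each participating institution’s score.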

Citation databases also attempt to separate the various scientific fields, but this is unavoidably imperfect. Scientists whose work is more multidisciplinary have greater difficulty passing the highly cited threshold in any one field, while within the same field scientists in sub-fields with higher citation densities have an advantage (Ioannidis et al., 2007). In addition, ‘review’ articles are often cited more frequently than ‘original’ research articles (Patsopoulos, Analatos, & Ioannidis, 2005). It is also widely accepted that language has an impact on publication acceptance and, consequently, citation. Most citation indices are in English and are more likely to include journals published in that language (Soh, 2015), so these journals are more readily represented in the larger academic bibliometric systems. Altbach (2006b) points out that US scientists prefer to cite other US scientists, which may lead to an artificial boost to the rankings of US institutions. Van Raan (2005) suggests that professional bibliometricians should act as advisors rather than number crunchers in order to add value to the peer review process and to avoid misleading uses that can damage universities, institutes and individual scientists. Properly designed and constructed, bibliometric indicators can be applied as a powerful support tool for peer review (Van Raan, 2005).
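The effect of differing citation densities can be shown with a small hypothetical example of field normalisation, a standard bibliometric correction in which a paper’s citations are divided by the average for its field; the field averages below are invented for illustration only.

```python
# Illustrative sketch of field normalisation; the field averages are
# invented and do not come from any real citation database.

field_average_citations = {"cell biology": 40.0, "mathematics": 6.0}

def normalised_impact(citations: int, field: str) -> float:
    """Raw citations divided by the (assumed) world average for the field."""
    return citations / field_average_citations[field]

# The same raw count of 20 citations is below average in a high-density
# field but well above average in a low-density one.
print(normalised_impact(20, "cell biology"))  # 0.5
print(normalised_impact(20, "mathematics"))   # ~3.33
```

Raw counts therefore systematically favour researchers concentrated in high-density fields and sub-fields, which is precisely the advantage noted above.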

Criticising ranking methods

There is a body of literature highlighting the methodological problems of rankings (Goglio, 2016). One of the most common and most vociferous complaints about university rankings is their use of reputation surveys (Rauhvargers, 2014; Baty, 2011). This indicator may be a mere symptom of excellence, as it favours world-renowned institutions and does not represent current research performance (Baty, 2014). The response rate is relatively low (Bekhradnia, 2017), and most current reputation surveys only reinforce the existing reputation and prestige of particular universities (De Witte & Hudrlikova, 2013; Downing, 2013; Bowman & Bastedo, 2010).

Marginson (2007) argues that the measures of internationalisation some ranking systems employ are a better indicator of a university’s marketing function than of the international quality of its researchers. Internationalisation indicators usually incentivise quantity over quality and often simply reflect a country’s geographic position (Altbach & Hazelkorn, 2017). Additionally, universities in English-speaking countries have the advantage of being able to recruit both native and non-native English-speaking academics from around the world (Rauhvargers, 2014; Kaychen, 2013; Toutkoushian, Teichler, & Shin, 2011).

Student-staff ratios are easily manipulated by institutions (Baty, 2014), and a lack of internationally standardised definitions makes it difficult to make valid comparisons across universities and countries (Waltman et al., 2011; Ioannidis et al., 2007). Changes in the formulae used to compile student-staff ratios can produce substantial changes in rank from year to year (Harvey, 2008). In addition, when the ratio is used as an indicator of teaching quality, little attempt is made to separate out the research effort of staff (Bekhradnia, 2017). The quality of teaching and learning and the ‘value added’ during the educational process elude comparative measurement (Dill & Soo, 2005; Liu et al., 2005).
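How sensitive the indicator is to definitional choices can be shown with invented figures: simply deciding whether staff are counted by headcount or by full-time equivalent, and whether research-only staff are included, yields quite different ratios for the same institution.

```python
# Hypothetical illustration of how definitional choices move a
# student-staff ratio; all figures are invented.

students_fte = 20000
teaching_staff_headcount = 1250      # includes many part-time staff
teaching_staff_fte = 900             # the same people expressed as full-time equivalents
research_only_staff_fte = 300        # counted as 'academic staff' in some returns

# Three defensible definitions, three quite different ratios.
print(students_fte / teaching_staff_headcount)                        # 16.0
print(students_fte / teaching_staff_fte)                              # ~22.2
print(students_fte / (teaching_staff_fte + research_only_staff_fte))  # ~16.7
```

Because each definition is defensible, a change of formula alone can move an institution’s position without any underlying change in teaching provision.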

 