Global rankings attract global attention

Despite widespread criticisms of the scientific and cultural bases of most ranking systems, it cannot be denied that they continue to attract considerable and growing attention from institutions, academics, employers, students and their parents. However, a review of the literature reveals relatively few comparative studies of international academic ranking systems (e.g., Aguillo et al., 2010; Buela-Casal, Gutierrez-Martinez, Bermudez-Sanchez, & Vadillo-Munoz, 2007; Dill & Soo, 2005; Provan & Abercromby, 2000; Usher & Savino, 2006; van Dyke, 2005; Loocke, 2019). Some commentators point out that many universities actively participate in global and regional rankings in order to benefit from hopefully positive publicity and to attract students (Provan & Abercromby, 2000), whilst others suggest that rankings exert a negative impact on university performance (Dill & Soo, 2005). Still others participate with reluctance, arguing that they have little or no choice. Critiques from other academics concentrate on the selection of indicators, the assignment of weights and the statistical insignificance of differences between institutions (Dill & Soo, 2005; Provan & Abercromby, 2000). Buela-Casal et al. (2007) compared four international academic rankings of universities (Shanghai ranking, Times supplement ranking, CEST Scientometrics Ranking and Asiaweek), whilst Aguillo et al. (2010) compared the Shanghai ranking, Times Higher ranking, Taiwan ranking, Leiden ranking and Webometrics ranking using a set of similar measures. Perhaps not too surprisingly, these studies generally find reasonable similarities between the outcomes reported in the rankings, even though each applies a different methodology, often focused primarily on research and academic reputation.
Despite these similarities identified in the academic literature, different ranking systems are driven by different purposes, target different audiences, assess different parameters and depend upon the availability of relatively common data (van Dyke, 2005) across different geographical locations and cultures (Usher & Savino, 2006).

Similarities between the Big Three

Earlier chapters have clearly demonstrated the dominance of three global university ranking systems in world and regional higher education ranking. The oldest system, by one year, is the Academic Ranking of World Universities (ARWU), prepared by Shanghai Jiao Tong University (SJTU) and first published in 2003. It was followed by the World University Rankings by Quacquarelli Symonds (QS), first published in 2004 with Times Higher Education as media partner. In 2010, following a carefully orchestrated media fanfare, the Times Higher Education Supplement published its own set of World University Rankings, the Times Higher Education Survey (THES). Subsequently, the ARWU, QS and THE have established themselves as the three global rankings of significance. Detailed information about their various criteria, weightings and scoring systems is available from earlier chapters and the respective websites of these ranking systems, so it will not be repeated here. However, it is worth sharing some historical and more recent information about the Big Three. The Academic Ranking of World Universities (ARWU) was first published in June 2003 by the Center for World-Class Universities and the Institute of Higher Education at Shanghai Jiao Tong University in China, and has been updated on an annual basis since. From 2009, the ARWU has been published by the Shanghai Ranking Consultancy, a fully independent organisation (www.arwu.org/). The Quacquarelli Symonds World University Rankings (QS-WUR) was initially published by the Times Higher Education Supplement in 2004, but has operated very successfully on a wholly independent basis since 2010. Finally, the newest system, the Times Higher Education World University Rankings (THE-WUR), has been published by the Times Higher since 2010. Whilst some might try to hide it, it is a fact that all three HERS are produced by highly successful commercial organisations, with none able to claim the moral high ground in this aspect of their operations.

Differences between the Big Three

There are substantial differences between each of the Big Three university league tables in terms of what they measure, how they measure it and in their implicit definition of ‘quality’ (Usher & Savino, 2006). Each of these three HERS has successfully captured global media attention through a variety of means, whilst also attracting fierce criticism and, in some cases, praise. Unlike the criticism, which also comes from the global academic community, praise does not usually come from their fellow ranking organisations. The ARWU is often criticised for being too mono-dimensional in the modern higher education landscape, being based solely on research-related criteria. For example, using tools and concepts from Multiple Criteria Decision Making, Billaut et al. (2010) concluded that ARWU did not qualify as a useful and pertinent tool for discussing the ‘quality’ of academic institutions, let alone as something which could be used to guide the choice of students and their families, or to promote improvements in higher education systems. Furthermore, Dehon et al. (2010) employed robust principal component analysis to uncover two different and uncorrelated aspects of academic research: overall research output and top-notch researchers. They concluded that the relative weight placed upon these two factors determined to a large extent the final ranking an institution received in the ARWU. The QS-WUR gives great weight to academic peer review, which inevitably attracts the criticism of strong regional bias, by virtue of the fact that its peer review assessors are likely to rate universities in their own region rather than across the world (Taylor & Braddock, 2007). Immediately after the publication of the THE-WUR in 2010, it attracted robust global questioning related to anomalies, missing

institutions, transparency and validity. The then President of Nanyang Technological University in Singapore, Su Guaning, commented on the new THE system:

A detailed analysis reveals it is 88% computed from research-related indicators with unusual normalisation of data producing some bizarre results.... For rapidly rising universities, the results of recent world-class research work are not immediately captured in rankings such as Times Higher Education.

(Taylor & Braddock, 2007)

Taylor and Braddock (2007) concluded in their study that ARWU was a better indicator of university excellence than QS-WUR, while Downing (2010a) thought the QS-WUR allowed more opportunities for ambitious young universities to be noticed in the world rankings than ARWU. Marginson (2007b) critiqued both ranking systems, canvassed their methodological difficulties and problems, and advocated a better approach: what he termed ‘clean’ rankings, transparent, free of self-interest and methodologically coherent (Marginson & van der Wende, 2007). As international league tables like the QS-WUR and ARWU compete for dominance (IHEP, 2007; Usher & Savino, 2006), there is ongoing debate about criteria for inclusion (the role of medical centres and hospitals), weightings (a priori models), variable interdependencies (correlation among bibliometric measures) and size components (classification of universities), amongst other topics. Regardless of the individual methods employed by each HERS, most ranking systems share common limitations:

The main problems are that most rankings purport to evaluate universities as a whole, negating any internal differentiation, and that the weightings used to construct composite indices covering various aspects of quality or performance may be arbitrary and biased in favor of research, while providing little (or no) guidance on the quality of teaching. Research performance measures tend to be biased towards the natural and medical sciences and the English language, enhancing the status of comprehensive research universities in major English-speaking countries.

(van der Wende & Westerheijden, 2009, p. 72)
