Assessing the Big Three higher education ranking systems: Broad issues and ARWU detail

Introduction

The increasing marketisation of higher education, the greater mobility of students and, ultimately, the recruitment of foreign students have gathered pace since 2000 (Harvey, 2008). Countries derive substantial financial benefit from international students, but it is a highly competitive market and perceived status and reputation are important marketing tools (Harvey, 2008; Dill & Soo, 2005). Rankings are used extensively in university marketing campaigns (Connell & Saunders, 2012) as established assessment tools of university excellence (Taylor & Braddock, 2007). The arrival and growth of global university rankings has, perhaps inevitably, sparked debate about the nature and validity of the various HERS and their methodologies (Marope & Wells, 2013; Downing, 2012; Altbach, 2006a; Dill & Soo, 2005). The objections range from the philosophical to the pragmatic (Connell & Saunders, 2012).

The process of ranking institutions begins with data collection; a second step involves selecting the types of ranking and variables, followed by the selection of indicators and weightings, before data analysis is executed to produce a rank ordering (Merisotis & Sadlak, 2005). Rankings draw on different parameters, including publication and citation counts, student-faculty ratios, the percentage of international students, the number of awards and achievements, the number of research papers per faculty, web visibility and the number of articles published in high-impact journals, to name just a few (Aguillo, Bar-Ilan, Levene, & Ortega, 2010). Despite the wide range of opinions and arguments about the legitimacy of rankings, the appetite for them persists, and most experts agree that they are here to stay (Hazelkorn, 2014; Downing, 2012; Connell & Saunders, 2012). Therefore, the question now seems to be less about whether universities should be compared and ranked than about precisely how this is undertaken (Marope & Wells, 2013). This chapter begins a three-chapter critical analysis of what are widely considered the Big Three international higher education rankings: the Shanghai Academic Ranking of World Universities (ARWU), the QS World University Rankings (QS WUR) and the THE World University Rankings (THE WUR). Chapter 4 concludes with a more detailed critique of the ARWU, whereas Chapters 5 and 6 deal with detailed critiques of the QS WUR and THE WUR respectively.
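To make the aggregation step concrete, the minimal Python sketch below shows how a weighted composite of normalised indicator scores produces a rank ordering. The indicators, weights and scores are invented purely for illustration and do not reproduce the methodology of any actual HERS.

```python
# Illustrative sketch only: the indicators, weights and scores below are
# invented for demonstration and do not correspond to any actual HERS.

# Hypothetical indicator weights (summing to 1.0 in this simple scheme)
weights = {
    "publications": 0.40,
    "citations_per_paper": 0.30,
    "international_students_pct": 0.10,
    "awards": 0.20,
}

# Hypothetical normalised indicator scores (0-100) for three institutions
institutions = {
    "University A": {"publications": 85, "citations_per_paper": 70,
                     "international_students_pct": 40, "awards": 60},
    "University B": {"publications": 60, "citations_per_paper": 90,
                     "international_students_pct": 75, "awards": 30},
    "University C": {"publications": 95, "citations_per_paper": 55,
                     "international_students_pct": 20, "awards": 80},
}

# Weighted aggregation: composite score = sum(weight * indicator score)
composite = {
    name: sum(weights[k] * scores[k] for k in weights)
    for name, scores in institutions.items()
}

# Rank ordering: highest composite score first
for rank, (name, score) in enumerate(
        sorted(composite.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {name}: {score:.1f}")
```

Even in this toy example, changing a single weight can reorder the institutions, which anticipates the point made below that different indicators and weights inevitably produce different outcomes.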

A broad critique of ranking methodology

Different indicators and weights inevitably produce different outcomes. Holmes (2005) suggests that many of the indicators employed by the Shanghai rankings (ARWU) measure age, size and medical research, whilst the THE measures institutional income, with QS providing an opportunity for universities with local reputations to gain international visibility (Shastry, 2017). However, this is too simple a picture of all three. Scrutiny of ranking methodologies has increased considerably since 2009 (Baty, 2014), but some believe problems with rankings concern the practice, not the principle (Altbach, 2006a). Consequently, alongside the proliferation and influence of rankings outlined in the last chapter has come increasingly virulent criticism of their objectives and methodologies (Kaychen, 2013; Downing, 2013; Taylor & Braddock, 2007; Van Raan, 2005).

The subjective aspects of the ranking process, such as the list of university attributes used in the rankings, their respective weights, and the size and composition of the comparison group, are criticised regularly (Bougnol & Dula, 2015). The seemingly arbitrary manner in which ranking systems assign weights to indicators, sometimes without sound theoretical motivation, is also often criticised (Harvey, 2008; Savino & Usher, 2006). The fact that the arts, humanities and, to a large extent, the social sciences remain underrepresented in rankings is often blamed on unreliable bibliometric data (Hazelkorn, 2013). Anowar, Helal, Afroj, Sultana, Sarker, & Mamun (2015) suggest that larger institutions have an advantage in rankings because they may have more papers, citations, award-winning scientists, students, web links and funding. Some rankings suffer from focusing only on the research dimension, which is more visible and easier to measure using external (rather than institution-submitted) observations (Daraio, Bonaccorsi, & Simar, 2014). Moreover, Bekhradnia, the president of the Higher Education Policy Institute (HEPI), suggests that international rankings are one-dimensional because they measure only research activity, to the exclusion of almost everything else (O’Malley, 2016). Others believe that rankings are largely based on what can be measured rather than what is relevant and should be measured (Harvey, 2008; Altbach, 2006a).

Bougnol and Dula (2015, pp. 860-864) describe a variety of pitfalls in contemporary ranking systems. They criticise the use of "anti-isotonic attributes", where a weighting scheme applies positive weights to an attribute's value and so rewards larger magnitudes. They also refer to "rewarding inefficiency", a pitfall that occurs when inputs and outcomes in a ranking scheme are treated in the same way by assigning them both positive weights. They further criticise some rankings in terms of "transparency and reproducibility", whereby a ranking should ideally provide both the data as used in the calculation of the scores and the weights; unfortunately, not all ranking schemes live up to these ideals. Finally, in their analysis, problems with ranking schemes can result from "collinearity in the data": they argue that collinearity among attributes' data is a manifestation of excess information.
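The "rewarding inefficiency" pitfall can be illustrated with a small, purely hypothetical calculation: if an input such as funding receives a positive weight alongside an output, an institution that consumes more resources to produce the same output scores higher. The figures and weights in the sketch below are invented for the example and are not drawn from Bougnol and Dula's data.

```python
# Illustrative sketch of the "rewarding inefficiency" pitfall:
# all figures and weights are invented for this example.

# Two hypothetical institutions with identical research output (an outcome)
# but different resource consumption (an input).
data = {
    "University X": {"papers_per_staff": 2.0, "funding_millions": 50},
    "University Y": {"papers_per_staff": 2.0, "funding_millions": 200},
}

# A naive scheme that gives the input (funding) a positive weight,
# just like the output, so consuming more resources raises the score.
weights = {"papers_per_staff": 0.7, "funding_millions": 0.3}

for name, attrs in data.items():
    score = sum(weights[k] * attrs[k] for k in weights)
    print(name, round(score, 1))

# University Y scores higher purely because it spends more to produce
# the same output: the scheme rewards inefficiency.
```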

 