‘Lies, damned lies and statistics’
This chapter is perhaps controversially titled after a much-misquoted phrase, which seems particularly apt when describing the view many academics hold of Higher Education Ranking Systems or HERS for short. Perhaps the earliest citation of the phrase can be attributed to Arthur James Balfour, 1st Earl of Balfour, as quoted in the Manchester Guardian in 1892:
Professor [Joseph] Munro reminded him of an old saying which he rather reluctantly proposed, in that company, to repeat. It was to the effect that there were three gradations of inveracity - there were lies, there were damned lies, and there were statistics.
(Manchester Guardian, 29 June 1892)
HERS are often treated with contempt by many academics, who are quick to point out methodological and other flaws (Altbach, 2006a; Dill & Soo, 2005; Downing, 2012). Even amongst academic supporters of rankings, HERS are treated with suspicion and criticised for a wide variety of reasons, from the philosophical to the practical. The publication of ranking lists is now greeted with trepidation by many university presidents and is often followed by intense questioning from the media, who are interested to know what lies behind any annual rise or fall in rank on the global, regional or local stage. Competition between universities has undoubtedly intensified with the rise and expansion of HERS, and many researchers agree that HERS, and the publication of annual rankings, have influenced all participating institutions to some extent (Espeland & Sander, 2015; Hazelkorn & Ryan, 2013; Rauhvargers, 2014). The growing interest in HERS has sparked debate about the nature and validity of the various HERS and their methods (Altbach, 2006a; Dill & Soo, 2005; Downing, 2012). HERS use different parameters/indicators, including publication and citation counts, student/faculty ratios, the percentage of international faculty and students, the number of awards and achievements, the number of research papers per faculty member, web visibility and the number of articles published in high-impact journals, to name but a few (Aguillo, Bar-Ilan, Levene & Ortega, 2010).
Controversy and criticism
Higher education has long been dominated by a reputational hierarchy of institutions that now sustains and reinforces HERS (Locke, 2014; Rauhvargers, 2014). Marginson (2007) points out that rankings both reflect prestige and power and confirm, entrench and reproduce them. Rankings are criticised for many reasons, including the use of mostly quantitative indicators, the use of proxies to represent the quality of teaching and learning, and the reliance on publications written in English (Kehm, 2014; Rauhvargers, 2014). Despite the range of opinions and arguments about the legitimacy of rankings as a construct, experts generally agree that they are here to stay (Downing, 2012; Hazelkorn, 2014). Therefore, the current question is less about whether universities should be compared and ranked than about the manner in which this is undertaken (Marope & Wells, 2013).
Scrutiny of HERS methodologies has increased considerably since 2009 (Baty, 2014). A frequent criticism of HERS is that many ranking systems rely on poor indicators, such as reputational indicators, despite increased criticism from peers (Rauhvargers, 2014). The arts, humanities and, to a large extent, the social sciences remain underrepresented in rankings because of unreliable bibliometric data (Hazelkorn, 2013). Citation impact is still determined more reliably through indicators that measure the proportion of articles in intensively cited journals, and thus favours those fields in which such articles are concentrated, namely medicine, the natural sciences and engineering (Waltman et al., 2011). Marginson (2007) argues that the measures of internationalisation some ranking systems employ are a better indicator of a university’s marketing function than of the international quality of its researchers. Student-to-faculty ratios are easily manipulated by institutions (Baty, 2014). The quality of teaching and learning and the ‘value added’ during the educational process elude comparative measurement (Dill & Soo, 2005; Liu & Cheng, 2005). A lack of internationally standardised definitions makes it difficult to make valid comparisons across universities and countries (Waltman et al., 2011). Another problem, according to Rozman and Marhl (2008), relates to the different cultural, economic and historical contexts in which various higher education institutions function. Even some leaders of HERS acknowledge that universities and their characteristics can differ greatly, no matter where they are ranked in the various ranking systems (Sowter, 2013). Consequently, particularly at the international level, there should be an awareness of possible biases, and the objectives of rankings must be clearly defined. Scott (2013) elaborates on other shortcomings of ranking methodology and identifies four key points:
• Ranking data are often used for other purposes, such as resource distribution.
• More generously funded institutions can attract higher-quality students, which would most probably lead to higher employment ratings.
• There is a dearth of reliable data about teaching, the primary function of the university.
• Ranking systems subjectively and deliberately attach weightings representing the relative worth of each ranking criterion.
Judgements and decisions based on university rankings should be made with knowledge and a clear understanding of the methodology utilised during the ranking process (Liu, 2013). Sowter (2013) admits that all ranking critiques have validity; however, HERS have contributed to transparency and accountability among institutions and have fostered a culture of performance evaluation in higher education. Despite volumes of criticism and boycotts by some universities and schools, rankings have become a popular reference point for decision and policy makers (Hazelkorn, 2014). They have also produced their antithesis in the form of alternatives and have sparked a conversation about the role, value and contribution of higher education (Hazelkorn, 2014).