Higher Education Policy Institute

A report from the Higher Education Policy Institute (HEPI) identifies several issues that HERS (referring specifically to THE, QS, the Shanghai Rankings and U-Multirank) should consider in order to improve their annual assessments (Bekhradnia, 2017). One recommendation is that the institutional data provided by universities should be audited and validated and that, where institutional data do not exist, ranking bodies should refrain from data-scraping techniques. Another recommendation is that the international reputation surveys be dropped, with the HEPI report arguing that reputation surveys merely reinforce research performance and skew results in favour of a small number of institutions. The report also suggests that HERS should move away from research-related criteria and publish rankings in more detailed ways than simple numerical rankings (Bekhradnia, 2017).

However, Holmes (2017) criticises HEPI for focussing only on the QS WUR, THE WUR, Shanghai Rankings’ ARWU and U-Multirank, suggesting that this gives a misleading picture of the contemporary rankings landscape, which now also includes university rankings of various aspects from innovation to graduate employability. Holmes (2017) also argues that research-orientated ranking systems are not entirely useless for evaluating teaching and learning, because a good research reputation is likely to be associated with positive student and graduate outcomes such as satisfaction with teaching, course completion and employment. Furthermore, this critique suggests several reasons why QS and other HERS should not be apologetic for resorting to data-scraping techniques: information about universities is more likely to be correct if it comes from more than one source, if it is from a source independent of the HERS or the university, if it has been collected for reasons other than submission to the rankers, or if there are serious penalties for submitting incorrect data (Holmes, 2017).

In defence of the QS rankings, Sowter (2017) suggests that using information from university websites or data scraping is a more accurate approach than assuming zero. Whilst Holmes (2017) agrees with the HEPI report that the weightings attached to the reputation surveys employed by QS and THE are too high, he argues that students do value the perceptions of employers and professional schools and that surveys provide a reality check when universities are dishonest. Whilst acknowledging significant imperfections in the ranking methodologies, especially with regard to ‘measuring’ teaching and outreach, Baty (2017) addresses some of the recent criticisms in the HEPI report. He asserts that the THE ranking is not designed to be an end in itself, but rather an output from what he describes as one of the world’s most sophisticated databases of higher education performance data, where the weightings and methodologies were developed in consultation with universities, governments and academics. Similarly, Sowter (2017) admits that the QS Rankings are imperfect but refutes the claims made about inadequate data audits, adding that auditing is one of the costliest and most time-consuming aspects of the rankings process and is nonetheless taken very seriously. Others more involved in delivering higher education question the value of rankings and argue that they can only measure a narrow slice of what quality higher education is about (Redden, 2013). Both QS and THE continue to expand their online interfaces and functionality to compare several aspects of the rankings, which can be filtered by geography and/or other dimensions (Baty, 2017; Sowter, 2017).

Selecting appropriate methodology

The selection of an appropriate methodology is crucial to any attempt to capture and summarise the interactions amongst the individual indicators included in a composite measure or a ranking system. Consequently, none of the ranking systems is perfect; each has inadequacies and weaknesses (Anowar et al., 2015). Every ranking, including the Big Three of QS, THE and ARWU, is regularly criticised (Hazelkorn, 2013; Rauhvargers, 2013; Downing, 2013). It is perhaps impossible for rankings to capture all institutions on a single scale, and Anowar et al. (2015) therefore suggest that an appropriate approach should be developed to evaluate different institutions effectively. An alternative strategy available to critics of rankings is to encourage the proliferation of rankings with different methodologies, different weightings and different orientations (Scott, 2013). This strategy takes the view that no single ranking can ever be satisfactory, but a plurality of rankings may begin to capture the diversity of twenty-first-century higher education (Scott, 2013). What cannot be in doubt is that, over the last decade or so, HERS have gained considerable experience in responding to criticism, with the consequence that some of the rankings have decided to refine aspects of their methodology (Griffin, Sowter, Ince, & O’Leary, 2018). Having outlined the broader and more general criticisms of HERS and the Big Three ranking systems, the final section of this chapter includes an analysis of the first of the Big Three rankings, the ARWU. A similar analysis of the remaining two HERS in the Big Three, the QS WUR and THE WUR, is undertaken in Chapters 5 and 6 respectively, which also consider their approaches as well as recent amendments made to their methodologies.

 