Assessing the Big Three higher education ranking systems: Broad issues and THE WUR detail

Introduction

The latest Times Higher Education World University Rankings 2021, released in September 2020, ranked the top 1,500 universities in the world across 93 countries. This annual edition once again features more universities than in the past and represents more than 5% of the 20,000-plus higher education institutions in the world. The methodology for this ranking is based on 13 performance indicators clustered into five pillars: Teaching, Research, Citations, Industry Income and International Outlook (Masterportal.eu, 2014). According to THE, calculation of the WUR is subject to an independent audit by the professional services firm PricewaterhouseCoopers (PwC) (TES Global Ltd., 2018). In common with QS, the THE WUR also receives criticism for its reliance on reputational surveys and the relatively small number of peer respondents. The most recent Academic Reputation Survey (run annually) examined the perceived prestige of institutions in teaching and research, with responses statistically representative of the geographical and subject mix of academics globally. The 2020 data are combined with the results of the 2019 survey, giving more than 22,000 responses; in comparison, the QS WUR academic survey claims to be based on over 100,000 responses. The other major criticism regularly levelled at the THE WUR is the lack of any indicator related to graduate employment, which QS covers via its employer reputation survey of around 50,000 responses.

The THE World University Rankings (WUR)

The Times Higher Education World University Ranking (THE WUR) was first published in 2004 by THE in cooperation with QS. After 2009, THE ended its cooperation with QS and started working with Thomson Reuters (Rauhvargers, 2013). Up to that point QS had largely been responsible for data collection and analysis, with THE as the media partner. In developing the new THE ranking system, Thomson Reuters carried out a global opinion survey to find out what higher education professionals and student consumers of rankings thought of the existing rankings (Adam & Baker, 2010). These consumers suggested that more information should be provided by any new ranking on all characteristics of the ranking process and indicators. This survey report also provided THE with information on what indicators consumers valued (Baty, 2014). The new methodology of THE's World University Rankings sets out to examine only a globally competitive, research-led elite group of institutions:

Higher education is global. THE is determined to reflect that. Rankings are here to stay. But we believe universities deserve a rigorous, robust and transparent set of rankings - a serious tool for the sub-sector, not just an annual curiosity.

(Mroz, 2009, p. 5)

THE WUR methodology

Similarly to QS, the Times Higher Education (THE) ranking excludes universities which do not teach undergraduates; which are highly specialised (teaching only a single narrow subject); or which have published fewer than 1,000 titles over a five-year period, or fewer than 150 in any given year. Universities are also excluded if 80% or more of their output falls exclusively within one of the 11 subject areas. THE used to partner with Thomson Reuters to obtain institutional data but has since moved this task in-house; data collection is now carried out by a team of data analysts at THE (Elsevier, 2014).
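To make the combined exclusion criteria concrete, the sketch below applies them to a hypothetical institution. It is an illustration only: the field names and input layout are invented for this example and are not THE's actual data schema.

```python
# A minimal sketch of the stated eligibility rules, assuming per-institution
# publication counts are already available. Field names and the input format
# are invented for illustration.
def is_eligible(teaches_undergraduates: bool,
                yearly_publications: list,   # publication counts for the last five years
                share_of_output_by_subject: dict) -> bool:
    """Apply the exclusion criteria described above."""
    if not teaches_undergraduates:
        return False
    # Fewer than 1,000 titles over the five-year window, or fewer than 150
    # in any single year, leads to exclusion.
    if sum(yearly_publications) < 1000 or min(yearly_publications) < 150:
        return False
    # Exclusion if 80% or more of output sits within a single subject area.
    if max(share_of_output_by_subject.values()) >= 0.80:
        return False
    return True

# Hypothetical institution that clears all three thresholds.
print(is_eligible(True, [220, 240, 260, 250, 230],
                  {"physical sciences": 0.40, "engineering": 0.35, "other": 0.25}))
```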

Bookstein, Seidler, Fieder, & Winckler (2010) analysed several indicators of the THE methodology. They found that the correlation between the staff/student ratio in 2007 and the staff/student ratio in 2009 was about 0.84. However, two distinct subgroups are evident within the data. The first group represents a set of institutions whose scores stay relatively stable, whilst the second group's scores change radically from year to year. This major year-to-year change is probably indicative of changes in definition, interpretation or data submission, rather than changes in organisational membership (Bookstein et al., 2010). A major issue for both THE and QS is perhaps that all ratio-based indicators are subject to changes in definition and interpretation and open to manipulation and abuse by a few unscrupulous submitting institutions.
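The kind of analysis Bookstein et al. describe can be illustrated with a short sketch: compute the year-to-year correlation of a ratio-based indicator and flag the volatile subgroup. The data below are synthetic and the 15-point threshold is an arbitrary choice for illustration, not part of their study.

```python
# Illustrative sketch (not Bookstein et al.'s actual analysis): comparing an
# institution-level ratio indicator across two ranking years and flagging
# the volatile subgroup. All data are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical staff/student ratio scores for 100 institutions in two years.
scores_2007 = rng.uniform(20, 100, size=100)
# Most institutions move only slightly; a few shift dramatically, e.g.
# because of changed definitions or changed data submissions.
noise = rng.normal(0, 3, size=100)
noise[:10] += rng.normal(0, 30, size=10)
scores_2009 = np.clip(scores_2007 + noise, 0, 100)

# Year-to-year correlation of the indicator (Bookstein et al. report ~0.84).
r = np.corrcoef(scores_2007, scores_2009)[0, 1]
print(f"correlation between years: {r:.2f}")

# Flag the 'volatile' subgroup: institutions whose score moved by more
# than, say, 15 points between editions.
volatile = np.abs(scores_2009 - scores_2007) > 15
print(f"institutions with large year-to-year shifts: {volatile.sum()}")
```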

The academic reputation survey as part of both teaching and research pillars

As indicated in Table 6.1, Times Higher Education utilises an Academic Reputation Survey as an indicator in their WUR methodology (TES Global Ltd., 2015). In the interests of transparency, THE made the results of the reputation survey public, somewhat at odds with the treatment of their other ranking indicators. The results of each year's reputation survey are published as the Times Higher Education World Reputation Rankings (Baty, 2014). This examines the perceived prestige of institutions in both research and teaching (TES Global Ltd., 2015). The survey is based on the subjective judgements of academics considered to be experts within their field (Begum, 2014). Baty (2017) asserts that the respondents are asked action-based questions to elicit more meaningful responses, such as: “where would you send your best graduates for the most stimulating postgraduate learning environment?” (University World News 2007-2018, 2018, para. 18).

Table 6.1 THE WUR methodology (2018): criteria and weighting, how each is measured, and the rationale for inclusion

Teaching: the learning environment (30%)

- Academic Reputation Survey (15%). Rationale: The Academic Reputation Survey (run annually) that underpins this category is normally carried out between November and February each year. It examines the perceived prestige of institutions in both research and teaching. The responses are claimed to be statistically representative of the global academy's geographical and subject mix, and number around 20,000 each year.
- Ratio of faculty to students (4.5%). Rationale: The proxy assumes that where there is a healthy ratio of students to staff, the former will get the personal attention they require from the institution's faculty.
- Ratio of doctoral to bachelor degrees awarded (2.25%). Rationale: THE believe that institutions with a high density of research students are more knowledge-intensive and that the presence of an active postgraduate community is a marker of a research-led teaching environment valued by undergraduates and postgraduates alike.
- Number of doctorates awarded, scaled against the number of academic staff (6%). Rationale: As well as giving a sense of how committed an institution is to nurturing the next generation of academics, a high proportion of postgraduate research students also suggests the provision of teaching at the highest level that is thus attractive to graduates and effective at developing them.
- Institutional income scaled against academic staff numbers (2.25%). Rationale: This figure, adjusted for purchasing-power parity so that all nations may compete on a level playing field, indicates the general status of an institution and gives a broad sense of the infrastructure and facilities available to students and staff.

Research: volume, income and reputation (30%)

- World's largest invitation-only academic reputation survey (18%). Rationale: This indicator is also informed by the annual Academic Reputation Survey and looks at a university's reputation for research excellence among its peers.
- University research income, scaled against staff numbers and normalised for purchasing-power parity (6%). Rationale: Income is crucial to the development of world-class research, and because much of it is subject to competition and judged by peer review, THE's experts suggested that it was a valid measure.
- Research productivity: research output scaled against staff numbers (6%). Rationale: THE count the number of papers published in the indexed academic journals per academic, scaled for a university's total size and also normalised for subject. This gives an idea of an institution's ability to get papers published in quality peer-reviewed journals.

Citations: research influence (30%)

- Citations made in the six years from 2013 to 2018, indexed by Scopus (30%). Rationale: The citations aim to demonstrate how much each university is contributing to the sum of human knowledge: whose research has stood out, has been picked up and built on by other scholars and, most importantly, has been shared around the global scholarly community. The data are fully normalised to reflect variations in citation volume between different subject areas.

International outlook: staff, students and research (7.5%)

- Ratio of international to domestic students (2.5%). Rationale: The ability of a university to attract undergraduates and postgraduates from all over the planet is considered key to its success on the world stage.
- Ratio of international to domestic staff (2.5%). Rationale: The top universities also compete for the best faculty from around the globe.
- Proportion of a university's journal publications that have at least one international co-author (2.5%). Rationale: This indicator is normalised to account for a university's subject mix and uses the same five-year window as the Citations: research influence category.

Industry income: innovation (2.5%)

- Research income an institution earns from industry, scaled against the number of academic staff (2.5%). Rationale: A university's ability to help industry with innovations, inventions and consultancy has become a core mission of the contemporary global academy.

Source: TES Global Ltd., 2018
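As a rough illustration of how the Table 6.1 weightings combine into a single overall score, the sketch below sums the 13 indicator scores using those weights. It assumes each indicator has already been standardised to a 0-100 scale; THE's published methodology applies further statistical standardisation of the raw data before weighting, which this sketch omits, and the example scores are invented.

```python
# A minimal sketch of the final weighted combination implied by Table 6.1.
# Indicator scores are assumed to be pre-standardised to a 0-100 scale;
# the example institution below is hypothetical.
WEIGHTS = {
    "teaching_reputation": 0.15,
    "staff_student_ratio": 0.045,
    "doctorate_bachelor_ratio": 0.0225,
    "doctorates_per_staff": 0.06,
    "institutional_income": 0.0225,
    "research_reputation": 0.18,
    "research_income": 0.06,
    "research_productivity": 0.06,
    "citations": 0.30,
    "international_students": 0.025,
    "international_staff": 0.025,
    "international_coauthorship": 0.025,
    "industry_income": 0.025,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # the weights sum to 100%

def overall_score(indicator_scores: dict) -> float:
    """Weighted sum of the 13 indicator scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[name] * indicator_scores[name] for name in WEIGHTS)

# Hypothetical institution: middling everywhere except a strong citation score.
example = {name: 70.0 for name in WEIGHTS}
example["citations"] = 95.0
print(f"overall score: {overall_score(example):.1f}")
```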

The 20-minute questionnaire, administered on behalf of THE by Elsevier, is distributed worldwide in 15 different languages as an invitation-only poll of experienced scholars, who offer their views on excellence in research and teaching within their disciplines (Baty, 2017; Rauhvargers, 2013; 2011). The invitation-only aspect of the THE surveys is potentially problematic and very different from QS, which encourages academics and employers to sign up to become respondents and then includes them if they meet appropriate background and sampling criteria. Academics in the Arts and Humanities and Social Sciences publish less frequently in journals than academics in the so-called 'hard' sciences, which is the main reason why the Arts and Humanities and Social Sciences are under-represented in the data (Baty, 2014). In 2017, the best represented subject was the Physical Sciences (16% of responses), followed by the Social Sciences (15%). The Life Sciences, Clinical and Health, and Engineering each achieved 14% of responses, Business and Economics 13%, Arts and Humanities 9% and Computer Science 5% (TES Global Ltd., 2018). According to THE, 19% of their responses come from North America, 33% from the Asia Pacific region, 27% from Western Europe, 11% from Eastern Europe, 6% from Latin America, 3% from the Middle East and 2% from Africa (TES Global Ltd., 2018).

Alongside the QS reputation surveys, the THE Academic Reputation Survey methodology has also been criticised, with Altbach & Hazelkorn (TES Global Ltd., 2018) questioning the validity of obtaining opinions on the teaching ability of individuals who have never been in the classroom. Harvey (2008) reviewed the THE WUR and essentially dismissed the trustworthiness and usefulness of the ranking system. Harvey (2008, p. 191) criticised the way ranking systems treat 'missing values', arguing that the large proportion of missing information in the THE survey can distort the survey outcomes. The annual positional shifts by some institutions, without any plausible explanation, raise questions regarding the reliability of the THE methodology and data interpretation/submission (Harvey, 2008).

Anowar et al. (2015) complain that the exact process whereby field experts are selected lacks transparency and that the absence of transparency in all parts of the methodology makes evaluating excellence questionable. As indicated earlier, Bookstein et al. (2010) detected statistical inconsistencies in the THE Academic Reputation Survey scores when analysed from year to year. For example, the variance of the peer Life Sciences ranking is 0.048 from 2007 to 2008, but a full 0.104 from 2008 to 2009. However, these authors point out that some of this variance could possibly be due to a change in the THE procedure (Bookstein et al., 2010).
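One plausible reading of those figures is the variance of the year-to-year change in institutions' peer scores. The short sketch below computes that quantity on invented data; it illustrates the comparison being made, not a reconstruction of Bookstein et al.'s actual analysis.

```python
# Illustrative only: comparing the spread of year-to-year score changes across
# two consecutive transitions. All scores are invented; Bookstein et al. report
# variances of roughly 0.048 (2007->2008) and 0.104 (2008->2009) for the
# Life Sciences peer scores.
import numpy as np

rng = np.random.default_rng(1)
scores_2007 = rng.uniform(0.0, 1.0, 50)                  # hypothetical normalised peer scores
scores_2008 = scores_2007 + rng.normal(0.0, 0.05, 50)    # a comparatively quiet transition
scores_2009 = scores_2008 + rng.normal(0.0, 0.10, 50)    # a noisier transition

# A jump in this figure between consecutive editions points to a change in
# procedure or data handling rather than a genuine change in quality.
print(f"variance of change, 2007->2008: {np.var(scores_2008 - scores_2007):.3f}")
print(f"variance of change, 2008->2009: {np.var(scores_2009 - scores_2008):.3f}")
```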

 