Recent amendments to the THE Academic Reputation Survey

THE used to outsource the administration of its annual Academic Reputation Survey to Thomson Reuters. However, it was announced in late 2014 that THE had engaged a new partner, Elsevier, to assist with the administration of the questionnaire. Elsevier publishes the Scopus bibliometric database, which is also used by QS.

The citation indicator as part of the research influence pillar

The indicator with the greatest individual influence by weighting in the THE WUR is the citation analysis (30%) (TES Global Ltd., 2015). The generic technical problems of using citation databases as an indicator in ranking systems have been discussed in earlier chapters. Despite these criticisms, Baty (2013) argues that citation analyses demonstrate which research has made the most impact and which studies have been built on by other scholarly communities to expand collective understanding. Academics consistently stress the inclination of citation indices to favour an institution’s size, English-language publications, region and subject specialisation (Huang, 2012; Ioannidis et al., 2007; Altbach, 2006). Baty (2013) counters that normalising the data helps to reflect the variations in citation volume between regions.
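To illustrate the kind of normalisation Baty refers to, the sketch below applies one common approach: each paper’s citation count is divided by the world average for papers of the same field and publication year, and the ratios are averaged. The data, baselines and function names are illustrative assumptions only and do not reproduce THE’s actual calculation.

```python
# A minimal sketch of field-normalised citation impact, assuming each record
# carries a raw citation count plus the world-average citation count for
# papers of the same field and year (both values invented for illustration).

publications = [
    {"citations": 12, "field_year_average": 4.0},   # e.g. a humanities paper
    {"citations": 40, "field_year_average": 25.0},  # e.g. a biomedical paper
]

def normalised_impact(pubs):
    """Average of citations divided by the expected (field/year) citation rate.

    A score of 1.0 means papers are cited at the world average for their
    fields and years; 2.0 means twice the world average.
    """
    ratios = [p["citations"] / p["field_year_average"] for p in pubs]
    return sum(ratios) / len(ratios)

print(round(normalised_impact(publications), 2))  # 2.3 in this toy example
```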

Recent amendments to the THE citation indicator methodology

In addition to moving away from Thomson Reuters for the administration of the Academic Reputation Survey and the collection of institutional data, THE now draws research publication data from Elsevier’s Scopus database. This change marks THE’s complete withdrawal from Thomson Reuters (Elsevier, 2014). Elsevier is a world-leading provider of scientific and technical information, and Scopus is the world’s largest abstract and citation database of peer-reviewed academic literature (Elsevier, 2014):

The new database will allow THE to analyse a deeper range of research activity from a wider range of institutions than at present, including those institutions from emerging economies that account for a growing portion of the world’s research output and which have shown a great hunger for THE’s trusted global performance metrics. The change will enable THE to utilise SciVal, Elsevier’s research metric analysis tool, to accommodate continuing innovation in the field of research performance.

(TES Global Ltd., 2015)

Ben Sowter (in Jobbins, 2014) agrees that Elsevier’s Scopus database is much larger than the one compiled by Thomson Reuters, especially when attempting to evaluate universities outside the very elite. The restructuring of these activities represents a major undertaking and is likely to lead to an initial set of results with increased volatility. Both QS and THE now make use of the Scopus database for related indicators.

The THE international students, staff and faculty pillar

The ability of a university to attract students and staff from across the world is key to global success (TES Global Ltd., 2015). However, Anowar et al. (2015) suggest that a higher proportion of international staff and students cannot always be seen as a positive attribute, because international student admission is not only concerned with the quality of the university. Political stability and government relations between the countries that students transfer to and from should also be considered when evaluating internationalisation (Anowar et al., 2015). Recently, as indicated in Table 6.1, THE added a research collaboration indicator that assesses the proportion of a university’s journal publications with at least one international co-author (TES Global Ltd., 2015). This latter move is also evident in some of QS’s regional rankings, expressed as international outlook.

Comparison of the Big Three (ARWU, QS WUR, THE WUR) methodologies

Figure 6.1 shows the differences between the individual methodologies of the Big Three international rankings. All three systems use a weight-and-sum methodology (Soh, 2015). Soh (2015) explains that in a weight-and-sum methodology a set of indicators is selected to fit the conceptualisation of the system, chosen as an operationalisation of academic excellence, and data are gathered for these indicators; the indicator scores are then weighted, summed and scaled (Soh, 2015). The QS Academic Reputation Survey (40%), along with the Employer Reputation Survey (10%), accounts for 50% of the total QS WUR score (QS Quacquarelli Symonds Ltd, 2016). THE employs teaching and research reputation surveys that together contribute a third (33%) to the total THE WUR score (TES Global Ltd., 2018). The difference between the QS WUR and THE WUR academic reputation surveys is that THE’s survey is restricted to a selected and invited group of published researchers, whereas QS allows universities to nominate potential respondents (Holmes, 2017). The Shanghai Ranking’s ARWU does not use reputational surveys at all, relying instead on metrics related to citations and publications and the numbers of alumni and faculty winning Nobel Prizes and Fields Medals (Redden, 2013).
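The following is a minimal sketch of the weight-and-sum logic Soh (2015) describes, using invented indicator names, weights and institutional values rather than any ranker’s actual scheme; min-max scaling to a 0–100 range is assumed here purely for illustration.

```python
# Weight-and-sum sketch: scale each indicator across institutions, multiply
# by a priori weights, and sum into a single overall score. All names,
# weights and values below are hypothetical.

weights = {"reputation": 0.40, "citations": 0.30,
           "staff_student": 0.20, "international": 0.10}

raw = {
    "Uni A": {"reputation": 78, "citations": 92, "staff_student": 11, "international": 30},
    "Uni B": {"reputation": 85, "citations": 70, "staff_student": 14, "international": 45},
    "Uni C": {"reputation": 60, "citations": 80, "staff_student": 9,  "international": 60},
}

def scale(values):
    """Min-max scale a dict of raw values to 0-100 (one common scaling choice)."""
    lo, hi = min(values.values()), max(values.values())
    return {k: 100.0 * (v - lo) / (hi - lo) if hi != lo else 100.0
            for k, v in values.items()}

# Scale each indicator across institutions, then weight and sum.
scaled = {ind: scale({u: raw[u][ind] for u in raw}) for ind in weights}
overall = {u: sum(weights[ind] * scaled[ind][u] for ind in weights) for u in raw}

for uni, score in sorted(overall.items(), key=lambda item: -item[1]):
    print(f"{uni}: {score:.1f}")
```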

Figure 6.1 Comparison of the HERS methodologies (THE WUR, QS WUR, ARWU)

By using a priori selected weights, the ranking favours universities for which the weights ‘fit best’ (De Witte & Hudrlikova, 2013, p. 342).

All three ranking systems make use of citation databases to evaluate research and/or research impact (O’Malley, 2016). QS and THE now both utilise Elsevier’s Scopus database to collect citation data (Jobbins, 2014). The ARWU makes use of Nature and Science, the Science Citation Index Expanded and the Social Sciences Citation Index to assess various aspects of research output (Huang, 2011). The ARWU also relies on data from Thomson Reuters to evaluate teaching via the Highly Cited Researchers list (Billaut et al., 2010). Both QS (20%) and, to a lesser extent, THE (4.5%) use staff-to-student ratios as a proxy for teaching quality. QS (10%) and THE (10%) attribute higher scores to larger proportions of measures of internationalisation; THE adds the number of international collaborative publications to its ‘International Outlook’ pillar (QS Quacquarelli Symonds Limited, 2018; Times Higher Education, 2017). A big difference between the three is that the institutional data (number of academic staff) employed by ARWU is not provided by universities but obtained from national agencies such as ministries, national bureaus and university associations (ShanghaiRanking Consultancy, 2003). This is potentially problematic given the varying national guidelines and definitions for government data submissions, but it does avoid the issues surrounding university-submitted data being potentially ‘gamed’. Notably, THE employs more indicators in its ranking procedure: for example, the number of doctorates awarded scaled against the number of academic staff, the ratio of doctorates awarded to bachelor’s degrees awarded, and institutional income and industry income against the number of academic staff. On the downside, this can produce weighting percentages so small that they are probably not statistically sustainable (Times Higher Education, 2017).
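As a rough illustration of how such scaled indicators might be computed, the sketch below derives staff-to-student, doctorate and income ratios from hypothetical institutional figures; the field names and values are assumptions and do not reproduce THE’s definitions, data sources or normalisations.

```python
# Hypothetical institutional figures used to illustrate scaled indicators
# of the kind described above (values are invented).

institution = {
    "academic_staff": 2_500,
    "students": 38_000,
    "doctorates_awarded": 900,
    "bachelors_awarded": 7_200,
    "industry_income": 45_000_000,  # in a common currency unit
}

staff_to_student = institution["academic_staff"] / institution["students"]
doctorates_per_staff = institution["doctorates_awarded"] / institution["academic_staff"]
doctorate_to_bachelor = institution["doctorates_awarded"] / institution["bachelors_awarded"]
industry_income_per_staff = institution["industry_income"] / institution["academic_staff"]

print(f"staff-to-student ratio:      {staff_to_student:.3f}")
print(f"doctorates per staff member: {doctorates_per_staff:.3f}")
print(f"doctorate-to-bachelor ratio: {doctorate_to_bachelor:.3f}")
print(f"industry income per staff:   {industry_income_per_staff:,.0f}")
```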

The three unique methodologies, not surprisingly, produce varying results (QS Quacquarelli Symonds Ltd, 2017). However, comparing the top ten global universities in recent editions of all three rankings, seven universities commonly appear. These are Massachusetts Institute of Technology, Stanford University, Harvard University, California Institute of Technology and the University of Chicago in the US, and the University of Cambridge and the University of Oxford in the UK, with Stanford usually occupying the highest average position (QS Quacquarelli Symonds Ltd, 2017).

 