QS WUR methodology

The QS ranking methodology is often criticised for a lack of methodological transparency (Kaychen, 2013), largely as a result of its reliance on reputational surveys. Despite this criticism, QS was one of the first HERS to be ‘audited’ and approved by IREG using the Berlin Principles, one of which relates to transparency. Redden (2013) and Huang (2011) argue that the QS methodology is particularly controversial due in large part to its greater reliance on reputational surveys than other rankers. When combined, the survey approach accounts for 50% of the QS WUR methodology (Redden, 2013): an academic reputation survey accounts for 40% of the total weighting, with a similar survey for employer reputation weighted at 10% of the total. The QS rankings compare universities across four broad areas of interest to prospective students: research, teaching, employability and international outlook (QS Quacquarelli Symonds Ltd, 2018). Table 5.1 below illustrates the methodology QS employs to rank universities, together with the rationales QS provides for using these measurements. The Z-transformation (or ‘normal’ or ‘standard’ score) is applied to each measure, ensuring it contributes the intended amount to the overall score; this involves subtracting the mean score from each individual score, then dividing by the standard deviation of the scores (Sowter, 2015).
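The Z-transformation described above can be sketched in code. The following is a minimal illustration only; the raw scores are invented for demonstration and this is not QS's actual implementation:

```python
def z_scores(values):
    """Standardise raw indicator scores: subtract the mean from each
    score, then divide by the standard deviation of the scores
    (population SD, as is conventional for Z-scores)."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# Hypothetical raw scores for one indicator across five universities.
raw = [55.0, 62.0, 71.0, 48.0, 64.0]
standardised = z_scores(raw)
# After standardisation the scores have mean 0 and standard deviation 1,
# so every indicator contributes its intended weight to the overall score.
```

Because every indicator is rescaled to the same mean and spread, an indicator with naturally large raw numbers (e.g. citation counts) cannot dominate one with small raw numbers before the weights are applied.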

Academic and employer reputation indicators

The reputational survey of academics adopted by the QS rankings has long sparked heated debates amongst researchers (Huang, 2012), who generally argue that peer review can easily bias results toward universities of international visibility (Anowar et al., 2015; Huang, 2012; Ioannidis et al., 2007; Taylor & Braddock, 2007). In some ways these arguments provide some evidence that these surveys do at least reflect a university’s global standing, and QS believes that the increased depth and scope of the reputational surveys offers tremendous value to students seeking to know how their prospective university is viewed by the academic community and by employers across the world (Griffin, 2018). Sowter (2015) goes even further in defence of the academic survey indicator, arguing that academics are the best people to make judgements about universities.

In the absence of more precise data on teaching and more up-to-date comparisons of research, it has become the central element of the QS World University Rankings. “Who better to ask than the people who work in universities to discover the best?”

(Sowter, 2015)

According to Sowter (2015), the scores of the academic survey are more resistant to bias towards English-speaking countries than research citation scores. Respondents are sourced from participating universities, previous respondents and third-party databases (Sowter, 2015). The respondents participating in the Academic Survey range across all academic and administrative grades, from lecturers to university presidents. Respondents select a number of universities, excluding their own, which they regard as the best in the field they are affiliated to (Baty, 2009). In the 2019 edition, QS surveyed over 83,877 academics

Table 5.1 QS World University Rankings methodology, according to QS Quacquarelli Symonds Ltd (2018) and Griffin et al. (2018)

Academic reputation (40%)
How it is measured: Based on a global survey of around 100,000 academics (growing), in which participants are asked to identify the institutions where they believe the best work is currently taking place within their field of expertise.
Rationale for inclusion: Gives a more equal weighting to different discipline areas than research citation counts do. Whereas citation rates are far higher in subjects like biomedical sciences than they are in English literature, for example, the academic reputation survey weights responses from academics in different fields equally. It also gives students a sense of the consensus of opinion among those who are by definition experts. Academics may not be well positioned to comment on teaching standards at other institutions, but it is well within their remit to have a view on where the most significant research is currently taking place within their field.

Employer reputation (10%)
How it is measured: Based on a global survey of around 50,000 responses (growing). The survey asks employers to identify the universities they perceive as producing the best graduates.
Rationale for inclusion: Of critical importance to students seeking to make crucial study decisions is the question of future employability. This means that the opinion of employers regarding an institution’s capacity to produce reputable, well-prepared graduates provides important insight into university performance.

Faculty student ratio (20%)
How it is measured: The number of academic staff employed relative to the number of students enrolled.
Rationale for inclusion: It assesses the extent to which institutions are able to provide students with meaningful access to lecturers and tutors, and recognises that a high number of faculty members per student will reduce the teaching burden on each individual academic.

Citations per faculty (20%)
How it is measured: QS collects this information using Elsevier’s Scopus database, the world’s largest database of research abstracts and citations. Five complete years of data are used, and the total citation count is assessed in relation to the number of academic faculty members at the university, so that larger institutions don’t have an unfair advantage.
Rationale for inclusion: This indicator aims to assess universities’ research output. A ‘citation’ means a piece of research being cited (referred to) within another piece of research. Generally, the more often a piece of research is cited by others, the more influential it is. So the more highly cited research papers a university publishes, the stronger its research output is considered.

International student ratio (5%)
How it is measured: The proportion of international students in relation to all students.
Rationale for inclusion: To assess how successful a university has been in attracting students from other nations.

International staff ratio (5%)
How it is measured: The proportion of international staff to the overall staff number.
Rationale for inclusion: To assess how successful a university has been in attracting faculty members from other nations.

(participants) globally to identify institutions they consider best for research in subject area(s) they identify themselves as knowledgeable about (QS Quacquarelli Symonds Ltd, 2018). The number of institutions nominated by the respondents increased by almost 9%, from 4,378 institutions (in 2018) to 4,764 (in the 2019 edition) (Griffin, Sowter, Ince, & O’Leary, 2018). Responses are weighted by region and compiled into indices for the five broad subject areas, which are combined with equal weighting to yield the final result (Sowter, 2015).
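Once each indicator has been standardised, the overall QS score is in essence a weighted sum using the Table 5.1 weights. The sketch below illustrates that arithmetic only; the indicator values are hypothetical and the function is not QS's actual code:

```python
# Indicator weights from Table 5.1 (they sum to 1.0, i.e. 100%).
WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_students": 0.05,
    "international_staff": 0.05,
}

def overall_score(indicator_scores):
    """Weighted sum of Z-standardised indicator scores: an illustrative
    reconstruction of the QS weighting scheme described in the text."""
    return sum(WEIGHTS[name] * score
               for name, score in indicator_scores.items())

# A hypothetical university's Z-standardised indicator scores.
example = {
    "academic_reputation": 1.2,
    "employer_reputation": 0.8,
    "faculty_student_ratio": -0.3,
    "citations_per_faculty": 0.5,
    "international_students": 1.0,
    "international_staff": 0.4,
}
score = overall_score(example)
```

The weighting makes the dominance of the reputational surveys concrete: with the two survey indicators at 0.40 and 0.10, half of the overall score in this sketch comes from survey responses alone.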

A recent Employer Survey informing the QS WUR Employer Reputation indicator, which accounts for 10% of the overall score, retrieved university nominations from 42,862 employer respondents globally (Griffin et al., 2018). Employers nominated about 4,063 institutions from more than 140 countries in the 2019 edition (Griffin et al., 2018). The growing number of participants in, and interest in, both the academic and employer reputation surveys can be attributed to the growing importance and significance that employers and academics place on the QS surveys (Griffin, 2018). Despite an increasing sample size year on year, and the fact that THE also relies heavily on survey data generally drawn from smaller samples (more about this in the next chapter), many academics continue to criticise what they regard as over-reliance on peer review surveys (Anowar et al., 2015; Kaychen, 2013). They suggest that, whilst it may be a valuable tool, some prejudice can still exist through peer conservatism and institutional reputation favoured by age, size, name and country biases (Soh, 2015; Kaychen, 2013). This latter argument is particularly interesting given that, before the advent of global rankings, reputation was probably the only way to assess any university’s performance.

After a thorough examination of the earlier 2009 QS ranking results, Huang (2012) expressed concern regarding a few aspects of the QS peer review process; for example, the results were heavily impacted by the number of questionnaires returned from each country. Additionally, the way the questionnaires were distributed globally, and the results calculated, provided clues that QS Rankings generally tended to be more advantageous for the Commonwealth of Nations (Anowar et al., 2015). Furthermore, Huang (2012) argues that most of the returned Academic Survey questionnaires were from three fields: Engineering and IT, Natural Sciences, and Social Science. Most of the Employer review responses came from four industries: Financial services/Banking, Consulting/Professional services, Manufacturing/Engineering, and IT/Computer services. Huang (2012) also argues that the way in which the survey is administered suggests that the questionnaire lacks clear parameters, which may result in manipulation of responses.

QS issued a statement listing ten reasons why its rankings cannot be effectively manipulated. These comprise a set of robust processes and procedures to ensure the validity of the resulting measures, including a bar on nominating the institution at which one is currently employed. The ten reasons are as follows (Sowter, 2015):

Table 5.2 QS’s process to ensure ranking validity

Strict policy for participation: As a policy, it is not permitted to solicit or coach specific responses from expected respondents to any survey contributing to any QS ranking. Should the QS Intelligence Unit receive evidence of such activity occurring, institutions will receive one written warning, after which responses to that survey on behalf of the subject institution may be excluded altogether for the year in question. Not only are responses found to be invalid discounted from consideration, but any institution found to be engaging in such activity will attract a further penalty in the compilation of the results for the given indicator.

Inability to select one’s own institution: We encourage the respondent to voice their genuine opinion on up to 40 institutions (10 domestic and 30 international). Respondents may not select their own institution.

Sign-up screening processes: The QS Intelligence Unit checks every request to participate in the QS Global Academic Survey through the academic sign-up facility for validity and authenticity. Only those who have passed the screening process will be contacted.

Sophisticated anomaly detection algorithms: The QS Intelligence Unit routinely runs anomaly detection routines on its survey responses. These algorithms are designed to detect unusual jumps in performance or atypical response patterns. Responses not meeting certain parameters are removed, and institutions showing unusual or unlikely gains are scrutinised in depth.

Market-leading sample size: Only a large, concerted, and, therefore, detectable, effort to influence the results is likely to have an effect.

Academic integrity: Whilst there will be exceptions in any population, academics typically place great value on their “academic integrity”. We believe the vast majority of our respondents give us their unfettered opinion of the institutions they consider strongest in their field, regardless of whether or not any external party has tried to influence their decision through direct or indirect means of communication.

International emphasis: The survey analysis is designed so that international responses are strongly favoured over domestic responses. Influencing international responses is a much more difficult task than affecting the opinion of domestic academics, who are more likely to be familiar with universities in their own country.

Three-year sampling: Responses are combined with those from the previous two years, eliminating the older response from anyone who has submitted in more than one year. This diminishes the influence of any changes in response patterns in the current year. To have a substantial impact, any effort to influence the results would have to be sustained for three years.

Watch list: The QS Intelligence Unit maintains a list of institutions which have qualified themselves for additional scrutiny in our process, known as the “Watch List”. Any institution seen to be attempting to influence the outcome is automatically added to this list. When we conduct our analysis, we will examine responses in favour of Watch List institutions with particular care, to ensure that they receive no undue advantage.

QS Global Academic Advisory Board: The QS Global Academic Advisory Board consists of thirty esteemed members of the international academic community whose task is to uphold the integrity of the methodology behind any of the QS rankings. Executive members of the board include John O’Leary, Martin Ince, Ben Sowter and Nunzio Quacquarelli, the four originators behind the World University Rankings when it was first launched in 2004. Collectively, these executive members have over 50 years’ experience in ranking universities.
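Of the safeguards in Table 5.2, the three-year sampling rule is the most mechanical: responses from three years are pooled, and only the most recent response from anyone who answered in more than one year is kept. It can be sketched as follows; the respondent identifiers and data layout are invented for illustration, and this is not QS's actual implementation:

```python
def combine_three_years(responses):
    """Pool survey responses across three years, keeping only the most
    recent response per respondent (an illustrative sketch of the rule
    described in Table 5.2, not QS's actual code).

    `responses` is a list of (respondent_id, year, nominations) tuples.
    """
    latest = {}
    for respondent_id, year, nominations in responses:
        # Keep this response only if it is the newest seen so far
        # for this respondent.
        if respondent_id not in latest or year > latest[respondent_id][0]:
            latest[respondent_id] = (year, nominations)
    return {rid: noms for rid, (year, noms) in latest.items()}

# Respondent "a" answered in 2017 and 2019: only the 2019 answer counts.
sample = [
    ("a", 2017, ["Uni X"]),
    ("b", 2018, ["Uni Y", "Uni Z"]),
    ("a", 2019, ["Uni Z"]),
]
combined = combine_three_years(sample)
```

Because each respondent contributes at most one (the latest) response, a coordinated campaign in a single year is diluted by the two prior years of pooled data, which is why QS argues any manipulation would need to be sustained for three years.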
