Recent amendments to the QS reputation indicators
QS amended its Academic Reputation and Employer Reputation methodologies in 2015 (for the 2016 edition), adopting a five-year view of historical data instead of the three-year view used previously. Responses collected four and five years earlier are weighted at a half and a quarter, respectively, of more recent responses (Huang, 2012; QS Quacquarelli Symonds Limited, 2017; Sowter, 2015). Bekhradnia (2017) criticises this amendment, suggesting that recycling unchanged responses over a period of five years means it is possible that QS is counting the votes of retired academics and employers. With regard to the Employer Reputation Survey, QS adopted an equal weighting (50%) for international and domestic responses from the 2018 edition; previously, international responses accounted for 70% and domestic responses for 30%.
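The weighting scheme described above can be sketched in a few lines of code. This is an illustrative reading of the published description, not QS's actual implementation; the function names, the assumption that the three most recent years count in full, and the example figures are all hypothetical.

```python
# Relative weight applied to each survey year (index 0 = most recent).
# Assumption: the three most recent years count in full, while responses
# from four and five years ago are down-weighted to a half and a quarter.
YEAR_WEIGHTS = [1.0, 1.0, 1.0, 0.5, 0.25]

def weighted_reputation(votes_by_year):
    """Combine five years of response counts into one weighted total.

    votes_by_year: list of five vote counts, most recent year first.
    """
    return sum(w * v for w, v in zip(YEAR_WEIGHTS, votes_by_year))

def employer_reputation(intl_score, domestic_score):
    """Equal 50/50 blend of international and domestic responses,
    as used from the 2018 edition (previously 70/30)."""
    return 0.5 * intl_score + 0.5 * domestic_score

# A university with 100 votes in each of the last five years:
print(weighted_reputation([100, 100, 100, 100, 100]))  # 375.0
print(employer_reputation(80.0, 60.0))                 # 70.0
```

The example makes Bekhradnia's point concrete: 75 of the 375 effective votes above come from responses that are four or five years old.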
The QS faculty student ratio indicator
As illustrated in Table 5.1, 20% of the overall score in the QS WUR is attributed to the Faculty Student Ratio as a 'proxy for teaching quality' (Bekhradnia, 2017). Despite rapidly developing information technology, many academics believe there is no substitute in conventional universities for face-to-face contact. The controversy surrounding definitions of 'quality' in learning and teaching around the world has been discussed in Chapter 3, with the conclusion that approaches to education differ so greatly (even within the same university) that a globally agreed assessment would be impossible to achieve. Downing (2012) takes a pragmatic view, arguing that even though the faculty-student ratio indicator is not a particularly sophisticated assessment of teaching and learning quality, it should provide at least some measure of the amount of time and potential contact students have with lecturers and academic peers (Downing, 2013). Sowter (2015) accepts that students tend to value small groups and the opportunity to consult tutors, and asserts that this proxy measure has some validity. Huang (2012) points out that, in addition to the difficulty of obtaining data, the definitions of staff and student are not consistent across universities; a university might inflate its faculty numbers, with the result that the indicator fails to accurately reflect teaching quality and the actual learning environment. However, there is at least some safeguard against this, because inflating faculty numbers would damage performance on the citations per faculty indicator, since the same faculty full-time-equivalent (FTE) number is used for both. In addition, Bekhradnia (2017) suggests that universities can decide to appoint research-only staff, who do not necessarily work with students, to improve their student-staff ratio, even though the indicator is intended to be a proxy for teaching quality.
The QS citations per faculty indicator
Citations in leading academic journals are a conventional measure of institutional research strength and the most common source of international academic comparisons. Dividing the citation count by the number of staff/faculty accounts for the size of the institution. The staff number used is not restricted to research faculty and should include anyone involved in research, teaching or its administration. Precise data on faculty numbers has proved difficult to collect, and QS has acknowledged that this area might be the focus of future methodological enhancement (Sowter, 2015). The same submitted FTE figure for faculty used in the student faculty ratio is also used in the calculation of the citations per faculty indicator, so if an institution inflates its faculty number to obtain a better ratio score, this will impact negatively on its score for citations.
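The trade-off created by the shared FTE figure can be shown with a minimal sketch. This is not QS's actual calculation; the function names and the figures for students, citations and FTE are hypothetical, chosen only to illustrate how inflating the faculty number moves the two indicators in opposite directions.

```python
def faculty_student_ratio(faculty_fte, students):
    """Higher is better: more faculty per student."""
    return faculty_fte / students

def citations_per_faculty(citations, faculty_fte):
    """Higher is better: the same FTE figure is the denominator here."""
    return citations / faculty_fte

students, citations = 20_000, 60_000

for fte in (1_000, 1_500):  # honest versus inflated FTE submission
    print(fte,
          round(faculty_student_ratio(fte, students), 3),
          round(citations_per_faculty(citations, fte), 1))
# 1000 0.05 60.0
# 1500 0.075 40.0
```

Inflating the FTE figure from 1,000 to 1,500 improves the faculty-student ratio by 50% but cuts citations per faculty from 60 to 40, which is the safeguard the text describes.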
QS analysed almost 13 million papers and 67 million citations for the QS WUR 2019 edition, as indexed by Elsevier's Scopus database (Griffin et al., 2018). Griffin (2018) points out that the average number of citations per faculty member increased from 52 citations per academic in the 2018 edition to 60 citations per academic in the 2019 edition. Similarly, the participating institutions increased their research output by about 12.1% (Griffin et al., 2018). Whilst citation numbers might be regarded as relatively objective data, using only average citation numbers can favour universities producing only a small body of papers within which a few are cited more often (Huang, 2011). The ratio of citations to staff in the social sciences is generally lower than that in the sciences as a result of the different citation patterns practised in different academic fields. This can result in a ranking bias toward specific academic fields (Huang, 2012).