Manipulating the methodologies and ‘gaming’
As soon as you create a ranking system, you also create a whole system for gaming the rankings (Spicer, 2017, para. 7). Universities can employ numerous strategies to improve their performance in rankings, some of which bring about real improvements that benefit students and other stakeholders, whilst others are undertaken solely with rank in mind. There exists what is perhaps a natural human desire to control rankings, to make them feel less like an imposition and to mitigate some of the pressure they exert (Espeland & Sauder, 2015). All rankings have vulnerabilities, which universities can exploit to try to improve their rank (Wint & Downing, 2017). A minimal investment of resources can produce a questionable rise in the rankings (Holmes, 2017). Many instances have been identified of universities allegedly misrepresenting institutional data, or recruiting staff and/or survey responses, to improve their rank artificially (Holmes, 2017; Perez-Pena & Slotnik, 2012). A handful of universities have been caught ‘gaming the system’ by purposefully misinterpreting rules, cherry-picking data or lying (Perez-Pena & Slotnik, 2012, para. 2). Perez-Pena and Slotnik (2012) highlighted a number of examples involving the US News and World Report Best Colleges Rankings: Iona College acknowledged that it had lied for years about test scores, graduation rates, retention rates, acceptance rates, alumni donations and faculty-student ratios. Similarly, Claremont McKenna acknowledged artificially inflating SAT scores (Perez-Pena & Slotnik, 2012; Brody, 2012). Additionally, in 2008, Baylor University offered financial rewards to admitted students to retake the SAT in an attempt to increase its average score (Perez-Pena & Slotnik, 2012; Rimer, 2008).
Recently, a university in Saudi Arabia made impressive strides in various rankings by offering part-time contracts to dozens of highly cited researchers, requiring them to list the university as their secondary affiliation and thereby acquiring a greater number of citations for ranking purposes (Holmes, 2017). Its progress slowed when the major HERS removed secondary affiliations from their bibliometric parameters (Shastiy, 2017).
In 2017, Chennai’s VEL Tech University was ranked the top university in Asia on the citations indicator of the THE Asia Ranking (a regional ranking), even though the university did not perform particularly well in other rankings (Shastiy, 2017). After some analysis, Ben Sowter (2018), Head of the QS Intelligence Unit, concluded that the result was due to one researcher citing himself excessively over the previous two years in a journal where he served as associate editor (Holmes, 2017). The regional modification applied by THE can also produce a disproportionate score when a university collects a large number of citations for a relatively small number of papers (Holmes, 2017).
One particular vulnerability of the QS WUR is the potential to game the reputation surveys. In recent years, some Latin American and Asian universities have received academic and employer survey scores that are much higher than their scores on any other indicator (Holmes, 2017). These institutions, named by Holmes (2017), were from Japan, China, Brazil, Chile and Colombia. In 2016, QS accused Trinity College (Dublin) of breaching the ranking guidelines by sending letters to graduates and academics reminding them of the QS and THE reputation surveys. Trinity College defended the letters by stating that it did not attempt to influence participants’ responses, but merely to increase awareness and survey participation (O’Sullivan, 2016). Similarly, O’Sullivan (2016) recalls an earlier incident involving University College Cork, whereby the president sent staff a letter proposing that they contact their
international contacts to make them aware of the QS Reputation Survey. The very existence of rankings inevitably leads to competition, and some institutions and individuals will push somewhat blurred boundaries and guidelines beyond what the HERS intended. Furthermore, relatively junior staff are sometimes entrusted with gathering and calculating the institutional data with insufficient oversight from their senior managers. These staff are also often acutely aware of the pressure (expressed or implicit) to do well in the rankings exercises, and the temptation to enhance their submissions artificially to achieve a rise in the rankings is often too much to resist. They know they are likely to be regarded as incompetent if the university suffers a drop in its rank and might possibly be praised if the university goes up in the rankings. Promising careers can be enhanced or destroyed in this environment, whereas in reality it is the performance of the whole institution that is being scrutinised, rather than the competence of a few individuals with submission responsibility. It is therefore crucially important to ensure that ranking submissions are subject to senior management scrutiny within the institution at all stages, to protect the integrity of the institution and mitigate potential reputational risk. This also ensures that senior management take rightful responsibility for each submission to the HERS.