No such thing as bad publicity
Despite being criticised as superficial, arbitrary, and lacking any real measure of quality (Meredith, 2004), and despite the ongoing debate about their uses and validity (Altbach, 2006; Brooks, 2005; Dill & Soo, 2005), the Big Three global university rankings nonetheless immediately secured great prominence in higher education, policy, and public arenas and have already had discernible effects on institutional and policy behaviours (Marginson & van der Wende, 2007). There is evidence that university rankings have exerted a substantial effect on high-level administrators at US colleges and universities (Bastedo & Bowman, 2010). With the publication of the third of the Big Three systems in 2010
(THE-WUR), Downing (2010a) proposed that it is time to choose your rankings ‘poison’ carefully. He undertook a highly detailed analysis of the major global ranking tables and found that universities in Asia had been rising steadily through the QS-WUR rankings over the previous three or four years, with many more making it into the top 200 world institutions by 2010 (Sharma, 2010) than in previous years. This is a very positive boost for young, ambitious universities in regions like Asia, the Middle East, Eastern Europe and Latin America that aspire to secure a high ranking in the league tables.
Despite this, there remains a surprising level of agreement between ranking systems on which universities are ‘the best’: Harvard, Yale, Princeton, MIT and Stanford in the US; Oxford and Cambridge in the UK; the University of Toronto in Canada; ANU and the University of Melbourne in Australia; and Peking and Tsinghua in China. Such similar ranking results among the top universities in each country indicate that the indicators used in most university rankings might really be measuring some underlying characteristics such as institutional age and funding per student (Michael, 2005; Usher & Savino, 2006). More variation between rankings occurs lower down the scale, where even small changes in the methods can change institutional rank significantly. This suggests that less well-known, but nonetheless nationally or regionally respected, universities might gain insights into their relative strengths and weaknesses from their positions in different global university rankings. Notwithstanding this, some commentators continue to refer to rankings as an unavoidable ‘poison’ which has proved fatal to some promising academic careers. They are a wonderfully accurate measure of progress whilst the university is rising in global prominence but contain a criminally inaccurate set of non-representative criteria when the university is heading in the other direction (Downing, 2010a).
It is an ill wind...
As Downing (2010b) put it, all these HERS recognise the growing impact of the global environment on higher education systems and institutions, and the importance prospective consumers place on some means of identifying institutional excellence. Some of these consumers have the advantage of government-funded or subsidised opportunities to access higher education, whilst others will be spending their own hard-earned money on obtaining the best education possible for themselves or, more likely, their offspring. Downing (2010b) argues that in almost every walk of life we can make informed choices because we are provided with appropriate ways of assessing the quality of what we purchase and can narrow down the choice of products we wish to investigate further. The advent of rankings has undoubtedly made it easier for these consumers to access information about an institution (as a whole) that will assist with that choice. The status of World Class University (WCU) is regarded both as a symbol of national achievement and prestige and as an engine of economic growth in a global knowledge economy. The global rankings have therefore prompted an increasing desire to achieve high-ranking research university status within national systems (Marginson & van der Wende, 2007).
The good, the bad and the ugly
So the question remains: what is good, what is bad and what is ugly about the rankings? Well, their impact upon the management and governance of universities should not be underestimated, because it has undoubtedly brought good, bad and sometimes downright ugly consequences. For some universities it has provided a useful set of indicators against which to benchmark their global performance (good), whilst for others it has taken on an unhealthy prominence that diverts senior managers from their otherwise valid vision and mission (bad). Sometimes an annual rise or fall of a few places in the rankings is seen as indicating a longer-term change in performance, and many careers are made or broken over a few (often poorly understood) parameters, while the greater good done by a particular university in its local or regional community is largely ignored (ugly). The advent of global rankings has meant that many governments in developing countries are investing more heavily in higher education to ensure they can claim a high global ranking or WCU status (good), but some of these countries are neglecting investment in institutions with a community or local mission in favour of those identified as having a global goal (bad). Some administrators in institutional research offices, who bear responsibility for rankings data submission, are either openly or inadvertently put under terrible pressure to ensure their institution rises in the rankings, and are regarded as inept when the desired rise does not take place. This can lead to attempts to ‘game’ the rankings and, in some cases, to the end of promising careers and reputations (ugly). Undoubtedly, the global rankings have put pressure on universities to maintain accurate and professional databases that evidence the value they add to society and stakeholders (good).
However, in some universities the concentration on citations as an indicator of quality in the rankings has led to a proliferation of medical, engineering and science subjects, which traditionally attract higher citation counts, to the detriment of the arts and humanities (bad). Some universities, by contrast, recognise the importance of the arts and humanities to a civilised and cultured society, not to mention their potential impact on academic reputation scores. One of the Big Three ranking systems recognises that most university students do not go on to study at postgraduate level but instead work as professionals in our societies, and it therefore includes an employer reputation indicator in its methodology. The other two ignore this important fact in their global rankings (bad). Some universities mistakenly believe that if they engage in sponsorship or place advertising with a ranking agency they will rise in the rankings (ugly).
The real issue in relation to the good, the bad and the ugly comes down to how the global rankings are used by the intelligent people who run our universities and countries. When used as indicators of absolute global positioning in terms of the quality of any individual university, rankings are undoubtedly bad. As consumer services for students and their families, and as benchmarking tools for universities and their faculty members, they are probably useful and good, particularly when the consumer is armed with sufficient knowledge about their criteria and methods and when they are used with a modicum of common sense. They are rarely ugly when they are transparent and used positively to bring about improvements in teaching and research quality and institutional strategy in line with the vision and mission of a particular university, but of course they are always less than beautiful when your university is on a downward spiral, with all that entails. The choice of which ranking any university uses for promotional or strategic purposes is inevitably self-serving. Relatively young, ambitious institutions, particularly those outside Western Europe and the US, might prefer to focus on the QS-WUR or THE-WUR, whilst older and richer institutions that can afford to nurture and employ Nobel Laureates and Fields Medallists will prefer the ARWU.
Lies, damned lies and rankings?
Global rankings have introduced more transparent, albeit sometimes imperfect, competition, with some systems providing opportunities for younger universities and their graduates to establish themselves on a regional and global stage. Conversely, this competition should prevent some of the older and better-known institutions from ‘living off their reputations’, because they can see that there are ‘new kids on the block’ challenging their historical dominance. There are some signs that this is beginning to create a virtuous circle of competition, which drives improvement and punishes complacency. This can only be good for global higher education, provided that the various consumers of the global rankings use them judiciously. This returns us to the title of Chapter 1, which refers to ‘lies, damned lies and statistics’. If we wish to avoid this becoming ‘lies, damned lies and rankings’, we should treat rankings with the same critical eye that we use when interpreting statistics, checking and auditing the veracity of all data used in the rankings exercise and ensuring that all institution-submitted data is rigorously verified at senior management level before submission. Consumers, often prospective students and their parents but also academics and their managers, need to be educated that a particular rank is as imperfect as an IQ score that purports to measure intelligence, when intelligence is often defined as ‘what intelligence tests measure’. The same principle applies to rankings: the rank of any given institution is defined by what a particular HERS measures and might not be a measure of true excellence.