Citation databases as indicators of research output

The bibliometric literature has emphasised the importance of taking the impact of research into account in order to produce relevant and meaningful indices (Moed, 2002). However, using citation databases is not without fault (Van Raan, 2005). The more generic technical problems involved in using citation databases as indicators for ranking systems have been discussed earlier in this chapter.

The ARWU draws on three citation-based sources as indicators of research output: papers published in Nature and Science, and papers indexed in the Science Citation Index-Expanded (SCI) and the Social Science Citation Index (SSCI) (ShanghaiRanking Consultancy, 2003). Huang (2011) is of the opinion that the SCI/SSCI paper indicator overemphasises the quantity of output (the number of published papers) and fails to measure output quality (the citations and uses of those papers). The Nature/Science indicator shares the weaknesses of the prize winner indicators, over-emphasising extremely outstanding research and being biased toward certain subject disciplines (Huang, 2011). Whilst the absence of ‘perfection’ provides a convenient way to criticise work based on these data sets, it is probably more useful to ensure that potential errors and uncertainties are adequately understood and that conclusions are reliable despite the presence of a small level of error (Cram & Docampo, 2014). The real problem is not the use of bibliometric indicators as such, but the application of less-developed bibliometric measures (Van Raan, 2005).

The per-capita performance indicator

This criterion is affected by all the elements of imprecision and inaccurate determination that beset the other indicators (Billaut et al., 2010). Moreover, the authors of the ARWU do not fully detail which sources they use to collect information on the number of Full Time Equivalent (FTE) academic staff (Billaut et al., 2010). The varying definitions of academic staff across countries and universities can distort the measurement of institution size and undermine the validity of comparisons in the resulting ranking (Huang, 2011). If data on the size of the institution are not available, this dimension is omitted and the ranking is based on the weighted average of the other dimensions, which also results in distortion (Harvey, 2008).
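As a purely hypothetical illustration of that last point, the short sketch below (a minimal sketch, assuming the commonly reported ARWU weighting scheme and using invented indicator scores, not the ARWU's actual procedure) shows how omitting the per-capita dimension and re-normalising the remaining weights can move an institution's composite score, and hence its rank, even though none of its underlying indicator values has changed.

```python
# A minimal illustrative sketch, not the ARWU's published procedure: it shows how
# omitting the per-capita performance (PCP) dimension and re-normalising the
# remaining weights shifts a composite score. The weights follow the commonly
# reported ARWU scheme; the indicator scores below are invented for illustration.

WEIGHTS = {
    "Alumni": 0.10,  # alumni winning Nobel Prizes and Fields Medals
    "Award": 0.20,   # staff winning Nobel Prizes and Fields Medals
    "HiCi": 0.20,    # highly cited researchers
    "NS": 0.20,      # papers published in Nature and Science
    "PUB": 0.20,     # papers indexed in SCI/SSCI
    "PCP": 0.10,     # per-capita performance
}

def composite(scores, omit=frozenset()):
    """Weighted average over the indicators that are actually available."""
    used = {k: w for k, w in WEIGHTS.items() if k not in omit}
    return sum(scores[k] * w for k, w in used.items()) / sum(used.values())

# Hypothetical institution whose strongest indicator is its per-capita score.
scores = {"Alumni": 40, "Award": 30, "HiCi": 55, "NS": 50, "PUB": 70, "PCP": 90}

full_score = composite(scores)               # all six dimensions available
fallback_score = composite(scores, {"PCP"})  # FTE data missing, PCP dropped

print(round(full_score, 1), round(fallback_score, 1))  # 54.0 50.0
```

In this invented case the institution loses four composite points simply because FTE data were unavailable, which is the kind of distortion Harvey (2008) describes.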

Summary

This chapter has dealt with some of the broad, fundamental criticisms often levelled at most ranking systems. These range from the philosophical to the practical, and from disagreements about what constitutes an excellent university to the questionable value of proxy measures of teaching and learning quality. Perhaps one of the most important issues raised concerns the ability to verify submitted data, particularly given the pressure on those units responsible for rankings submissions in universities around the world. Whilst the HERS insist that they do audit submitted data, the key issue is an acceptance that no system of audit would satisfy all stakeholders, and that any audit process is likely to be highly expensive for the rankings agencies who are, after all, in the business of generating income and producing a healthy bottom line.

Whilst many within academia regard the ARWU system of ranking as perhaps the most credible, this chapter has demonstrated that it also has some significant issues in terms of both its methodology and, given its remarkable stability, its appeal to the media, who are typically looking for year-on-year changes they can report on. The use of Nobel or Fields prizes, highly cited researchers, citation databases and per-capita performance indicators all pose unique problems in practice which are difficult to reconcile satisfactorily. In Chapter 5, the same critical lens will be turned on the QS WUR which, whilst undoubtedly having media appeal, has potential problems with its chosen methods, the veracity of submitted data and its reliance upon reputational indicators. Chapter 6 considers the youngest of the Big Three HERS, the THE WUR, which manifests many of the potential issues related to the QS WUR as well as suffering from the omission of any indicator which can realistically be related to employment. Given that most graduates enter the job market after their first degree, this might be considered an important area of focus for any university and HERS.

References

Aguillo, I. F., Bar-Ilan, J., Levene, M., & Ortega, J. L. (2010). Comparing university rankings. Scientometrics, 85, 243-256.

Altbach, P. G. (2006a). International higher education: Reflections on policy and practice. Boston: Center for International Higher Education, Boston College. Retrieved from: www.bc.edu/content/dam/files/research_sites/cihe/pubs/Altbach_2006_Intl_HigherEd.pdf

Altbach, P. G. (2006b). The dilemmas of ranking. Center for International Higher Education, 42, 2-3.

Altbach, P. G. (2017). The complex diversity of Southeast Asian postsecondary education. International Higher Education, 88, 16-18.

Altbach, P. G., & Hazelkorn, E. (2017). Pursuing rankings in the age of massification: For most - forget about it. International Higher Education, 89, 8-10.

Anowar, F., Helal, M. A., Afroj, S., Sultana, S., Sarker, F., & Mamun, K. A. (2015). A critical review on world university rankings in terms of top four ranking systems. In K. Elleithy, & T. Sobh, Lecture notes in electrical engineering 312: New trends in networking, computing, e-learning, systems sciences, and engineering (pp. 559-566). New York: Springer International Publishing.

Baty, P. (2011). Global reputation surveys important to rankings. Retrieved from: www.universityworldnews.com/article.php?story=20110715164806119

Baty, P. (2014). The Times Higher Education World University Rankings, 2004-2012. Ethics in Science and Environmental Politics, 13(12), 1-6.

Baty, P. (2017). World university rankings 2018: Now starring a cast of thousands. Retrieved from: www.timeshighereducation.com/world-university-rankings-2018-now-starring-cast-thousands

Bekhradnia, B. (2017). International university rankings: For good or ill. London: Higher Education Policy Institute.

Bhattacharjee, Y. (2011). Saudi universities offer cash in exchange for academic prestige. Science, 334, 1344-1345.

Billaut, J. C., Bouyssou, D., & Vincke, P. (2010). Should you believe in the Shanghai ranking? An MCDM view. Scientometrics, 84, 237-263.

Bougnol, M., & Dulá, J. H. (2015). Technical pitfalls in university rankings. Higher Education, 69, 859-866.

Bowman, N. A., & Bastedo, M. N. (2010). Anchoring effects in world university rankings: Exploring biases in reputation scores. Higher Education, 61(4), 431-444.

Calderon, A. (2016). Winners and losers in the ARWU ranking. Retrieved from: www.universityworldnews.com/article.php?story=20160822154429700

Connell, C., & Saunders, M. (2012). Mediating the use of global university rankings: Perspectives from education facilitators in an international context. Journal of Studies in International Education, 17(4), 354-376.

Cram, L., & Docampo, D. (2014). Highly cited researchers and the Shanghai ranking. Retrieved from: www.researchgate.net/profile/Domingo_Docampo/publication/262182710_Highly_Cited_Researchers_and_the_Shanghai_ranking/links/0deec536e08796493f000000.pdf

Daraio, C., Bonaccorsi, A., & Simar, L. (2014). Rankings and university performance: A conditional multidimensional approach. Retrieved from: http://risis.eu/wp-content/uploads/2014/08/daraio_bonaccorsi_simar_cond_rankings_TR-n-9-2014.pdf

De Witte, K., & Hudrlikova, L. (2013). What about excellence in teaching? A benevolent ranking of universities. Scientometrics, 96(1), 337-364.

Dill, D., & Soo, M. (2005). Academic quality, league tables, and public policy: A cross analysis of university ranking systems. Higher Education, 49(4), 495-533.

Docampo, D. (2013). Reproducibility of the Shanghai Academic Ranking of World Universities results. Scientometrics, 94(2), 567-587.

Downing, K. (2012). Do rankings drive global aspirations? In M. Stiasny, & T. Gore, Going global: The landscape for policymakers and practitioners in tertiary education (pp. 31-39). London: Emerald Group Publishing Ltd.

Downing, K. (2013). What’s the use of rankings? In P. T. Marope, P. J. Wells, & E. Hazelkorn, Rankings and accountability in higher education: Uses and misuses (pp. 197-208). Paris: United Nations Educational, Scientific and Cultural Organization.

Goglio, V. (2016). One size fits all? A different perspective on university rankings. Journal of Higher Education Policy and Management, 38(2), 212-226.

Griffin, S., Sowter, B., Ince, M., & O’Leary, J. (2018). QS World University Rankings 2019 supplement. Retrieved from: www.topuniversities.com/student-info/qs-guides/qs-world-university-rankings-2019-supplement

Harvey, L. (2008). Rankings of higher education: A critical review. Quality in Higher Education, 14(3), 187-207.

Hazelkorn, E. (2013). How rankings are reshaping higher education. In V. Climent, F. Michavila, & M. Ripollès, Los rankings universitarios: Mitos y realidades. Tecnos.

Hazelkorn, E. (2014). Reflections on a decade of global rankings: what we’ve learned and outstanding issues. European Journal of Education, 49(1), 12-28.

Hazelkorn, E., & Ryan, M. (2013). The impact of university rankings on higher education policy in Europe: A challenge to perceived wisdom and a stimulus for change. In P. Zgaga, U. Teichler, & J. Brennan, The globalization challenge for European higher education: Convergence and diversity, centres and peripheries. Frankfurt: Centre for Social and Educational Research.

Holmes, R. (2005). Evaluation basis flawed. New Straits Times, 2 December, p. 26.

Holmes, R. (2017). Comments on the HEPI report. Retrieved from: http://rankingwatch.blogspot.co.za/2017/01/comments-on-hepi-report.html

Holmes, R. (2017). Doing something about citations and affiliations. Retrieved from: http://rankingwatch.blogspot.com/2017/04/doing-something-about-citations-and.html

Holmes, R. (2017). Ranking debate: What should Malaysia do about the rankings? Retrieved from: http://rankingwatch.blogspot.co.za/

Holmes, R. (2017). University ranking watch. Retrieved from: http://rankingwatch.blogspot.co.za/2017/01/comments-on-hepi-report.html

Huang, M. (2011). A comparison of three major academic rankings for world universities: From a research evaluation perspective. Journal of Library and Information Studies, 9(1), 1-25.

Ioannidis, J. P., Patsopoulos, N. A., Kavvoura, F. K., Tatsioni, A., Evangelou, E., Kouri, I., ... Liberopoulos, G. (2007). International ranking systems for universities and institutions: A critical appraisal. BMC Medicine, 5(1), 30.

Kaychen, S. (2013). What do global university rankings really measure? The search for the x factor and the x entity. Scientometrics, 97(2), 223-244.

Liu, N. C., & Cheng, Y. (2005). The academic ranking of world universities. Higher Education in Europe, 30(2), 127-136.

Marginson, S. (2007). University rankings, government and social order: Managing the field of higher education according to the logic of the performative present-as-future. In M. Olssen, M. Peters, & M. Simons, Re-reading educational policies: Studying the policy agenda of the 21st century (pp. 2-16). Rotterdam: Sense Publishers.

Marope, M., & Wells, P. (2013). University rankings: The many sides of the debate. In P. T. Marope, P. J. Wells, & E. Hazelkorn, Rankings and accountability in higher education: Uses and misuses (pp. 1-7). Paris: United Nations Educational, Scientific and Cultural Organisation.

Merisotis, J., & Sadlak, J. (2005). Higher education rankings: Evolution, acceptance and dialogue. Higher Education in Europe, 30, 2-97.

Moed, H. F. (2002). The impact factors debate: The ISI’s uses and limits. Nature, 415, 731-732.

O’Malley, B. (2016). ‘Global university rankings data are flawed’ - HEPI. Retrieved from: www.universityworldnews.com/article.php?story=20161215001420225

Patsopoulos, N. A., Analatos, A. A., & Ioannidis, J. P. (2005). Relative citation impact of various study designs in the health sciences. The Journal of the American Medical Association, 293(19), 2362-2366.

Rauhvargers, A. (2013). Global university rankings and their impact: Report 2. Brussels: European University Association.

Rauhvargers, A. (2014). Where are the global rankings leading us? An analysis of recent methodological changes and new developments. European Journal of Education, 49(1), 29-44.

Redden, E. (2013). Scrutiny of QS rankings. Retrieved from: www.insidehighered.com/news/2013/05/29/methodology-qs-rankings-comes-under-scrutiny

Savino, M., & Usher, A. (2006). A world of difference: A global survey of university league tables. Toronto: Educational Policy Institute.

Scott, P. (2013). Ranking higher education. In P. T. Marope, P. J. Wells, & E. Hazelkorn, Rankings and accountability in higher education: Uses and misuses (pp. 113-128). Paris: United Nations Educational, Scientific and Cultural Organisation.

ShanghaiRanking Consultancy (2003). About us. Retrieved from: www.shanghairanking.com/aboutus.html

ShanghaiRanking Consultancy (2017). Academic ranking of world universities 2017. Retrieved from: www.shanghairanking.com/index.html

ShanghaiRanking Consultancy (2017). Discovering world-class: Academic rankings of world universities. Retrieved from: https://drive.google.com/file/d/0Bw2rAawlHlvBUlFDeElPRTlmMFU/view

ShanghaiRanking Consultancy (2017). ShanghaiRanking’s global ranking of academic subjects. Shanghai: ShanghaiRanking Consultancy.

Shastry, V. (2017). Inside the global university rankings game. Retrieved from: www.livemint.com/Sundayapp/SxzP28yPCeSyNUCDpfSYiJ/Inside-the-global-university-rankings-game.html

Soh, K. (2015). Multicolinearity and indicator redundancy problem in world university rankings: An example using Times Higher Education World University Ranking 2013-2014 data. Higher Education Quarterly, 69(2), 158-174.

Sorz, J., Fieder, M., Wallner, B., & Seidler, H. (2015). High statistical noise limits conclusiveness of ranking results as a benchmarking tool for university management. Retrieved from: www.academia.edu/11815558/High_statistical_noise_limits_conclusiveness_of_ranking_results_as_a_benchmarking_tool_for_university_management

Sowter, B. (2017). How did Vel Tech University get such a high rank on the top Asian universities list? Retrieved from: www.quora.com/How-did-Vel-Tech-University-get-such-a-high-rank-on-the-top-Asian-universities-list/answer/Ben-Sowter-l?srid=TZue

Sowter, B. (2017). Rankings - A useful barometer of universities’ standing. Retrieved from: www.universityworldnews.com/article.php?story=20161222112407402

Taylor, P., & Braddock, R. (2007). International university rankings systems and the idea of university excellence. Journal of Higher Education Policy and Management, 29(3), 245-260.

Thomson Reuters (2014). About highly cited researchers. Retrieved from: http://highlycited.com/info.htm

Toutkoushian, R. K., Teichler, U., & Shin, J. C. (2011). University rankings: Theoretical basis, methodology and impacts on global higher education. New York: Springer.

Van Raan, A. F. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133-143.

Waltman, L., Medina, C. C., Kosten, J., Noyons, C. M., Tijssen, R. J., van Eck, J. W., ... Wouters, P. (2011). The Leiden Ranking 2011/2012: Data collection, indicators and interpretation. Centre for Science and Technology Studies, Leiden University (The Netherlands), 791-802.

Wang, S. (2017). Shanghairanking’s Academic Ranking of World Universities 2017 press release. Retrieved from: www.shanghairanking.com/Academic-Ranking-of-World-Universities-2017-Press-Release.html

Yat Wai Lo, W. (2014). University rankings: Implications for higher education in Taiwan. Singapore: Springer.

 