Global and National Rankings

University rankings classify higher education institutions through combinations of various factors. The great diversity in rating methodologies causes a lack of consensus and sometimes raises doubts about their consistency. We know that any ranking is controversial and no ranking is absolutely objective (Liu, 2015). But looking beyond the ranking game, with its winners and losers, some studies seek to identify and compare the main university rankings. Rankings can be classified according to geographic scope: national, if they refer to one country; regional, if they cover a region; and global, if they have a supranational scope, involving several countries.

Çakır and colleagues (2015) made a comparative study of global and national rankings. They focused on eight global rankings, namely ARWU, HEEACT, Leiden CWTS, SCImago, Quacquarelli Symonds (QS), Times Higher Education (THE), URAP, and Webometrics, and on 12 national ranking systems from different countries: Brazil, Chile, China, Kazakhstan, Lithuania, Macedonia, Malaysia, Pakistan, Poland, the UK, the USA, and Turkey. They found that national rankings, specific to each country, tend to include a large number of indicators primarily focused on education and institutional parameters, while global ranking systems tend to include fewer indicators and focus on research performance. This can be seen in Table 5.2, which shows the top ten indicators employed by the national and global ranking systems.

Çakır and colleagues (2015) identified a total of 210 indicators (190 in national rankings and 44 in global rankings). They organized these indicators into four dimensions: (1) indicator coverage, (2) size dependency, (3) input and output characteristics of the indicator, and (4) indicator subcategories (research indicators, education indicators, and institutional facts and figures). Note that research indicators include academic publications and impact, research capability and funding, and technology transfer. In general, collaboration indicators do not yet seem to be a concern of either national or global rankings (Table 5.3).

The dissemination of national rankings can be positive. In developing countries, for example, they can provide a rich picture of the status of higher education in the country. They also have the potential to improve existing global ranking methodologies through comparative analysis and benchmarking (Çakır et al., 2015).

Table 5.2 Top ten indicators employed by the national and global ranking systems

Top indicators in national rankings         Frequency (N = 12)
Student per faculty (full time)             10
Quality of entering students                8
Faculty with PhD (%)                        8
Publication (SCI, SSCI)                     5
Percentage of international students        5
International publications per faculty     5
Expenditure per student                     4
Number of accredited doctoral programs      4
Total citations                             3
Quality of education                        3

Top indicators in global rankings           Frequency (N = 8)
Total citations                             3
Publication (SCI, SSCI)                     3
Excellence rate (SCImago top 10%)           2
Research excellence survey                  2
Student per faculty (full time)             2
Publication (SCI, SSCI) per faculty         2
Percentage of international students        2
Percentage of international faculty         2
Number of inlinks from third parties        1
Citation per faculty                        1

Table 5.3 Indicators classification

Coverage of indicator
  • National: used only by the national ranking systems
  • Global: used only by the global ranking systems
  • National and global: used by both the national and the global ranking systems

Size dependency
  • Size-independent indicators
  • Size-dependent indicators
  • Subjective indicators (based on opinion surveys)

Input and output characteristics of the indicator
  • Output: research output (e.g., number of publications) and educational output (e.g., doctoral degrees awarded, employer satisfaction with graduates)
  • Process: educational, managerial, and research processes (e.g., academic governance, institutional assessment on the curriculum)
  • Input: financial, educational, and research resources (e.g., budget, research funds)
  • Institutional properties: institution-specific capacities and capabilities (e.g., number of study programs, number of faculty/staff)

Indicator subcategories
  • Research indicators: academic publications and impact; research capability and funding; technology transfer
  • Education indicators: student profile and services; academic programs and accreditation; alumni
  • Institutional facts and figures: teaching quality assurance and assessment; postgraduate student profile; faculty profile and development; educational facilities and resources; managerial and organizational activities


The European University Association examined the most popular global university rankings, in particular the ARWU, THE, and QS rankings, as well as rankings focused solely on research, such as the Taiwanese HEEACT and the CWTS Leiden Ranking (Rauhvargers, 2013). This report results from the work of the EU Working Group on Assessment of University-Based Research (AUBR), which focused on the methodologies of research evaluation rather than on rankings. Special attention was also given to the development of multi-indicator resources such as the EU-supported U-Map and U-Multirank, and to the OECD AHELO feasibility study on student learning outcomes. The study further indicated how rankings create opportunities and threats for universities' development, thus leading to different results depending on how they are used. At a strategic level, some universities use data compiled from rankings for benchmarking exercises that in turn feed into institutional strategic planning.

If we consider how each ranking measures institutional quality, it is possible to identify the limitations of each ranking. This understanding makes us aware of how ranking results may be biased. On a more pragmatic level, it is necessary to recognize that rankings can be a key factor in obtaining additional resources, recruiting more international students, and attracting strong partner institutions.

Simon Marginson (2014) critically compared university rankings: (1) the Shanghai Ranking (ARWU), (2) Leiden (CWTS), (3) QS, (4) SCImago, (5) THE, and (6) U-Multirank. He uses six criteria to evaluate the various rankings and, after analyzing them, considers that the Leiden and SCImago rankings provide the most appropriate information for the Social Sciences.

Next, we briefly present the Leiden ranking because of its innovative use of collaboration and impact indicators, as well as its ability to provide more appropriate information regarding the Social Sciences.

Leiden Ranking

The CWTS Leiden ranking 2015 offers valuable information about the scientific performance of 750 major universities worldwide. Except for the publication output indicator (P), all included indicators come in two variants: size-dependent and size-independent. Size-dependent indicators are obtained by counting the absolute number of a university's publications that have a certain property, while size-independent indicators are obtained by calculating the proportion of a university's publications with that property. For instance, the number of highly cited publications of a university and the number of a university's publications coauthored with other organizations are size-dependent indicators.
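To make the distinction concrete, the following sketch computes a size-dependent count and the corresponding size-independent proportion for a handful of hypothetical publication records; the field names and data are illustrative assumptions, not Leiden Ranking code or data.

```python
# A minimal sketch of size-dependent vs. size-independent indicators,
# using hypothetical publication records (not Leiden Ranking data).

publications = [
    {"title": "Paper A", "coauthored_externally": True},
    {"title": "Paper B", "coauthored_externally": False},
    {"title": "Paper C", "coauthored_externally": True},
]

# Size-dependent indicator, e.g. P(collab): the absolute number of a
# university's publications co-authored with other organizations.
p_collab = sum(1 for pub in publications if pub["coauthored_externally"])

# Size-independent indicator, e.g. PP(collab): the proportion of such
# publications among the university's total output.
pp_collab = p_collab / len(publications)

print(f"P(collab) = {p_collab}")        # 2
print(f"PP(collab) = {pp_collab:.2f}")  # 0.67
```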

The Leiden ranking provides three types of indicators: (1) publication indicators, (2) citation impact indicators, and (3) scientific collaboration indicators. With respect to impact indicators, note that for the 2015 ranking citations were counted up to the end of 2014 and self-citations were excluded. All indicators, except for TCS (total citations) and MCS (mean citations), are normalized for differences in citation practices between scientific fields. Tables 5.4 and 5.5 show the impact and collaboration indicators, with their acronyms and definitions.
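As an illustration of field normalization of this kind, the short sketch below computes an MNCS-style score from hypothetical citation counts and assumed field/year baselines; it is a simplified illustration, not the CWTS implementation.

```python
# A minimal sketch of field normalization in the spirit of MNCS, with
# hypothetical citation counts and field/year baselines (assumed values).

records = [
    {"citations": 12, "field_year_average": 6.0},
    {"citations": 3,  "field_year_average": 6.0},
    {"citations": 20, "field_year_average": 5.0},
]

# Each publication's citations are divided by its field/year baseline;
# the mean of these ratios is the MNCS-style score.
normalized = [r["citations"] / r["field_year_average"] for r in records]
mncs = sum(normalized) / len(normalized)

print(f"MNCS = {mncs:.2f}")  # a value of 2 would mean twice the field average
```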

The CWTS Leiden ranking 2015, covering the period 2010-2013, was based on publications indexed in Thomson Reuters' Web of Science (Science Citation Index Expanded, Social Sciences Citation Index, and Arts & Humanities Citation Index). Books, publications in conference proceedings, and publications in journals not indexed in Web of Science were not included. Within Web of Science, only the so-called major publications have been included; these are publications in international scientific journals, according to the Web of Science criteria.

Table 5.4 Impact indicators: Leiden ranking

  • P(top 1%) and PP(top 1%): the number and the proportion of a university's publications that, compared with other publications in the same field and in the same year, belong to the top 1% most frequently cited.
  • P(top 10%) and PP(top 10%): the same, but for the top 10% most frequently cited.
  • P(top 50%) and PP(top 50%): the same, but for the top 50% most frequently cited.
  • TCS and MCS: the total and the average number of citations of the publications of a university.
  • TNCS and MNCS: the total and the average number of citations of the publications of a university, normalized for field and publication year. An MNCS value of 2, for instance, means that the publications of a university have been cited twice above the average of their field and publication year.

Table 5.5 Collaboration indicators: Leiden ranking

  • P(collab) and PP(collab): the number and the proportion of a university's publications that have been coauthored with one or more other organizations.
  • P(int collab) and PP(int collab): the number and the proportion of a university's publications that have been coauthored by authors from two or more countries.
  • P(industry) and PP(industry): the number and the proportion of a university's publications that have been coauthored with one or more industrial partners.
  • P(<100 km) and PP(<100 km): the number and the proportion of a university's publications with a geographical collaboration distance of less than 100 km, where the geographical collaboration distance of a publication equals the largest geographical distance between two addresses mentioned in the publication's address list.
  • P(>5000 km) and PP(>5000 km): the number and the proportion of a university's publications with a geographical collaboration distance of more than 5000 km.

Source: Leiden ranking (2016)
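The geographical collaboration distance underlying the last two indicators can be illustrated with a small sketch: for one publication, compute the great-circle distance between every pair of addresses in its address list and keep the largest. The coordinates and the haversine helper below are illustrative assumptions, not the ranking's actual procedure.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def collaboration_distance_km(addresses):
    """Largest pairwise great-circle distance among a publication's addresses."""
    if len(addresses) < 2:
        return 0.0
    return max(
        haversine_km(a[0], a[1], b[0], b[1])
        for i, a in enumerate(addresses)
        for b in addresses[i + 1:]
    )

# Example addresses (latitude, longitude): Aveiro (Portugal) and Leiden
# (Netherlands), roughly 1600 km apart.
addresses = [(40.63, -8.66), (52.16, 4.49)]
d = collaboration_distance_km(addresses)
print(f"distance = {d:.0f} km, long-distance collaboration: {d > 5000}")
```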

The platform provides statistics not only at the level of science as a whole, but also for the following five fields of science: biomedical and health sciences, life sciences and earth sciences, mathematics and computer science, physical sciences and engineering, and humanities and social sciences.

Note that the Leiden ranking has restrictions and does not take the total scientific output into account. Only the so-called Core Collection of Web of Science is considered as a source. The Core Collection is the set of publications in international scientific journals that meet the following Web of Science criteria:

  • 1. The publication has been written in English.
  • 2. The publication has one or more authors (anonymous publications are not allowed).
  • 3. The publication has not been retracted.
  • 4. The publication has appeared in a core journal.
  • 5. The journal has an international scope, as reflected by the countries in which researchers publishing in the journal and citing the journal are located.
  • 6. The journal has a sufficiently large number of references to other core journals, indicating that the journal is in a field that is suitable for citation analysis. Many journals in the field do not meet this condition; the same applies to trade journals and popular magazines.

Thus, the Leiden core criteria limit the number of journals to be considered. For example, Arts and Humanities or Management trade journals are excluded.
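As a simple illustration, criteria of this kind can be applied as a filter over publication records; the record fields below are hypothetical stand-ins for the underlying metadata, not actual Web of Science fields.

```python
# A minimal sketch of applying core-publication criteria as a filter.
# The record fields are hypothetical, not Web of Science metadata.

def is_core_publication(pub):
    """Return True if a publication record meets all the listed criteria."""
    return (
        pub["language"] == "English"          # criterion 1
        and len(pub["authors"]) >= 1          # criterion 2: not anonymous
        and not pub["retracted"]              # criterion 3
        and pub["in_core_journal"]            # criterion 4
        and pub["journal_international"]      # criterion 5
        and pub["journal_cites_core"]         # criterion 6
    )

records = [
    {"language": "English", "authors": ["A. Silva"], "retracted": False,
     "in_core_journal": True, "journal_international": True, "journal_cites_core": True},
    {"language": "English", "authors": [], "retracted": False,
     "in_core_journal": True, "journal_international": True, "journal_cites_core": True},
]

core = [r for r in records if is_core_publication(r)]
print(len(core))  # 1: the anonymous record is excluded
```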

When dealing with the various rankings, it is important to know what the indicators and the inclusion and exclusion criteria are, so that we can understand what is being evaluated.

U-Multirank

Another approach of interest for visualizing collaboration data is U-Multirank (http://www.umultirank.org/). U-Multirank presents itself, through the project Web site, as a new multidimensional, user-driven approach to the international ranking of higher education institutions. This ranking is not based only on research; it also takes into account the various aspects of the diverse missions of universities, such as teaching and learning, research, knowledge transfer, international orientation, and regional engagement.

Looking in detail only at the research dimension, we have the following indicators: citation rate, research publications (in absolute numbers and size-normalized), external research income, art-related output, top-cited publications, interdisciplinary publications, post-doc positions, and publication output. The sunburst graph is downloadable and can be used to give an at-a-glance picture of an institution's performance in the key dimensions of university activity. So, instead of a ranking position resulting from the aggregation of information, we have an overview of all indicators at the institutional level, represented by the size of the rays: a large radius means high performance on that indicator. If we take as an example the case of the University of Aveiro, Portugal, the size of each sector indicates the intensity of performance in that dimension (see Fig. 5.1). While this methodology provides deeper information, collaboration processes, a key factor in research production, are not yet observable. It is possible to visualize the different profiles of institutions and to compare them in different aspects of their activities, which allows us to infer their degree of expertise in a given area.


Fig. 5.1 University of Aveiro U-Multirank performance profile. (Source: U-Multirank)

Instead of a league table, we have a tool to see the expertise of universities and to choose partners with complementary expertise.
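A sunburst-style profile of this kind can be approximated with a polar bar chart, as in the sketch below; the indicator names and scores are hypothetical and are not actual U-Multirank results for any institution.

```python
import math
import matplotlib.pyplot as plt

# A minimal sketch of a sunburst-style performance profile drawn as a polar
# bar chart; indicator names and scores are hypothetical.
indicators = ["Citation rate", "Top cited publications", "External research income",
              "Interdisciplinary publications", "Post-doc positions"]
scores = [0.9, 0.7, 0.5, 0.8, 0.4]  # 0 (weak) to 1 (strong); radius encodes performance

angles = [2 * math.pi * i / len(indicators) for i in range(len(indicators))]
width = 2 * math.pi / len(indicators)

ax = plt.subplot(projection="polar")
ax.bar(angles, scores, width=width, bottom=0.0, align="edge", edgecolor="white")
ax.set_xticks([a + width / 2 for a in angles])
ax.set_xticklabels(indicators, fontsize=8)
ax.set_yticklabels([])  # hide radial tick labels; only relative radius matters
plt.title("Sunburst-style profile (hypothetical data)")
plt.show()
```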

 