Design of Excellence-Driven Policies and Initiatives

The design of excellence initiatives raises a number of questions:

• Does the initiative support the development of the university as a whole or of certain individual units (departments)?

• Does it encourage mergers and acquisitions?

• What is its scale in terms of money and time?

• How are the universities being selected?

• What are the universities expected to do? What do they propose to do?

It was found that the largest excellence initiatives focused on universities as a whole. Two explanations can be drawn from the interviews: (a) international rankings consider whole universities; (b) such a design allows all of a university's resources to be involved in its development.

It could be claimed that mergers were not the most important part of any of these initiatives. The exceptions are France, Denmark and China, countries which relied significantly on the merger mechanism (Salmi 2009). The Russian "Federal universities project", which was to establish large regional or macro-regional universities by merging existing ones (Froumin and Povalko 2014), also illustrates how mergers and acquisitions can be used to implement excellence policy.

Nevertheless, the reason not to use mergers widely proved to be simple: mergers take time, and their first stage is very risky because of disorganization and loss of priorities, as shown, for example, by the research on mergers in Finland by Ursin et al. (2010).

Famous mergers such as those that created Manchester University, Aalto University and Strasbourg University pursued the same objective, but took place outside of excellence initiatives.

The question of the phasing and timing of excellence initiatives is also an important part of policy design. It should be stressed that the number of launched programs and the phasing of excellence initiatives differ from country to country. Australia, Finland, Spain and Norway, for example, each ran a single excellence program. In Germany, South Korea and Taiwan, by contrast, multi-phase programs have been implemented. The duration of each initiative (or phase) ranges from 3 to 7 years in most cases (Salmi and Froumin 2013).

Most countries adopted open competition as the mechanism for selecting the particular universities expected to achieve global competitiveness. Competitive selection is usually based on the universities' previous records and their development plans. The German government, for example, evaluated 137 proposals submitted by graduate schools and clusters of excellence (Salmi and Froumin 2013). The exceptions are China and Taiwan. China picked universities for Project 985 in a directive way, after a review of their performance and potential. The Taiwanese government did the same, taking current university-industry cooperation as the key selection criterion. It is important to mention that in all cases the evaluation of these proposals involved international experts. For many countries, such involvement was the first step toward the real internationalization of expert decision-making in higher education. The Russian government decided to include the leaders of a number of foreign universities from the Top 100 of the Shanghai ranking on the selection committee. This selection committee was praised by the government and the universities for the quality and transparency of its work. As a result, all members of the selection committee were asked to stay on as members of the Project Implementation Oversight Committee, which was to monitor the implementation of strategic plans regularly.

The most interesting question in this part of the paper is what universities put in their plans. The answer is very straightforward: they include the actions that directly or indirectly improve the performance indicators used in world university rankings.

Simple calculations on the performance indicators used by rankings show that research and publication activities account for nearly two thirds of the overall ranking score on average. Indexes related to the quality of education account for about 20 %. International presence amounts to slightly more than 5 % in world university rankings. The universities and the ministries respond to this by making their improvement plans mainly research-oriented. The quality of education itself, as well as the international component in terms of students and faculty, remains on the periphery (Salmi and Froumin 2013).

The study has found that in most cases the design of the universities' plans is based on clear indicators of university performance. Much emphasis is placed on the idea that the aim of "pushing" universities toward excellence is not only to achieve specific indicators, but also to develop a within-university culture of self-development and change management. However, the majority of the plans lack specific design elements for achieving this goal.

Indeed, when governments start to push higher education institutions toward excellence, they impose demands and requirements on universities' performance and activities. Considering that world university rankings are the most frequently used composite indicators in excellence-driven policy implementation (Salmi 2009), governments are guided (sometimes blindly) by ranking parameters. The indicators of the global rankings are used to develop and plan not just the outcomes, but the process as well.

Under the influence of rankings, governments make direct demands on universities' productivity. At the same time, universities introduce internal performance criteria in order to be highly ranked in the future. Cumulatively, this leads the selected universities to change the content of their work significantly. Practice shows that in certain circumstances they do so for the worse rather than for the better.

As can be seen from the above, on the one hand the design of excellence policies fosters positive competition in the higher education system and triggers the development of research activities. On the other hand, policy design based on ranking indexes "governs how university administrators shape the policy and direction of institutions themselves in a bid to rise up the rank" (Barber et al. 2013, p. 20). Moreover, there are examples from different countries where the design of such initiatives leads to a destructive shift of emphasis at the universities participating in excellence programs.

Once the design of an excellence policy is developed and universities start to function under the new circumstances, governments need to support program implementation by monitoring preliminary results in order to adjust the program to changing conditions in a timely manner. Each government also needs to evaluate the outcomes of the program it has introduced. These two questions, the implementation and the outcomes assessment of excellence-driven policies and initiatives, are discussed in the next part of the paper.
