Mapping the Governance of the Bologna Process

Setting Goals and Delegating Responsibilities

The BP/EHEA conforms largely unproblematically to the first two dimensions of the model, though it has drifted somewhat from the ideal-type over time. The Bologna Declaration set out six clearly delineated objectives with relatively broad margins of interpretation, whose detailed translation was then left to the competent national or sub-national authorities. It called, for example, for the “adoption of easily readable and comparable degrees”, based “on two main cycles”, which sets a clear direction without prescribing particular structures. The early development of Bologna, moreover, could be seen as focusing on an interconnected set of goals concerned with mobility, comparability and quality assurance.

This balance, relative to the ideal-type of experimentalist governance, was somewhat blurred in the later development of the process. The more recent trend has been to add further topics or areas for discussion while eschewing more specific goal setting. Wide-ranging topics such as the relationship of the EHEA to the European Research Area (ERA), “lifelong learning”, and the “social dimension” have thus been added, but for the most part without the participating states agreeing clear objectives comparable to those seen in the earlier stages of the process.

The absence of goal setting does not, of course, preclude meaningful discussions in transnational fora or the opportunity to share “best practice”. Comparative data may also be gathered under these rubrics, surveying national patterns and practices. The absence of specifically defined objectives does, nonetheless, have inescapable implications for the more direct use of benchmarking and peer review techniques.

Reporting and Peer Review

Overall, this is a somewhat more problematic phase of the process, though some commentators have viewed it as a “success story”. Ravinet (2008), for example, argues that it is essentially through the effective use of benchmarking and peer review that the Bologna Process may be seen to have moved from a system of “voluntary participation” to one of “monitored cooperation”. On her analysis, “countries feel increasingly bound by their commitments” because of: (1) the multiplication of information sources acting as a check on the accuracy of national reports, and (2) the strength of the socialization pressures (“naming and shaming”) exerted by peers on poor performers to effect the necessary reforms.

This appears, however, to be an overly optimistic account of the process, given that, at a minimum, a marked unevenness of implementation has routinely been cited as a major problem (Amaral and Veiga 2012). These findings also stand squarely at odds with those of Dr. Gangolf Braband and myself in our 2010–2012 “Euro-Uni” research project.[1] In our interview sample, all European-level participants highlighted the excessive presence of “green” in BP benchmarking exercises (indicating full achievement of the relevant objective in a “traffic light” system), noting inter alia the difficulty of seeking to “dissociate implementation from prestige”[2] (particularly in the case of generally poorly performing states). National-level participants, conversely (and predictably), defended the robustness of their reporting techniques and attendant data, but even they did not do so in terms which would support the second (socialization) component of Ravinet's analysis. While stressing that they accurately reported outcomes, national officials equally stressed that the use made of the results, i.e. whether they would act as a spur to (further) reform, was essentially determined by national agendas. “European pressure”, in other words, largely came into play only where it corresponded to prior (often “uploaded”) national commitments. In the words of one long-serving national official, “You put something on the European agenda because it suits your own domestic needs”, adding that “It creates a pressure to follow up a commitment you made in the first place. It is a bit of a chicken and egg question.”[3]

The 2012 BP Implementation Report also appeared to acknowledge this more critical reading of the reporting and peer review process. The report, tellingly, noted that “the colour dark green is less prevalent in some action lines than before” (EACEA 2012, p. 7), reflecting “a more nuanced insight” as regards the yardsticks used for measurement or an extension in the scope of the indicator. The affirmation is obviously one of improvement, but it also acknowledges a fairly widespread sense of the limitations of the (previous) reporting system.

An overall balance sheet of the (in-)effectiveness of the reporting process is beyond the scope of this short paper, but the broad tone of the 2012 report would seem to capture the underlying reality. Essentially, the process of reporting and peer review has progressively improved over time: primary information gathering has become more systematic, external checks have multiplied, and the evaluation of data has become more consistent. This does not preclude the possibility of (egregious) national misreporting in individual cases, by actors who “manipulate the information they provide so as to show themselves, deceptively, to best advantage” (Sabel and Zeitlin 2010, p. 13). It must equally be qualified by an awareness of the possible limitations of the sources used for the triangulation of data, which are potentially subject to the same unevenness as the primary data they are meant to check (cf. Geven 2012). It does, however, point to a situation in which the mechanisms of reporting and peer review can reasonably be argued to have attained a minimum level of robustness, such that they are not, or are no longer, the weak link in the chain of a model of experimentalist governance. At the level of the overall process, the quality of the information available appears broadly sufficient to allow for meaningful, evidence-based deliberation. If this deliberation has not taken hold in the terms or to the extent that one might have expected, the key thus lies elsewhere, as discussed below.

  • [1] The project, funded as a competitively awarded internal research project by the University of Luxembourg, sought to examine the dialogical dynamics leading to the creation of a “higher education policy space” spanning the national and European levels. In the course of our research, we conducted semi-structured interviews with 15 senior national and European-level policymakers, focusing on the European institutions and selected West European states. See further Harmsen (2013).
  • [2] Interview with a senior European-level official, 22.07.2011
  • [3] Interview with a senior national-level official, 06.06.2012
 