Policy Implementation
How are the selected means (policy instruments) applied to achieve the formulated goals of a structural reform? Over the years, public policy and higher education implementation studies have convincingly demonstrated that a policy is not necessarily implemented ‘according to plan’. During implementation, reform plans can take their own course. Moreover, policies may deliberately remain unspecific about their implementation, acknowledging that implementers may be in the best position to take decisions during the implementation process.
In the policy implementation literature, three perspectives prevail: top-down, bottom-up and synthesis perspectives. The top-down approach is based on a set of assumptions, such as policies having clearly defined goals and instruments and policymakers having good knowledge of the capacity and commitment of the implementers (Birkland, 2001). The focus is on creating structures and controls to secure compliance with the goals set at the top. Early implementation studies in the 1970s revealed that these assumptions are frequently not met (Pressman and Wildavsky, 1973; Lipsky, 1980). They pointed, among other things, to the impact that implementers (street-level bureaucrats) can have on the actual process and outcome of a reform. Because those who implement the reform always have some discretionary power, they are ultimately decisive for its implementation. In general, higher education research suggests that this applies strongly to higher education because of the characteristics of its institutions (e.g. van Vught, 1995).
The assumptions of the bottom-up approaches stand in sharp contrast to the top-down assumptions. Goals are considered ambiguous, and compliance can be problematic when the values and interests of programme designers and implementers differ (Torenvlied, 1996; John, 1998).
An example of a synthesis approach is Sabatier’s (1988) Advocacy Coalition Framework. While starting with a bottom-up approach, Sabatier also incorporates top-down elements in his framework. He explicitly recognises that implementation does not take place in a one-to-one relationship between designers, implementers and targets, but is rather contained within a political (sub)system.
The implementation process and its outcomes depend on a large number of factors and conditions, such as the availability of time and sufficient resources; the assumptions of the policy itself, its clarity and credibility; the interests, views, expertise, resources, capacities and support of the implementers; and risk management, ownership, leadership and securing buy-in from those affected (e.g. Hogwood and Gunn, 1984; Goggin et al., 1990; Birkland, 2001).
Higher education studies on policy implementation also point to the distance between the policy plan and those at the shop-floor level who are expected to make the reforms work (e.g. Cerych and Sabatier, 1986; van Vught, 1989; Gornitzka et al., 2007). Higher education institutions are autonomous institutions rather than hierarchically subordinate bureaucracies, and as a result policies may not meet their initial objectives, as a number of studies convincingly show (e.g. Kogan et al., 2006; Kohoutek and Westerheijden, 2014; Westerheijden et al., 2007; Musselin, 2005; Trowler, 2002). In short, the particular nature of higher education institutions, generally known for their fragmented, bottom-heavy decision-making authority and loosely coupled structures, as well as the nature of the goods and services they are supposed to deliver, is likely to affect the implementation of structural policy reform.
Furthermore, these higher education studies indicate that compatibility, relative advantage (profitability), complexity, observability and organisational capacity explain the adoption of a reform (van Vught, 1989; Bartelse, 1999). Compatibility refers to the degree to which the policy ‘fits’ the existing institutional context. Profitability depends on the advantages of compliance for those affected by the reform; this concerns buy-in and agreement on objectives, and denotes whether those involved think they will reap (sufficient) benefits from the reform. Complexity of reforms denotes the number of goals pursued and their interdependence; increasing complexity makes reforms less likely to succeed, as more things can go wrong in implementation (Sanderson, 2000). Observability has to do with the existence of (formal or informal) indicators of the reform. In recent years, much attention has been given to the development of indicators to assess reform success; it can be questioned whether this has led to reforms focusing on achieving what is measurable rather than aiming for what is relevant (Hood, 2006; King et al., 2008). Finally, organisational capacity is a measure of the ability of those affected to change their structure, behaviour and culture to comply with the reform's goals and aims.