Methodological Considerations

Researchers can use many methods to evaluate policies, such as impact analyses, cost-benefit studies, policy process studies, or implementation studies, each of which can be based on qualitative and/or quantitative methods (for an overview, see Moran et al. 2006). Since we are trying to answer a question of meaning in this paper, we have opted for an implementation study based on interviews and documentary analysis. Indeed, we thought that the most straightforward way to answer questions about the 'meaning' of policies was to interview the people who deal with these policies on a daily basis. These people can tell us how policy instruments relate to their professional practice, and where the boundaries of the policies lie. We thus work in the tradition of 'interpretive policy research', which treats the discourse of people and policy documents as sources of 'data' (cf. Schatz 2009; Schwartz-Shea and Yanow 2012). Using 'discourse' as a source of information has two main implications.

The first implication is that conceptual boundaries that are clear in theory may not be so clear in practice. For instance, a document such as the 'European Standards and Guidelines on Quality Assurance' distinguishes between 'internal' and 'external' quality assurance (Dill and Beerkens 2010). 'Internal' quality assurance refers to evaluations initiated by people inside the universities; 'external' quality assurance, on the other hand, refers to evaluations undertaken by the government or other actors 'external' to the university. For our interviewees, however, both 'internal' and 'external' evaluations are experienced as imposed by 'others' (Geven et al. 2014), rendering this conceptualisation inadequate for understanding their experiences.

To avoid getting stuck in this conceptual swamp, we use one general term, 'evaluation', to denote the various assessments that take place in the universities. These include accreditation, quality assurance (both internal and external), research assessments, audits, and various other forms of assessing work in universities. In this, we follow the literature on the 'audit culture' in universities (Power 1997; Shore and Wright 1999). While this conceptual lumping may be confusing for those working in different fields of evaluation, we think that our approach stays close to how people inside universities think about all these forms of evaluation. Indeed, we were cautious about imposing our theoretical preconceptions onto our interviewees.

A second implication is that our results cannot be considered 'objective'. All our recommendations have been developed through a qualitative interaction between the researchers and the interviewees. Interviewees may confuse certain policy instruments with one another, or talk about seemingly unrelated issues. They may be experts on the subject, or it may be the first time they have thought about evaluations. Perhaps this is the closest we can come to an overview of policy implementation, since these are the very people dealing with implementation.[1]

Table 2 An overview of the universities in which we carried out fieldwork

University                   | Geographic location            | Type                                      | Size
-----------------------------|--------------------------------|-------------------------------------------|--------------------------------
University of West Timisoara | Timisoara (South-West Romania) | Comprehensive public university            | Medium (10,000–30,000 students)
Babes-Bolyai University      | Cluj-Napoca (central Romania)  | Comprehensive public university            | Large (>30,000 students)
Gheorghe Asachi University   | Iasi (North-East Romania)      | Specialised (technical) public university  | Medium (10,000–30,000 students)
Romanian American University | Bucharest (South Romania)      | Comprehensive private university           | Small (<10,000 students)
Lucian Blaga University      | Sibiu (central Romania)        | Comprehensive public university            | Medium (10,000–30,000 students)

Note: Based on correspondence with administrators at the universities

We carried out fieldwork in five Romanian universities, representing different institutional types and different geographical regions of Romania. These universities were selected because they are considered well-performing universities that take the evaluations seriously. In addition, we made sure to cover four different regions (South-West, Centre, North-East, South) and to include at least one private university (the Romanian American University). Table 2 gives a broad overview of these universities.

Field visits took place between December 2012 and June 2013, gathering the views of 310 interviewees in 186 conversations (some interviews had multiple participants). Interview participants were selected according to their professional roles as decision-makers (e.g. rectors, vice-rectors, deans), faculty (professors), administrators (e.g. secretaries), students, and QA personnel (see Fig. 1 for the distribution of interviewees). Interviews were carried out in English or Romanian, following the preference of the interviewee. Notes were taken in English and analysed using qualitative data analysis software. We developed a coding scheme to identify main themes and problems, as well as possible suggestions.[2]
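To make this coding and tallying step concrete, the minimal sketch below (in Python, with hypothetical theme labels and data structures; it does not reproduce our actual coding scheme) shows how coded interview notes can be counted by theme and by professional role:

    from collections import Counter

    # Hypothetical coded interview notes: each note is tagged with the
    # interviewee's role and the themes assigned during coding.
    notes = [
        {"role": "faculty", "themes": ["paperwork burden", "unclear criteria"]},
        {"role": "student", "themes": ["unclear criteria"]},
        {"role": "QA personnel", "themes": ["paperwork burden", "suggestion: fewer forms"]},
    ]

    # Tally how often each theme occurs, overall and per professional role.
    theme_counts = Counter(t for note in notes for t in note["themes"])
    role_theme_counts = Counter((note["role"], t) for note in notes for t in note["themes"])

    print(theme_counts.most_common())
    print(role_theme_counts.most_common())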

To better understand what kind of policies we are talking about, we also carried out an analysis of policy documents (primarily legal texts, policy papers, and quality assurance and evaluation guidelines). These were coded along the same lines as the interview notes, allowing us to map the concerns of interviewees onto specific policies and procedures. We present the results of this analysis in the following sections.
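Continuing the illustration above (again a hypothetical sketch, not our actual scheme), mapping interview themes onto documents coded with the same labels amounts to building an inverted index from theme to document:

    # Hypothetical policy documents coded with the same theme labels
    # as the interview notes.
    documents = [
        {"title": "QA guidelines", "themes": ["unclear criteria"]},
        {"title": "Accreditation methodology", "themes": ["paperwork burden", "unclear criteria"]},
    ]

    # Invert the coding: for each theme, list the documents that carry it,
    # so interviewees' concerns can be traced to specific policies.
    theme_to_docs = {}
    for doc in documents:
        for theme in doc["themes"]:
            theme_to_docs.setdefault(theme, []).append(doc["title"])

    for theme in ["paperwork burden", "unclear criteria", "suggestion: fewer forms"]:
        print(theme, "->", theme_to_docs.get(theme, []))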

Fig. 1 Interviewees' roles, broken down by university

  • [1] Some scholars may question whether our interpretations are the 'right' ones. To increase the validity of our findings, we allow others to replicate them (see next note).
  • [2] The interview transcripts and coding scheme are available upon written request to the authors. Interviewee names are anonymised here, but can be fully traced if more information is required. Each interviewee signed a consent form detailing this procedure.
 