Developing a framework for research evaluation in complex contexts such as action research

Eileen Piggot-Irvine and Deborah Zornes


This chapter describes the establishment and application of a new research evaluation framework: the Evaluative Action Research framework (EvAR), which was employed in the Evaluative Study of Action Research (ESAR). We use italics throughout the chapter to illustrate the way in which the EvAR framework phases and elements were applied and tested on the ESAR as an example.

We begin with an overview of the EvAR framework, which our seven-strong team of international researchers developed in order to provide detail and clarity for the way we would conduct the ESAR. This beginning section of the chapter introduces the six phases and multiple elements of the EvAR framework as well as its visual representation. The six phases are: 1) preparation; 2) reconnaissance; 3) implementation; 4) review of achievement; 5) reporting on achievements/recommendations and knowledge mobilisation; and 6) continued action for improvement. In the overview, we include two features of the EvAR which have been given limited emphasis in other evaluative frameworks. First is the critical importance of establishing protocols for an evaluative research team working together. Second is the importance, little mentioned in other evaluative frameworks, of conducting an initial deep review of the literature to ensure the evaluative framework matches the context to be evaluated. The overview section concludes with discussion of the way the EvAR aligns with the underpinnings and values of action research (AR).

Six sections follow, where each phase of the framework is detailed using ESAR as an illustration (shown in italics). Particular emphasis is placed on the following elements within the implementation phase: setting purposes, benefits, indicator establishment, participants/boundary partner engagement, methods utilised, and analysis of data.

In the conclusion section, we emphasise the importance of the process of a review and reflective stance when employing a research evaluation framework and provide a summary of the team’s reflections throughout the ESAR project as an example. Such a stance is seldom featured in traditional evaluative frameworks though it is a strong component of AR. Extensive knowledge mobilisation is discussed next in the conclusion. Finally, we sum up the effectiveness of the EvAR as an evaluative framework developed for the ESAR project.

Overview: the evaluative framework

In this overview, we provide an outline of the EvAR framework phases and elements. We want to clarify that our development of the framework did not occur until some months after the ESAR instigation. It was only after the team had somewhat intuitively engaged in what we describe as Phases 1 and 2 that we realised we were creating an evaluative framework which differed from many others we explored. Such a distinctive framework was essential for our work in evaluating over 100 complex AR projects globally in the ESAR. A different framework was also needed to fit our equally complex international research team who wished to uphold the principles and values associated with AR itself in the evaluative research.

The EvAR (Figure 4.1) begins with the crucial initial step of a research team creating clarity around the way they will work together. This preparation phase (Phase 1) includes establishment of some form of agreement and protocols covering principles and values. We found little emphasis on this step in other framework outlines.

The reconnaissance phase (Phase 2) is also rarely mentioned in evaluative frameworks. In this phase, two steps are included which are associated with becoming informed prior to implementing a research evaluation. First, we recommend a probing literature review be conducted prior to initiating evaluative research in order to gain foundational understanding of the complexity of research evaluation frameworks. This is followed by an investigation of the context within which such a framework will be employed. We offer that such an exploration is critical to becoming sufficiently informed to move to the second step of selecting and justifying an appropriate framework and constituting elements.

The implementation phase (Phase 3) of the EvAR incorporates process elements. Phase 3 begins with setting purposes, identifying the benefits of the study, articulating objectives and research questions, and establishing indicators for evaluation. The importance of determining the appropriate participants in a study is outlined, as is stakeholder (also noted as boundary partners in this chapter) engagement. Next, elements covering the methodology and methods selected and data analysis employed are noted. We do not offer a prescriptive approach in the EvAR framework, but rather allow for deliberate flexibility and considerable choice of tools.

Following the articulation of constituting elements, the review of achievement phase (Phase 4) is discussed. In this phase, the focus is on gathering data regarding the effectiveness of the research evaluation process with meta-reflection and reflexivity as guiding approaches.

The review of achievement phase is followed by the reporting achievements, recommendations, and knowledge mobilisation phase (Phase 5). We believe that, ideally, reporting out and knowledge mobilisation could occur throughout


the entire application of the framework in much the same way that AR itself often includes iterative reporting to enhance ownership and further input on findings.

The continuation arrow shown in Figure 4.1 for the EvAR indicates that ongoing action is likely to result from Phases 4 and 5. The sixth and final phase of the framework, the continued action for improvement phase (Phase 6), encourages the evaluative researchers to be responsive to emergent needs for further improvement in their evaluation.

Figure 4.1 Evaluative Action Research (EvAR) framework

The EvAR, as its name suggests, follows an AR philosophy and process. In summary, we developed an AR-based evaluative framework that could be utilised to evaluate AR in our ESAR in what could be described as a meta-AR approach to “function as an umbrella process, a meta-methodology, under which a variety of flexible methods can be assimilated” (Dick et al. 2015, 38). Meta implies that an AR model is employed at a higher level; in the case of the EvAR, it is AR on AR. We note strongly, however, that the EvAR could be adopted as an evaluative research framework for many other types of research.

The framework has all the hallmarks of AR, including combined data collection (systematic research and inquiry) and change (action) phases (Davison, Martinsons, and Kock 2004; Dehler and Edmonds 2006; Gosling and Mintzberg 2006; Piggot-Irvine et al. 2011). Like AR, the EvAR transcends disciplinary, institutional, and international boundaries with a central focus on research with (and alongside) boundary partners (all stakeholders) and communities (Cardno 2003; Greenwood and Levin 2006; Reason and Bradbury 2001; Stringer 2014). This inclusive quality urged in AR and the EvAR is associated with enhancing the capacity of groups and organisations to own and sustain change. As Greenwood and Levin indicated, ‘AR is a set of self-consciously collaborative and democratic strategies for generating knowledge and designing action in which trained experts in social and other forms of research and local stakeholders work together’ (2006, 1). The change orientation alongside the underpinning collaborative and democratic values and strategies sets AR, and EvAR, apart from most traditional forms of research and evaluation. We strongly believed that any evaluative framework which was low in collaborative, participative, democratic, and transformative intent would likely be rejected by those involved in the AR projects we wanted to investigate, and most importantly that such a framework would likely lead to low ownership of findings by those involved in the projects.

The EvAR, like AR, also includes an emphasis on openness to unpredictability and flexibility (Coghlan and Brannick 2014; Stringer 2014). The EvAR therefore sits well within broader thinking about complexity where order and predictability are limited (Kurtz and Snowden 2003). Such openness matched our needs for the ESAR because we wanted to be responsive to increasing demands to track, demonstrate, and measure the impacts and outcomes of research (Axelrod 2002; Carroll 2003; Conteh 2013; Giroux and Giroux 2006a, 2006b, 2009; Popp et al. 2014), but we also wanted flexibility in our framework sufficient to deal with the highly diverse 100-plus AR projects to be evaluated.

Flexibility is also linked to the EvAR's pragmatic orientation to method employment, which had previously been designed by one of the team (Piggot-Irvine 2012a). Greenwood (2014) defined such a pragmatic approach in AR as that which:

will use theory and methods from any corner of the sciences, social sciences and humanities if they offer some hope of helping a collaborating group move forward. If numbers are needed, statistical social science, surveys and other formal techniques can and will be used.

It is also pragmatic, in Metcalfe’s (2008) terms, in that findings can be created which are meaningful and help those impacted to construct understanding and design actions relevant to their community. The framework, in keeping with AR, has a cyclic, iterative depiction that sometimes has spin-off (McNiff 1988), or slightly divergent, cycles. This cyclical orientation (iterative planning, acting, reflecting, and evaluating within larger cycles or phases) is supported by multiple authors (e.g., Coghlan and Brannick 2014; Piggot-Irvine et al. 2011; Preskill and Torres 1999; Sankaran, Tay, and Orr 2009).

The EvAR is also associated with further underpinning principles that are not so typical of AR, with some indicating enhanced expectations of rigour. The latter include: focusing on research that evaluates precursors, processes, outcomes and impacts; establishing clarity of this focus via evaluation indicators that are both bibliometric and non-bibliometric; and considering complexity by seeking to understand meaning (largely through qualitative [Ql] data) as well as searching for causality (through quantitative [Qn] data). To avoid repetition, we note that each of these principles is covered later in discussion of the individual elements of the framework.

Each phase of the EvAR is detailed in the subsequent sections of this chapter, using the ESAR as an example (illustrated in italics).

Phase 1 - Preparation

The inclusion of a preparatory phase, as we describe it, has not been mentioned in traditional evaluative research frameworks we explored. The following outline of Phase 1 therefore exclusively describes the employment of the phase in the ESAR project as an example.

In our initial work together in the ESAR as seven internationally dispersed researchers with varying levels of understanding and experience of AR, evaluation, or both, we decided that we could not progress without establishing consensus about commonality of values, principles, and protocols for working as a cohesive, highly collaborative team. Such an important element is widely valued in AR itself, and was a priority for the ESAR team because, as noted in Rowe, Graf, Agger-Gupta, Piggot-Irvine, and Harris (2013), ‘the grounding of a change initiative in early stage elements of thoughtful inquiry, collaboration, dialogue and reflection often mitigates resistance and enhances progress on implementing a change agenda’ (4).

The team spent two days on this preparatory phase, and the resulting documents developed have continued to provide reference points throughout our work together. Without extensively reporting on the content of the documents, we note briefly that working together authentically in collaboration with each other and with all boundary partners dominated our protocols. The approach to collaboration drew strongly upon the six preconditions for collaboration outlined in Piggot-Irvine (2012b) as: ‘trust; shared goals; shared language; a desire to participate; openness and listening; and passion for the process’ (2). We also noted the following advantages of collaboration as stated in Piggot-Irvine and Bartlett (2008): ‘the advantages to the participants of collaboration in action research are cited as many and various (D’Arcy 1994; Kemmis and McTaggart 1988; Tripp 1990; Wadsworth 1998). For one thing, it can allow for public testing of private assumptions and reflections; that is, it helps to avoid self-limiting reflection (Schön 1984). Collaboration can also enhance ownership and commitment to change and it can leverage the change to a level frequently unattainable through individual reflection alone.’


As a research team, we felt that just collaborating, as a principle, would be insufficient and that the collaborative and democratic values of AR needed to be linked to dialogue if trust were to be an outcome. Dialogue is associated with open, non-defensive (Argyris 2003) interactions where bilateral (considering two sides) and multilateral (considering multiple sides) conversation dominates. Dialogue is characterised by two essential components. The first is the offering of openness about perspectives by those collaborating, alongside provision of evidence and reasoning behind those perspectives (an advocacy approach). The second component is that of receiving, checking, and understanding others’ perspectives without pre-judgement or assumptions (an inquiry approach), so that mutual understanding can be reached. Preskill and Torres (1999) summed up the dialogue orientation resulting from this advocacy and inquiry balance in suggesting: ‘individuals seek to inquire, share meanings, understand complex issues, and uncover assumptions’ which facilitates ‘learning processes of reflection, asking questions, and identifying and clarifying values, beliefs, assumptions and knowledge’ (53).

Once the ESAR team had created a shared understanding of the values and protocols for working together, we were ready to dig deeply into the literature associated with our framework development task in the reconnaissance phase.

Phase 2 - Reconnaissance

As suggested in our introduction, we propose that considerable investigation of evaluative frameworks, and of existing knowledge of the context within which a framework will be employed, could precede construction of any research evaluation framework. In this statement, there is a premise that we believe a conceptual framework is necessary despite it having both advantages and disadvantages. Baxter and Jack (2008), for example, suggested that one advantage of a framework lies in its ability to serve as an anchor for a study. They also noted, however, that framework construction may be constraining and limit an inductive approach. In our experience, such reconnaissance investigation can therefore help to clarify why and how any research evaluation might occur.

We propose a reconnaissance phase in the framework which includes literature reviews on the two foundational topics of: 1) research evaluation frameworks themselves; and 2) the context of the specific evaluation research to be conducted in order to enhance the possibility of a framework-context match. The following discussion outlines the two foundational topics with illustration via the ESAR.

Foundational topic 1 - exploring frameworks

A vast range of research evaluation frameworks exist, including: the Research Excellence Framework (Parker and van Teijlingen 2012); STAR METRICS, which aims to ‘assess and understand the performance of research and researchers, largely for accountability purposes, using data mining and other novel low burden methods’ (Guthrie, Wamae, Diepeveen, Wooding, and Grant 2013, 2); Excellence in Research for Australia (ERA); Canadian Academy of Health Science Payback Framework; National Institutes of Health Research (NIHR) Dashboard; Productive Interactions; Evaluation Agency for Research and Higher Education (AERES) framework; Congressionally Directed Medical Research Programme (CDMRP); Performance Based Research Fund (PBRF); and Standard Evaluation Protocol (SEP).

Many of the traditional frameworks we explored focused largely on measuring impact through a process of external peer review and frequently emphasised the use of quantifiable bibliometric indicators. A recent trend is toward frameworks incorporating both bibliometric and non-bibliometric indicators. The Payback framework is an example of the latter, and is used extensively in health research internationally (Buxton and Hanney 1996; Buxton, Hanney, and Jones 2004; Donovan and Hanney 2011). The Payback framework incorporates both academic outputs and wider societal benefits to: assess outcomes (knowledge production such as journal articles, etc.), target future research, build capacity, inform policies and project development, create health and health sector benefits such as better health and health equity, and enhance broader economic benefits (Buxton and Hanney 1996).

Frameworks incorporating both bibliometric and non-bibliometric indicators often fall under the cluster of Social Impact Assessment Methods (SIAMPI) and have ‘a central theme of capturing “productive interactions” between researchers and stakeholders’ (Penfield, Baker, Scoble, and Wykes 2014, 24). The focus is on understanding how research interactions lead to social impact. The Australian Research Quality Framework (RQF), for example, uses a case study approach to demonstrate and justify public expenditure on research and asks researchers to provide ‘evidence of economic, societal, environmental, and cultural impact of their research’ (Penfield et al. 2014, 24). Though the RQF was never implemented, it was adapted for the United Kingdom Research Excellence Framework (REF), which continued with the case study approach, adding significance, depth, spread, and reach as further non-bibliometric criteria for assessment. Here, depth refers to ‘the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users’ (Penfield et al. 2014, 24).

In general, as Guthrie et al. (2013) offered, trade-offs are associated with any framework construction decisions in evaluation of research. Trade-offs are summarised as follows:

  • Quantitative approaches (those which produce numerical outputs) tend to produce longitudinal data, can be applied relative to fixed baselines reducing the need for judgement and interpretation, and are relatively transparent, but they have a high initial burden (significant work may be required at the outset to develop and implement the approach);
  • Formative approaches (which focus on learning and improvement rather than assessing the current status) tend to be comprehensive, evaluating across a range of areas, and flexible, but they do not produce comparisons between institutions;
  • Approaches which have a high central burden (requiring significant work on the part of the body organising the evaluation process) tend not to be suitable for frequent use;
  • Approaches which have been more fully implemented tend to have a high level of central ownership (by either the body organising the evaluation, or some other body providing oversight of the process); and
  • Frameworks that place a high burden on participants require those participants to have a high level of expertise (or should provide capacity building and training to achieve this).

(8-9)

Overall, individual frameworks have specific strengths and limitations, and each should be weighed up in choosing a framework. Penfield et al. (2014) suggested the following limitations, which we considered:

  • Time lag: outcomes and impacts can take years to materialise, and it may be very difficult, if not impossible, to trace them back to the project/research;
  • Developmental nature of the impact: impact changes and develops over time and can be temporary or long lasting;
  • Attribution: over time, it becomes more and more difficult to tie outcomes, and especially impacts, directly back to the research and the research findings;
  • Complementary assets: over time, as various factors and inputs influence the outcomes, it becomes difficult to attribute the outcome back to the original research and findings;
  • Knowledge creep: typically, new data, discoveries, and information become accepted and absorbed over a long period of time; and
  • Gathering evidence: in many cases, the requirement to collate evidence retrospectively may be difficult, as measures, baselines, and the evidence itself have not been collected and may not be available.

(25-27)

Despite the less inductive orientation implied by employing a framework, we decided that we needed an evaluation framework to guide the complex ESAR, and we embarked upon an exploration of the varied frameworks we have described in this section. Initially, we sought to find a framework which was a good fit for our planned study, or to find aspects of multiple frameworks that might help guide us.

We considered that both bibliometric and non-bibliometric indicators were relevant in our research evaluation. We also decided to adopt considerations from Guthrie et al. (2013, 19) including that the framework might: promote learning, development, and quality improvement, that is, it could have analysis and accountability purposes; be an iterative process; draw out wider social, economic, and policy impacts; minimise administrative burden; be transparent in its rules and processes; include team-based research; apply to collaborative (including cross-disciplinary, cross-institution) research; support capacity building and development of next generation researchers; and be helpful if it gathered longitudinal data to support quality improvement.

Foundational topic 2 - investigating the AR context

In the reconnaissance phase, we consider that a probing literature review could also cover exploration of the context in which the research evaluation will be conducted. Further, such review could include examination of the extent to which the context has previously been evaluated.

In the ESAR, our literature review of the AR context confirmed our knowledge that AR is frequently seen as a popular developmental research methodology with combined data collection (research) and change (action) elements (Piggot-Irvine et al. 2011). Earlier in the ‘Overview’ section of this chapter, we summarised many of the other principles of AR, including its: pragmatic, responsive, iterative, and flexibly applied action orientation with a core element of systematic research and inquiry processes; ability to transcend disciplinary, institutional, and international boundaries; and focus on research which is inclusive of boundary partners in order to democratically enhance the capacity of groups and organisations to sustain change, develop resilience, and thrive. We have also reported on the degree of unpredictability and contextual and cultural specificity of AR, and such characteristics have a consequence of non-generalisable findings (Coghlan and Brannick 2014; Stringer 2014). AR is also variably defined (Cardno 2003; Kemmis 2010; Meyer 2000; Piggot-Irvine et al. 2011; Wicks and Reason 2009) with subsequent implementation that is also highly variable.

The principles of AR summarised in our probing literature review of the evaluation context led us to conclude that the complexity of the large-scale ESAR we were planning called for an overarching framework which differed from any of the traditional research evaluation frameworks we examined. We had dual overarching needs because we wanted to be responsive to increasing demands to track, demonstrate, and measure the impacts and outcomes of research, but we also wanted flexibility in our framework. The framework needed to be pragmatic and flexible enough to deal with the context and practice diversity of the 100-plus AR projects to be evaluated, but also needed to match the responsive, collaborative, democratic, and dialogical underpinnings and values associated with AR itself if we were to gain ownership, respect, and credibility from action researchers. We believed that any evaluative framework which was low in collaborative, participative, democratic, and transformative intent could be rejected by those involved in the AR projects we wanted to evaluate, and most importantly that it could likely lead to low ownership of findings by those involved in the projects.

Establishing a rationale for the ESAR framework was relatively easy because our literature review revealed a gap in terms of evaluation of AR. The touted high ideals of AR shown in the literature review, alongside its variable interpretation and implementation, almost set the approach up for substantial critique, with it referred to as ‘muddled science’ (Winter 1987, 2) and ‘sloppy research’ (Dick 2004, 16), with reporting as ‘little more than picturesque journeys of self-indulgent descriptions’ (Brooker and Macpherson 1999, 210). Koshy, Koshy, and Waterman (2011) added that change associated with AR was hard to measure and there was often poor theory development. As a team, we concluded that such critique prevails because little evaluative data exists to demonstrate whether the ideals espoused for AR are widely realised. The paucity of evaluative data was strongly expressed by Piggot-Irvine and Bartlett (2008), who stated that there was a great deal of literature discussing or identifying what constitutes good AR, but very little evaluation of AR outcomes or impact. A strong rationale for the ESAR was able to be articulated in our framework, and our next task in framework construction was to establish clear direction for implementation via purpose, objectives, and research questions.

Phase 3 - Implementation

The implementation phase of the EvAR is the most intensively covered in this chapter. It is during this phase that purposes and benefits (justification) for the choice of a specific research evaluation can be outlined. Further, at this phase, the constituting elements describing how the research evaluation will be conducted are detailed in the framework (as summarised in Figure 4.2). This section of the chapter covers description of the constituting elements of the EvAR, alongside illustration with application to the ESAR.

Purpose, objectives, and research questions

Guthrie et al. (2013) noted that the ‘design of a framework should depend on the purpose of the evaluation’ (ix). These authors described the purposes of research evaluation as (with our interpretation):

  • Advocacy (demonstrating benefits, enhancing understanding of the research process among policy-makers and the public, and making a case for change/improvement);
  • Accountability (showing efficiency of use of resources within research);
  • Analysis and learning (demonstrating how and why research is effective, and how it can be better supported); and/or
  • Allocation (determining where and how best to allocate resources in the future).

Such purposes, in turn, are linked to whether an evaluation intent is formative (ongoing and learning, developmental) or summative (endpoint and accountability oriented) as summarised in Piggot-Irvine and Bartlett (2008). Guthrie et al. (2013) stressed that purposes have to be clear from the outset because many other framework decisions are linked to those purposes.

Figure 4.2 Constituting elements in the EvAR framework

Further, Aberatne (2010), and more specifically Durlak and DuPre (2008), have made a solid case for the need to understand purposes and process implementation in evaluating outcomes. For example, Durlak and DuPre (2008), in their own research, asked: ‘1) Does implementation affect outcomes?; and 2) What factors affect implementation?’ (328).

In the ESAR project, Guthrie et al.’s (2013) primary purposes of advocacy, and analysis and learning, predominated. Advocacy was strong because we wanted to demonstrate benefits, effective processes, and improvement impacts of AR. Analysis and learning also dominated as purposes due to our intent to showcase how and why AR led to different types of impacts. Both purposes have formative intent, but because we wanted to evaluate the efficiency of resource use within the AR projects we studied, there was also a secondary accountability (summative) purpose.

Purpose decisions led to clarification of the overall objective for the ESAR as: to explore, via an examination of process and outcomes of approximately 100 AR projects implemented in varied contexts globally, whether and how the often-touted espousals of individual, community, organisational, and/or societal impact of AR are actually realised, and to advance knowledge and understanding of the elements of AR enhancing outputs, outcomes, and impact.

Further focus in the ESAR was articulated through the clarification of the key research question:

In what ways can AR be validated as a contributor to meaningful individual, community, organisational, and societal change?

The overall objective and question show that the ESAR had a focus on both process and outcomes. Findings were also intended to provide clarity about validity claims for AR as an approach to change. Additionally, more general outcomes associated with advancing knowledge were hoped for from the ESAR. These outcomes included: building on current research from Piggot-Irvine and Bartlett (2008) on evaluation of AR, establishing evaluative indicators for AR, and creating a publicly accessible AR repository as a directory for AR project reports and research findings.

Establishing benefits

Intended benefits of any study should be strongly articulated in a research evaluation framework (Guthrie et al. 2013; Hemlin and Rasmussen 2006; de Jong, van Arensbergen, Daemen, van der Meulen, and van den Besselaar 2011; Klein 2006, 2008; Spaapen, Dijstelbloem, and Wamelink 2007; Spaapen and Van Drooge 2011). Such benefits can be articulated as justification for a research evaluation study.

Key benefits of the ESAR included that: it was conducted in multi-contextual, non-academic communities (e.g., health, sport, development aid, education, agriculture, environmental, management, and leadership, to name but a few); and findings of the ESAR study were to be of interest to a variety of disciplines (sometimes transdisciplinary), academic fields, and research areas such as philosophy, sociology, science, arts, etc. A further benefit was reported as enhanced AR credibility. We believed that the current perception of limited impact of AR was substantially due to the minimal examination of outputs, outcomes, and impact. In our framework, we recorded that the ESAR findings could not only address this limitation but also add recommendations on processes that enhance effective outcomes for action researchers. If outcomes, outputs, and impact were validated, there could be reduction of criticism of the low credibility of AR. We stated in our benefits section of the EvAR framework that, at the least, recommendations for improved AR process/practice could be established to demonstrate how AR might be designed to genuinely create thinking and behaviour leading to improvements in economic, social, cultural, and intellectual well-being.

Indicator establishment

Guthrie et al. (2013) emphasised that a framework ‘requires careful selection of units of aggregation for the collection, analysis and reporting of data’ (x).

Units of aggregation are most often referred to as indicators. Indicators can be discussed from varying perspectives, including scope (methods, dimensions of indicators) and establishment (extent of collaboration in development, etc.).

In terms of scope, Penfield et al. (2014) offered specifically that in data collection methods, there should be a focus on metrics, narratives, surveys, and citations (within and outside of academia) as indicators for evaluating the success of research. A broader, dimensions-oriented emphasis proposed by Wickson and Carew (2014) included that indicators should focus on whether a project/research is/was: socially relevant and solutions oriented, sustainable and future scanning, diverse and deliberative, reflexive and responsive, rigorous and robust, creative and elegant, and honest and accountable (261). Jahn and Keil (2015) noted similar dimensions focusing on the quality of the: research problem (considering different spatial, temporal, and social scales), research process (level of integration and epistemic, social-organisation, and communicative levels), and research results (maintaining the viability of society, and the attention to current and future issues of justice) (198).

There has, however, been growing acknowledgement that traditional bibliometrics (including citations, number of patents, licences, spin-off firms, revenue generated, etc.) are insufficient for measuring the impact of research (Association of Universities and Colleges of Canada [AUCC] 2008; Butler 2008; Donovan 2005, 2009; Duryea, Hochman, and Parfitt 2007; Rasmussen 2008). Donovan (2005) instead suggested including non-bibliometric indicators. Those with relevance to the EvAR include:

honours and awards, election to and roles within learned societies, journal editing, editorial board membership, editing special issues of journals, special journal editions dedicated to one’s research, invited lectures at conferences (particularly keynote addresses), organising conferences or workshops, activities in providing academic advice (e.g., assessing research applications, manuscript refereeing, supervision and examination of PhD theses), contributions to dissemination/popularization of research in the media, policy preparation research ... visiting professorships or fellowships and conferences dedicated to specific research.


The approach adopted for establishing indicators is possibly as important as, if not more important than, scope in a framework. Further, Defila and Di Giulio (1999); Huutoniemi (2010); Huutoniemi and Tapio (2014); and Spaapen and van Drooge (2011) all emphasised the importance of joint development of indicators by the researchers and stakeholders involved.

In the ESAR, we were mindful of AR as a complex system and that in such systems, outcomes and end states are not known with any degree of certainty, only probability. We drew upon the work of multiple authors who had established indicators with any relevance to AR. Included were ideas from: Bryman and Bell’s (2015) indicators for authenticity; Meyer’s (2000) consideration of change and knowledge; Piggot-Irvine’s (2008) indicators for meta-evaluating action research; Earl, Carden, and Smutylo’s (2001) definition of outcomes from change; and Wadsworth’s (2011) indicators for success. We favoured indicators that not only evaluated the quality of outcomes of the AR project but also the extent to which the project made a difference, and a difference that is ongoing, the sort of sustainability referred to by Wickson and Carew (2014) and Jahn and Keil (2015). We were particularly conscious of the fact that a very wide range of impacts could be associated with projects in the ESAR and our categorisation of indicators would likely be complex and extensive. Care was also taken to ensure the indicators could be easily analysed, given our mixed method design.

For indicator organisation, we developed sub-sections of ‘Precursors/Preconditions’, ‘Process and Activities’, and ‘Post Action Research Outputs, Outcomes and Impacts’. The organisation of indicators formed part of a conceptual explanatory model which was based on a logic model (Kellogg Foundation 2001) showing a research-to-impact progression (for a comprehensive outline, see Piggot-Irvine, Rowe, and Ferkins 2015).
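The three sub-sections can be sketched as a simple data structure. The indicator names below are hypothetical placeholders standing in for the ESAR’s much more extensive set; only the three-part organisation mirrors the sub-sections described above.

```python
# Hypothetical placeholder indicators; only the three-part organisation
# mirrors the EvAR sub-sections described in the text.
INDICATORS = {
    "precursors_preconditions": [
        "clear issue, concern, or need identified",
        "stakeholder commitment secured",
    ],
    "process_and_activities": [
        "iterative action-reflection cycles followed",
        "collaborative decision-making evident",
    ],
    "post_ar_outputs_outcomes_impacts": [
        "findings disseminated beyond the team",
        "sustained change in practice",
    ],
}

# The logic model reads left to right: preconditions feed process,
# which feeds outputs, outcomes, and impacts.
print(" -> ".join(INDICATORS))
```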

The early indicator establishment and confirmation task developed for the ESAR was probably the most intensive and time consuming of all activities in our framework construction. We upheld the inclusive orientation through our commitment to, and fierce enactment of, the collaborative and dialogical intent of AR. We spent months jointly creating the indicators and then over a year extensively seeking feedback on these from the wider AR community. We believe that the time spent was invaluable in creating clarity for development of data collection tools, analysis, and subsequent reporting.

Participants and stakeholder engagement

Defining who will respond and participate in a research evaluation is the next constitutional element of a framework. In the EvAR, we consider that determination of selection criteria is an important step prior to participant selection. Criteria establishment is usually followed by sampling and choice of participants which, as in all research, is strongly linked to method selection, with the latter often preceding the former. In the EvAR framework, because collaboration and ownership are valued highly (Cardno 2003; Greenwood and Levin 2006; Reason and Bradbury 2001; Stringer 2014), recording the approach to participant selection takes on even greater significance.

We articulated in the ESAR that we were evaluating multiple projects at a meta level, so the definition of participants also included projects. We established the selection criteria as projects having: 1) clear articulation as AR (including participatory AR); 2) a change emphasis arising out of an issue, concern, or need; 3) articulation of espousal of improvement or capacity building which may have been, in turn, linked to goals of personal, team, organisation, or society improvement; 4) the usual characteristics of collaboration and iterative phases of action and reflection; 5) outcomes of publication or reporting dissemination post-2008; and 6) availability of a project lead and other team members and stakeholders (i.e., all boundary partners).

Over 100 projects from several countries and varied contexts met the criteria. No sampling was required other than meeting the criteria. Similarly, because all project team members were included in a large-scale online survey in the ESAR, no sampling within projects was involved. The case studies examined in the ESAR, however, were purposefully (Adams, Khan, Raeside, and White 2007) selected because we drew upon projects that were able to be accessed with relative ease and in reasonably close proximity to the research team.
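Screening of this kind reduces to a conjunction of boolean criteria. As a minimal sketch, with field names of our own that paraphrase the six criteria (not the project’s actual data schema):

```python
from dataclasses import dataclass

# Field names paraphrase the six ESAR selection criteria; they are
# illustrative, not the study's actual data schema.
@dataclass
class Project:
    articulated_as_ar: bool          # 1) clearly articulated as AR
    change_emphasis: bool            # 2) change arising from an issue or need
    espouses_improvement: bool       # 3) improvement or capacity building
    collaborative_iterative: bool    # 4) collaboration, action-reflection cycles
    disseminated_post_2008: bool     # 5) publication/reporting post-2008
    team_available: bool             # 6) lead, team, and boundary partners available

def meets_criteria(p: Project) -> bool:
    """A project qualifies only if every criterion is met."""
    return all(vars(p).values())

candidate = Project(True, True, True, True, True, False)
print(meets_criteria(candidate))  # False: team unavailable
```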

Clarity in a framework is also needed about whether (and which) boundary partners or stakeholders will be involved. The importance of including stakeholders in research is reinforced by Bergmann et al. (2005); Spaapen et al. (2007); Mitchell and Willetts (2009); Carew and Wickson (2010); Smudde and Courtright (2011); Tremblay and Hall (2014); and Dick et al. (2015). As Tremblay and Hall (2014) suggested, impactful knowledge creation and mobilisation occurs when communities and stakeholders are authentically engaged. Wickson and Carew (2014) also emphasised the importance of stakeholder inclusion in their proposed four central characteristics of responsible research and innovation (RRI), the second of which is ‘a commitment to actively engaging a range of stakeholders for the purpose of substantively better decision-making and mutual learning’ (255). Further, Ackerman and Anderson (2010) noted the importance of ‘identifying who the stakeholders really are in the specific situation ... exploring the impact of stakeholders’ dynamics ... and developing stakeholder management strategies’ (180) as being critical to the success of an endeavour. As Phillipson, Lowe, Proctor, and Ruto (2012) suggested, ‘effective research uptake in policy and practice may be built upon a foundation of active knowledge exchange and stakeholder engagement during the process of knowledge production itself’ (57).

Once a decision to engage participants is made, clarifying how they will be involved is also critical. There are many research studies which claim to include or involve communities, and indeed start out with the premise that all stakeholders are equal. Somewhere along the way, however, these studies often deteriorate to a ‘researcher knows best’ model, where participants, subjects, and stakeholders are secondary (Zornes 2012). As we reported in Zornes, Ferkins, and Piggot-Irvine (2015, 6),

in AR, building relationships is a defining feature and their importance is paramount. The relationships developed, in turn, spawn a variety of networks, among teams, with stakeholders, and with the larger community.... Stakeholders, often referred to as boundary partners, can, and should, include a wide cross-section of individuals and partnerships.


Piggot-Irvine (2012b) identified preconditions necessary for how to engage participants when she stated that collaboration needed to include shared goals and language alongside a desire to participate and openness. Such collaboration creates an outcome of trust, and trust, in turn, is ‘the lubricant that makes cooperation possible between these actors and higher levels of trust are believed to lead to increasing network effectiveness’ (Popp, Milward, Mackean, Casebeer, and Lindstrom 2014, 10).

In the ESAR project, participants were involved throughout the research process whenever possible. The emphasis was on creating an authentic collaborative relationship with participants. A pre-requisite for creating such collaboration with others was to establish our own research team approach to collaboration and engagement at the initial stages of the study and to model this throughout the study. Further, to enhance engagement with participants, we focused on methods for data collection which aimed to enhance dialogue, including focus groups and Goal Attainment Scaling (GAS). We also committed to quickly sharing findings with participants in order to clarify our interpretations and enhance ownership of recommendations for change. Our goal was to ensure that all boundary partners had ownership of any improvements implied in the findings. Ownership, in our eyes, could only be assured if those who led the organisations and communities impacted by the AR projects we studied were involved as early as possible in confirmation of indicators, as well as discussion of findings and recommendations (the latter point was also particularly emphasised by Brown and Isaacs [2005]).

Methodology and methods

All frameworks usually include a description of the overarching methodology and methods employed. The importance of transparency and systematicity in the description has been noted by Meyrick (2006). As Meyrick (2006) pointed out, a framework should ‘communicate enough knowledge about the process to enable readers to make a value judgement about rigour and quality’ (804). Knowledge includes ensuring there are clear details regarding what data is to be collected as well as how the collection will occur.

In the EvAR framework, we have encouraged extension of the usual AR multimethod approach for enhancing data credibility (Yin 2003) to a mixed method methodology (Creswell 2009; Ivankova, Creswell, and Stick 2006; Ivankova 2015) falling under triangulation and convergence typologies (Creswell and Plano Clark 2017). The indicators are used to inform construction of the methods employed for data collection within the mixed method methodology. We accepted Creswell’s (2009) interpretation of mixed methods where the Qn component employs statistical formulae ‘so that numbered data can be analysed using statistical procedures’ (4). We add that Qn combined with Ql offers a more holistic understanding of perceptions. The rationale for using mixed methods methodology in the EvAR has been based on the assumptions that: Qn or Ql alone is insufficient, Qn and Ql complement each other, and such a mix allows for more robust analysis (Youngs and Piggot-Irvine 2014). In the EvAR framework, the methodology with wide choice of method selection aligned with the intent of AR for flexibility and responsiveness.

In the ESAR, we considered that a mixed method methodology would meet our need to add insight and understanding while recognising the influence of context and perception alongside identifying strength of relationships between indicators and the ability to generalise findings. An electronic survey was first piloted by five experienced action researchers who were not participants in the study. Findings from the pilot survey helped inform tool development for seven pilot case studies (which later became the seven full case studies) employing documentary analysis, further surveys, focus groups, semi-structured interviews, and GAS as data sources. Almost all of these methods are well known, with the exception of GAS. Molyneux et al. (2012); Latham and Locke (2006); and Roach and Elliott (2005) provided detail on GAS, but briefly, it is a tool for ranking and quantifying indicators. Further papers on the methodology and methods are forthcoming.
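Although the ESAR’s scoring details appear in the forthcoming methodology papers, the widely used Kiresuk-Sherman T-score gives a flavour of how GAS quantifies goal attainment. A minimal sketch, assuming the conventional -2 to +2 rating scale and the customary default inter-goal correlation of 0.3:

```python
from math import sqrt

def gas_t_score(scores, weights=None, rho=0.3):
    """Kiresuk-Sherman T-score for Goal Attainment Scaling.

    scores  -- attainment ratings on the conventional -2..+2 scale
               (0 = the expected level of attainment)
    weights -- relative importance of each goal (equal by default)
    rho     -- assumed average inter-goal correlation (0.3 by convention)
    """
    if weights is None:
        weights = [1] * len(scores)
    numerator = 10 * sum(w * x for w, x in zip(weights, scores))
    denominator = sqrt((1 - rho) * sum(w * w for w in weights)
                       + rho * sum(weights) ** 2)
    return 50 + numerator / denominator

# Every goal met exactly at the expected level yields the neutral score of 50.
print(gas_t_score([0, 0, 0]))  # 50.0
```

Scores above 50 indicate attainment beyond expectation; scores below 50 indicate under-attainment.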

Data analysis

Following (and connected to) indicator, methodology, and method decisions in a research evaluation framework is the determination of data analysis techniques. Meyrick (2006) noted the importance of this element at the end of components of study conduct. However, as recommended by Baxter and Jack (2008), in the EvAR, we favoured data collection and analysis occurring concurrently where feasible.

For the ESAR, we recorded that we would closely link our indicators to analysis and ensure that both Qn statistical analysis and Ql thematic analysis could be carried out, given our mixed method design. For Qn analysis, we converted many of the indicators into questions for the survey which had a five-point Likert scale indicating levels of agreement. Qn survey data analysis was then conducted using the online programme Fluid Survey with the results imported into SPSS 13. We employed scale analyses of both discrete (logistic regression) and continuous data (multiple regression) to show associational/causal analyses. The latter was designed to enable derivation of relationships between preconditions, process, and post-project outcomes and impacts. From here, degree of satisfaction or completeness of reaching project espousals was used to identify the strength of relationship, or a predictive path for future studies. In this way, we could propose that if condition X was in place, the impact on project success was more likely to be Y. We also acknowledged that while causal analysis is not a fundamental part of AR, we considered that the Qn analysis based on prediction models of outcomes was an important component of the ESAR.
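As a toy illustration of this kind of associational analysis (hypothetical data, not ESAR results; the ESAR itself used SPSS), an ordinary least-squares fit can relate a precondition rating to an outcome rating:

```python
# Hypothetical five-project data: a precondition rating (e.g., strength of
# collaboration) and an outcome rating, both on 1-5 Likert scales.
precondition = [1, 2, 3, 4, 5]
outcome = [2, 2, 3, 4, 5]

def simple_regression(x, y):
    """Ordinary least-squares fit of y = intercept + slope * x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    slope = sxy / sxx
    return mean_y - slope * mean_x, slope

intercept, slope = simple_regression(precondition, outcome)
# A positive slope is the sense in which 'if condition X was in place,
# the impact on project success was more likely to be Y'.
print(round(intercept, 3), round(slope, 3))
```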

We were cognisant that the specific sustainable change outcomes and impacts could vary widely from context to context in AR projects, and therefore decided the collection of Ql data would be more appropriate for these components. Varied analysis tools were employed for ESAR Ql data including descriptive and thematic analysis (using predetermined coding criteria and NVivo software) of: the existing documentation on case studies, open-ended survey responses, interview transcripts, and focus group responses. We looked for patterns, linkages, explanations, and synthesis of ideas in this analysis. Our intent was akin to that of Srivastava and Hopwood (2009), to ‘provide the best explanation of what’s going on’ (77).
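NVivo handled the actual coding; purely to illustrate deductive coding against predetermined criteria, a keyword-based pass over responses (with hypothetical themes, keywords, and responses) might look like:

```python
from collections import Counter

# Hypothetical coding frame; NVivo performed the real coding, so this
# only illustrates the idea of predetermined coding criteria.
CODING_FRAME = {
    "collaboration": ["team", "together", "shared"],
    "sustainability": ["ongoing", "lasting", "continued"],
}

def code_responses(responses, frame=CODING_FRAME):
    """Count how many responses touch each predetermined theme."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for theme, keywords in frame.items():
            if any(keyword in lowered for keyword in keywords):
                counts[theme] += 1
    return counts

responses = [
    "The team worked together on shared goals.",
    "Improvements were ongoing after the project ended.",
]
print(dict(code_responses(responses)))
```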

Overall in our analysis, the convergence type of mixed method methodology (Creswell and Plano Clark 2017) enabled us to compare the findings from both Qn and Ql analysis in order to understand an overall case. Member checking was intended with all Qn and Ql data.

Phase 4 - Review of achievement

In the review of achievement phase in the EvAR, we noted the importance, at a meta level, of gathering data on the effectiveness of the evaluation conducted. Such a phase is infrequently mentioned in other frameworks, yet it is widely noted as a vital component of AR. In the EvAR, we have encouraged review in three areas that are loosely derived from Coghlan and Brannick’s (2014) thinking: premise (reflection on underlying assumptions and perspectives, whether unstated or even unconscious), content (reflection on what was constructed or planned), and process (reflection on how it was implemented and evaluated). We believe that reflexivity has to be at the core of this meta-level reflective review, with evaluators consciously attempting to question their own actions and thoughts, reflecting upon what, why, and how they have been doing things (as supported by Tolich and Davidson 2011).

In the ESAR, we continuously recorded our own perspectives on the effectiveness of our implementation of the EvAR framework elements. Such meta-reflection was critical to us as action researchers. Just as one example, we appointed one ESAR research team member to coordinate the recording of reflections among the group at the end of every team meeting over a period of three years, and these reflections are currently being summarised as part of our reporting phase. Further, because authentic collaboration with stakeholders was critical for the study, in all interactions we consciously attempted to question our own actions and thoughts, reflecting upon what, why, and how we were doing things by seeking feedback continuously from the AR community.

Phase 5 - Report achievements, recommendations, and knowledge mobilisation

Generally, the approach to communicating findings is planned early in research and elaborated in the research evaluation framework. The communication plan in the framework needs to: consider where, how, and from whom attention is sought; be attention grabbing by clarifying what is distinctive, or what problem is being explored and why it is important; clarify the context of the message; be open about the beliefs and purpose of the research and the beliefs of those receiving the message; explain the effort that has gone into the research; clarify how dialogue can be created around the topic; employ multiple channels for delivering the message; and note whether the results can be shown to be trustworthy (Minto 2009). Both logical and rational messages, as well as emotional ones, should be considered. Increasingly, in social science research, opportunities to ‘tell the story’ (AUCC 2008) are outlined in a knowledge mobilisation plan.

If collaborative activity is featured in a framework, the following caution about communication from Boyd, Buizer, Schibeci, and Baudains (2015) might be considered:

in spite of ubiquitous participation-rhetoric, in the ways that researchers perform communication about their projects, a common normalized expression like ‘knowledge transfer’ implies a role for the researcher as the ‘holder’ of knowledge, and a role for publics as receivers of knowledge. Also, in this view of knowledge as something that can be transferred, knowledge is sitting out there, waiting to be discovered and distributed, rather than being relational, and evolving in interaction between different actors.


In the ESAR, the approach to early and continuous, rather than just endpoint, reporting and knowledge mobilisation was articulated. We noted that the study itself reflected knowledge mobilisation at two levels. First, the collaboration among researchers was designed to ensure enhanced accessibility, flow, and exchange of knowledge. Second, the dialogical approach to engaging respondent/participant input was deliberate in terms of practitioner-researcher-organisation-community flow of information and enhancement of ownership of improvement rather than us ‘holding’ the knowledge. As a result of the collaboration and the dialogical approach to engagement, we have already been able to report that varied levels of networks have developed (Zornes et al. 2015), including those: among the ESAR research team, within each of the individual projects studied, between and among the leads and participants of projects in the study, and in the larger AR community outside of the team and project participants.

We also articulated how results from the ESAR on validity claims for AR as an approach to change could be reported out to a wide audience via journals, books, conferences, and forums such as press releases and non-academic media (newsletters, podcasts, listservs, webinars, and blogs). Further, we noted the specific annual research dissemination workshops we could offer to our respective universities. Knowledge mobilisation was also planned to occur through incorporation of process and findings in curriculum material for postgraduate courses taught, and theses supervised, by members of the research team. We have already met a considerable number of our planned targets for mobilisation, with multiple journal articles, conference presentations, and workshops presented.

Phase 6 - Continued action for improvement

We found no mention in traditional frameworks of evaluators encouraging enhanced or ongoing actions for improvement associated with the activity of the evaluators themselves, the process steps employed in the framework, or the context evaluated. Like AR, at almost every stage of the EvAR, there are implications and expectations of continuing action for improvement. For example, the preparatory phase activities associated with establishing values and protocols have been designed as a guide for the evaluation team to continuously improve their own practice. In the reconnaissance phase, the probing literature review has not been articulated as a one-off early evaluative activity; rather, the literature could be continuously updated throughout the research evaluation. In the implementation phase, the flexibility underpinning AR has been deliberately inserted as part of the framework to create an imperative for ongoing, iterative piloting, checking, reviewing, and updating of all the constituting elements of the framework. Reflection and reflexivity have been stipulated as central features of the EvAR, and the continuation arrow shown in Figure 4.1 indicates that ongoing action is likely to result.

All of the noted ongoing actions for improvement occurred in the ESAR project, and we have discussed many of those actions in previous sections of this chapter. In particular, flexibility has been strongly present in our emphasis on: communicating with, and seeking feedback from, stakeholders in order to improve our approaches; continually piloting and updating methods; employment of reflective and reflexive processes for checking progress and creating new thinking; openly mobilising and sharing our findings throughout the study; and actively encouraging dialogue about findings. This chapter is an example of the latter.


Conclusion

In this chapter, we have presented EvAR as a research evaluation framework that has the flexibility to meet the challenge of the complex needs of evaluating a large applied research study. The framework has been designed to meet a dual overarching need: to engage stakeholders in a responsive, improvement-oriented evaluation process and to be responsive to increasing accountability demands to track, demonstrate, and measure the impacts and outcomes of research (Conteh 2013; Giroux and Giroux 2009; Popp et al. 2014).

We have argued that a flexible model such as EvAR could ensure the responsivity required of the varied boundary partners in a context such as the ESAR. Further, we have suggested that a framework underpinned by values and principles of AR (Greenwood and Levin 2006; Reason and Bradbury 2001; Stringer 2014) could align well with a context of research evaluation where engagement and ownership of findings is important. The authentic collaboration approach (Piggot-Irvine 2012b) based on non-defensive (Argyris 2003) dialogical strategies is central to such engagement and genuinely open interactions. We hope that we have demonstrated that the collaborative approach must also be deeply embedded within the practice of the research evaluators themselves, and we have attempted to illustrate such practice within the ESAR.

In keeping with Phase 6 of the EvAR, we have written this chapter not only to share our thinking but, most importantly, to invite response. We welcome your input.


References

Aberatne, A. M. 2010. “Learning History of COMPAS Sri Lanka, Using the Most Significant Change Stories Tool, PSO & DPRN.” Retrieved from Learning%20history%20ETC%20COMPAS%202010.pdf

Ackerman, A. L., and D. Anderson. 2010. Awake at the Wheel: Moving Beyond Change Management to Conscious Change Leadership. Durango, CO. Retrieved from resource-center/pdf/SR_AwakeAtTheWheel_v3_101006.pdf

Adams, J., H. Khan, R. Raeside, and D. White. 2007. Research Methods for Graduate Business and Social Science Students. London: Sage Publications.

Argyris, C. 2003. “A Life Full of Learning.” Organization Studies 24 (7): 1178-1192.

Association of Universities and Colleges of Canada. 2008. Momentum: The 2008 Report on University Research and Knowledge Mobilization.

Axelrod, P. 2002. Values in Conflict: The University, the Marketplace, and the Trials of Liberal Education. Montreal: McGill-Queen’s Press-MQUP.

Baxter, P., and S. Jack. 2008. “Qualitative Case Study Methodology: Study Design and Implementation for Novice Researchers.” The Qualitative Report 13 (4): 544-559.

Bergmann, M., B. Brohmann, E. Hoffmann, M. C. Loibl, R. Rehaag, E. Schramm, and J. Voß. 2005. Quality Criteria of Transdisciplinary Research: A Guide for the Formative Evaluation of Research Projects. ISOE-Studientexte 13.

Boyd, D., M. Buizer, R. Schibeci, and C. Baudains. 2015. “Prompting Transdisciplinary Research: Promising Futures for Using the Performance Metaphor in Research.” Futures 65: 175-184.

Brooker, R., and I. Macpherson. 1999. “Communicating the Processes and Outcomes of Practitioner Research: An Opportunity for Self-indulgence or a Serious Professional Responsibility?” Educational Action Research 7 (2): 207-221.

Brown, J., and D. Isaacs. 2005. The World Cafe: Shaping Our Futures Through Conversations That Matter. San Francisco, CA: Berrett-Koehler Publishers.

Bryman, A., and E. Bell. 2015. Business Research Methods. New York: Oxford University Press.

Butler, L. 2008. “Using a Balanced Approach to Bibliometrics: Quantitative Performance Measures in the Australian Research Quality Framework.” Ethics in Science and Environmental Politics 8 (1): 83-92.

Buxton, M., and S. Hanney. 1996. “How Can Payback from Health Services Research Be Assessed?”Journal of Health Services Research & Policy 1 (1): 35-43.

Buxton, M., S. Hanney, and T. Jones. 2004. “Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review.” Bulletin of the World Health Organization 82: 733-739.

Cardno, C. 2003. Action Research: A Developmental Approach. Wellington, New Zealand: Council for Educational Research.

Carew, A. L., and F. Wickson. 2010. “The TD Wheel: A Heuristic to Shape, Support and Evaluate Transdisciplinary Research.” Futures 42 (10): 1146-1155.

Carroll, W. K. 2003. “Undoing the End of History: Canada-Centred Reflections on the Challenge of Globalization.” In Global Shaping and Its Alternatives, edited by Y. Atasoy and W. K. Carroll, 33-55. Ontario: Garamond Press Ltd.

Coghlan, D., and T. Brannick. 2014. Doing Action Research in Your Own Organization. 4th ed. Thousand Oaks, CA: Sage Publications.

Conteh, C. 2013. “Strategic Inter-Organisational Cooperation in Complex Environments.” Public Management Review 15 (4): 501-521.

Creswell, J. W. 2009. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 3rd ed. Thousand Oaks, CA: Sage Publications.

Creswell, J. W., and V. L. Plano Clark. 2017. Designing and Conducting Mixed Methods Research. Thousand Oaks, CA: Sage Publications.

D’Arcy, P. 1994. “On Becoming an Action Researcher—Who Qualifies? Plus Ça Change.” Action Researcher 1: 1-3.

Davison, R., M. G. Martinsons, and N. Kock. 2004. “Principles of Canonical Action Research.” Information Systems Journal 14 (1): 65-86.

Defila, R., and A. Di Giulio. 1999. Evaluating Transdisciplinary Research, Newsletter of the Swiss Priority Programme Environment. Berne: Swiss National Science Foundation.

Dehler, G. E., and R. K. Edmonds. 2006. “Using Action Research to Connect Practice to Learning: A Course Project for Working Management Students.” Journal of Management Education 30 (5): 636-669. doi:10.1177/1052562905277302

De Jong, S. P. L., P. Van Arensbergen, F. Daemen, B. Van Der Meulen, and P. Van Den Besselaar. 2011. “Evaluation of Research in Context: An Approach and Two Cases.” Research Evaluation 20 (1): 61-72.

Dick, B. 2004. “Action Research Literature: Themes and Trends.” Action Research 2 (4): 425-444.

Dick, B., S. Sankaran, K. Shaw, J. Kelly, J. Soar, A. Davies, and A. Banbury. 2015. “Value Cocreation with Stakeholders Using Action Research as a Meta-Methodology in a Funded Research Project.” Project Management Journal 46 (2): 36-46.

Donovan, C. 2009. “Visible Gains from Research.” The Australian: 1-2.

Donovan, C. 2005. Setting the Scene: A Review of Current Australian and International Practice in Measuring the Quality and Impact of Publicly-Funded HASS Research (Interim Report). Canberra: Australian National University.

Donovan, C., and S. Hanney. 2011. “The ‘Payback Framework’ Explained.” Research Evaluation 20 (3): 181-183.

Durlak, J. A., and E. P. DuPre. 2008. “Implementation Matters: A Review of Research on the Influence of Implementation on Program Outcomes and the Factors Affecting Implementation.” American Journal of Community Psychology 41 (3-4): 327-350.

Duryea, M., M. Hochman, and A. Parfitt. 2007. “Measuring the Impact of Research.” Research Global 1: 8-9.

Earl, S., F. Carden, and T. Smutylo. 2001. Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa: International Development Research Center.

Giroux, H. A., and S. Searls Giroux. 2009. “Beyond Bailouts: On the Politics of Education After Neoliberalism.” Policy Futures in Education 7 (1): 1-4.

Giroux, H. A., and S. Searls Giroux. 2006a. Take Back Higher Education. New York, NY: Palgrave.

Giroux, H. A., and S. Searls Giroux. 2006b. “Challenging Neoliberalism’s New World Order: The Promise of Critical Pedagogy.” Cultural Studies ↔ Critical Methodologies 6 (1): 21-32.

Gosling, J., and H. Mintzberg. 2006. “Management Education as if Both Matter.” Management Learning 37 (4): 419-428.

Greenwood, D. J. 2014. “Pragmatic Action Research.” In Sage Encyclopedia of Action Research, edited by D. Coghlan and M. Brydon-Miller, 645-648. London: Sage Publications.

Greenwood, D. J., and M. Levin. 2006. Introduction to Action Research: Social Research for Social Change. Thousand Oaks, CA: Sage Publications.

Guthrie, S., W. Wamae, S. Diepeveen, S. Wooding, and J. Grant. 2013. Measuring Research: A Guide to Research Evaluation Frameworks and Tools. Santa Monica, CA: RAND Europe, MG-1217-AAMC.

Hemlin, S., and S. Barlebo Rasmussen. 2006. “The Shift in Academic Quality Control.” Science, Technology, & Human Values 31 (2): 173-198.

Huutoniemi, K. 2010. “Evaluating Interdisciplinary Research.” In The Oxford Handbook of Interdisciplinarity, 309-320.

Ivankova, N. V. 2015. Mixed Methods Applications in Action Research. Thousand Oaks, CA: Sage Publications.

Ivankova, N. V., J. W. Creswell, and S. L. Stick. 2006. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (1): 3-20. doi:10.1177/1525822X05282260

Jahn, T., and F. Keil. 2015. “An Actor-Specific Guideline for Quality Assurance in Transdisciplinary Research.” Futures 65: 195-208.

Kellogg Foundation. 2001. W.K. Kellogg Foundation Logic Model Development Guide. Retrieved February 10 from

Kemmis, S. 2010. "What Is To Be Done? The Place of Action Research." Educational Action Research 18 (4): 417-427.

Kemmis, S., and R. McTaggart. 1988. The Action Research Reader. Geelong, Victoria: Deakin University Press.

Klein, J. T. 2008. "Evaluation of Interdisciplinary and Transdisciplinary Research: A Literature Review." American Journal of Preventive Medicine 35 (2): S116-S123.

Klein, J. T. 2006. "Afterword: The Emergent Literature on Interdisciplinary and Transdisciplinary Research Evaluation." Research Evaluation 15 (1): 75-80.

Koshy, E., V. Koshy, and H. Waterman. 2011. Action Research in Healthcare. London: Sage Publications.

Kurtz, C. F., and D. J. Snowden. 2003. "The New Dynamics of Strategy: Sense-Making in a Complex and Complicated World." IBM Systems Journal 42 (3): 462-483.

Latham, G. P., and E. A. Locke. 2006. "Enhancing the Benefits and Overcoming the Pitfalls of Goal Setting." Organizational Dynamics 35 (4): 332-340.

McNiff, J. 1988. Action Research: Principles and Practice. Hampshire: Macmillan Education Ltd.

Metcalfe, M. 2008. “Pragmatic Inquiry.” Journal of the Operational Research Society 59 (8): 1091-1099.

Meyer, J. 2000. “Evaluating Action Research.” Age & Ageing 29 (2): 8-10.

Meyrick, J. 2006. "What Is Good Qualitative Research? A First Step Towards a Comprehensive Approach to Judging Rigour/Quality." Journal of Health Psychology 11 (5): 799-808.

Minto, B. 2009. The Pyramid Principle: Logic in Writing and Thinking. Harlow: Pearson Education.

Mitchell, C., and J. Willetts. 2009. Quality Criteria for Inter- and Trans-Disciplinary Doctoral Research Outcomes. Prepared for ALTC Fellowship: Zen and the Art of Transdisciplinary Postgraduate Studies. Sydney, Australia: Institute for Sustainable Futures, University of Technology.

Molyneux, C., N. Koo, E. Piggot-Irvine, A. Talmage, R. Travaglia, and M. Willis. 2012. "Doing It Together: Collaborative Research on Goal-Setting and Review in a Music Therapy Centre." New Zealand Journal of Music Therapy 10: 6-38.

Parker, J., and E. van Teijlingen. 2012. "The Research Excellence Framework (REF): Assessing the Impact of Social Work Research on Society." Practice 24 (1): 41-52.

Penfield, T., M. J. Baker, R. Scoble, and M. C. Wykes. 2014. "Assessment, Evaluations, and Definitions of Research Impact: A Review." Research Evaluation 23 (1): 21-32.

Phillipson, J., P. Lowe, A. Proctor, and E. Ruto. 2012. "Stakeholder Engagement and Knowledge Exchange in Environmental Research." Journal of Environmental Management 95 (1): 56-65.

Piggot-Irvine, E. 2012a. "Evaluation Using a Collaborative Action Research Approach." Presentation to British Columbia Canadian Evaluation Society AGM, Vancouver, October.

Piggot-Irvine, E. 2012b. "Creating Authentic Collaboration: A Central Feature of Effectiveness." In Action Research for Sustainable Development in a Turbulent World, edited by O. Zuber-Skerritt, 89-107. Bingley: Emerald.

Piggot-Irvine, E., and B. Bartlett. 2008. Evaluating Action Research. Wellington, NZ: NZCER Press.

Piggot-Irvine, E., D. Connelly, R. Curry, J. Hanna, M. Moodie, M. Palmer, D. Peri, and A. Thompson. 2011. "Building Leadership Capacity-Sustainable Leadership." Action Learning, Action Research Association (ALARA) Monograph Series 2: 1-40.

Piggot-Irvine, E., W. Rowe, and L. Ferkins. 2015. "Conceptualizing Indicator Domains for Evaluating Action Research." Educational Action Research 23 (4): 545-566. doi:10.1080/09650792.2015.1042984

Popp, J., H. B. Milward, G. MacKean, A. Casebeer, and R. Lindstrom. 2014. Inter-Organisational Networks: A Review of the Literature to Inform Practice. Washington, DC: IBM Center for the Business of Government.

Preskill, H., and R.T. Torres. 1999. Evaluative Inquiry for Learning in Organizations. Thousand Oaks, CA: Sage Publications.

Rasmussen, E. 2008. "Government Instruments to Support the Commercialization of University Research: Lessons from Canada." Technovation 28 (8): 506-517.

Reason, P., and H. Bradbury, eds. 2001. Handbook of Action Research: Participative Inquiry and Practice. Thousand Oaks, CA: Sage Publications.

Roach, A. T., and S. N. Elliott. 2005. “Goal Attainment Scaling: An Efficient and Effective Approach to Monitoring Student Progress.” Teaching Exceptional Children 37 (4): 8-17.

Rowe, W. E., M. Graf, N. Agger-Gupta, E. Piggot-Irvine, and B. Harris. 2013. "Action Research Engagement: Creating the Foundation for Organisational Change." ALARA Inc Monograph Series 5. Action Learning, Action Research Association, Inc. Retrieved from Creating_the_Foundations_for_Organisational_Change

Sankaran, S., B. H. Tay, and M. Orr. 2009. "Managing Organisational Change by Using Soft Systems Thinking in Action Research Projects." International Journal of Managing Projects in Business 2 (2): 179-197. http://dx.doi.org/10.1108/17538370910949257

Schon, D. A. 1984. The Reflective Practitioner: How Professionals Think in Action. Vol. 5126. New York, NY: Basic Books.

Smudde, P. M., and J. L. Courtright. 2011. "A Holistic Approach to Stakeholder Management: A Rhetorical Foundation." Public Relations Review 37 (2): 137-144.

Spaapen, J., H. Dijstelbloem, and F. Wamelink. 2007. Evaluating Research in Context: A Method for Comprehensive Assessment. Netherlands: Consultative Committee of Sector Councils for Research and Development (COS). Retrieved from user_upload/qualitaetssicherung/PDF/Weitere_Aktivit%C3%A4ten/Eric.pdf

Spaapen, J., and L. van Drooge. 2011. "Introducing 'Productive Interactions' in Social Impact Assessment." Research Evaluation 20 (3): 211-218.

Srivastava, P., and N. Hopwood. 2009. “A Practical Iterative Framework for Qualitative Data Analysis.” International Journal of Qualitative Methods 8 (1): 76-84.

Stringer, E. T. 2014. Action Research. 4th ed. Thousand Oaks, CA: Sage Publications.

Tolich, M., and C. Davidson. 2011. Getting Started: An Introduction to Research Methods. Auckland: Pearson.

Tremblay, C., and B. L. Hall. 2014. “Learning from Community-University Research Partnerships: A Canadian Study on Community Impact and Conditions for Success.” International Journal of Action Research 10 (3).

Tripp, D. H. 1990. “Socially Critical Action Research.” Theory Into Practice 29 (3): 158-166.

Wadsworth, Y. 2011. Building in Research and Evaluation: Human Inquiry for Living Systems. Walnut Creek, CA: Left Coast Press.

Wadsworth, Y. 1998. "What Is Participatory Action Research?" Retrieved from http://elmo.

Wicks, P. G., and P. Reason. 2009. "Initiating Action Research: Challenges and Paradoxes of Opening Communicative Space." Action Research: 243-262.

Wickson, F., and A. L. Carew. 2014. "Quality Criteria and Indicators for Responsible Research and Innovation: Learning from Transdisciplinarity." Journal of Responsible Innovation 1 (3): 254-273.

Winter, R. 1987. Action-Research and the Nature of Social Inquiry: Professional Innovation and Educational Work. Farnham: Ashgate Publishing Company.

Yin, R. K. 2003. Case Study Research: Design and Methods. 3rd ed. Thousand Oaks, CA: Sage Publications.

Youngs, H., and E. Piggot-Irvine. 2014. "The Merits of Triangulation: The Evaluation of a New Zealand School Leadership Development Program Using Mixed Methods Research." Method in Action Case Studies: SAGE Research Methods Cases. http://dx.doi.org/10.1177/1558689811420696

Zornes, D. 2012. The Business of the University: Research, Its Place in the 'Business', and the Role of the University in Society. Doctoral dissertation. University of Victoria, Victoria.

Zornes, D., L. Ferkins, and E. Piggot-Irvine. 2015. "Action Research Networks: Role and Purpose in the Evaluation of Research Outcomes and Impacts." Educational Action Research 24 (1): 97-114. doi:10.1080/09650792.2015.1045538
