Social science perspectives on evidence use and bridging the gap
The breadth and volume of work in this area have led to a number of attempts to review the literature and distil lessons of ‘what works’ for knowledge transfer, or to identify common ‘barriers’ or ‘facilitators’ to evidence use (cf. Contandriopoulos et al. 2010; Davies, Powell and Nutley 2015; Langer, Tripney and Gough 2016; Mitton et al. 2007; Oliver et al. 2014). Yet a number of academic authors have pointed to the conceptual challenges involved in the ways that evidence utilisation has been promoted or studied in this field. Oliver and colleagues, for instance, undertook a ‘critical analysis of the literature’, which concluded that: ‘Much of the research in this area is theoretically naive, focusing primarily on the uptake of research evidence as opposed to evidence defined more broadly, and privileging academics’ research priorities over those of policymakers’ (2014, p. 1).
Similarly, Smith has synthesised a number of existing reviews to distil the most common recommendations for improving knowledge use, such as ensuring that research is accessible, supporting relationships between researchers and decision makers, improving communication channels and providing incentives for evidence utilisation (2013, pp. 20-21). Reflecting on this body of work, however, she also notes that ‘the most popular recommendations . . . focus on mechanisms for increasing the chances that particular research projects will be employed by policy makers. This is distinct from trying to improve the use of research in policy’. She further explains that ‘an assumption which is implicit within a great deal of the scholarship on the relationship between research and policy [is] that the use of research is a priori a positive outcome’ (2013, p. 23, emphases in original).
Indeed, one of the biggest challenges facing the knowledge transfer literature has ultimately been the fairly simplistic way in which evidence or research ‘use’ is discussed. Typically ‘use’ is discussed as a single binary variable - as if evidence can be ‘used’ or ‘not used’, ‘taken up’ or ‘not taken up’. There is a further assumption that all actors would agree that research utilisation is a positive thing, as Smith explains in the above quote. However, as described in the previous chapter, critical authors note that there may in fact be many bodies of evidence relevant to a policy decision, with no simple agreement over which ones should be used or when. Social scientists have further explained that there can be many ways to conceptualise evidence use other than simply the direct uptake or implementation of findings from a particular research study. Much writing on this subject points to the work of Carol Weiss, who, in the 1970s, constructed a framework that classifies seven distinct models of ‘research utilisation’ for the social sciences, which are summarised below (Weiss 1979): [1]
Building on such work, Nutley et al. (2007) produced a comprehensive volume exploring the many ways through which research informs public services. They include Weiss’ seven meanings of research use, but also describe a number of other typologies that have been developed over the years. The authors note that a common distinction made in such typologies is between instrumental use - seeing research directly influence policy and practice - and conceptual use, which captures ‘the complex and often indirect ways in which research can have an impact on the knowledge, understanding, and attitudes of policy makers and practitioners’ (2007, p. 36). Nutley et al. explain that it is policy makers in particular who tend to use research in these more conceptual, rather than directly instrumental, ways, noting that: ‘Policy makers say that while research is often interesting and helpful . . . it most often “informs” policy, rather than providing a clear steer for action’ (2007, p. 37).
Despite this fairly extensive body of work mapping out the multiple ways in which research can be utilised in policy processes, the EBP literature still overwhelmingly reflects the idea that evidence use is a technical problem-solving exercise (Greenhalgh and Russell 2009). This focus on problem solving shows its limitations quite quickly, however, when one considers how few policy decisions actually fit the model. Cairney, for example, refers to the EBP approach as capturing an idea of ‘comprehensive rationality’ - an ‘optimal’ policy process which rests on a large number of ‘rather unrealistic assumptions about who is involved, what they represent, and the best way to make policy’ (2015, p. 15). Similarly, Weiss herself explains that:
It probably takes an extraordinary concatenation of circumstances for research to influence policy decisions directly: a well defined decision situation, a set of policy actors who have responsibility and jurisdiction for making the decision, an issue whose resolution depends at least to some extent on information, identification of the requisite informational need, research that provides the information in terms that match the circumstances within which choices will be made, research findings that are clear-cut, unambiguous, firmly supported, and powerful, that reach decision-makers at the time they are wrestling with the issues, that are comprehensible and understood, and that do not run counter to strong political interests. Because [the] chances are small that all these conditions will fall into line around any one issue, the problem-solving model of research use probably describes a relatively small number of cases.
(1979, p. 428)
Yet, somehow, this ‘relatively small number of cases’ has become the template for the vast majority of work aiming to improve how evidence informs policy.
- [1] 1 Knowledge-driven - research identifies problems, through basic science, to then solve using applied research (based on a natural science model).
  2 Problem-solving - the most common model for ‘research utilisation’ thinking, which ‘involves the direct application of the results of a specific social science study to a pending decision’ (Weiss 1979, p. 427).
  3 Interactive - a back-and-forth process of learning between policy makers and multiple sources of information, including research.
  4 Political - research used as ‘ammunition’ for pre-decided policy positions.
  5 Tactical - research undertaken to deflect criticism or to show that ‘something’ is being done, even if the findings are irrelevant.
  6 Enlightenment - an indirect way through which social science research influences thinking more broadly or generally, including working to identify problems or convert them into ‘non-problems’.
  7 Part of the social intellectual enterprise - social science research as an intellectual pursuit of society, responding to the ‘fads and fancies’ of the time.