“Good data”: what counts as evidence?

When the impact of a service or programme is measured, we are effectively gathering data to grow an evidence base. Two factors must be considered when thinking about what counts as evidence: reliability and validity. Reliability means that data gathering produces consistently robust results over time, even when working with different groups of young people and with different staff using the measure. Validity means measuring the outcomes that matter, and that the programme intends to have an impact on. This is easier said than done: many services feel they should be measuring impact on long-term outcomes such as employment, even when the programme itself focuses on social and emotional capabilities.

There are three key factors that are important in producing good evidence:

• well-defined outcomes—the difference you want to measure

• metrics that provide an accurate standard for measuring that outcome

• a robust methodology—the approach to measuring (or gathering data) fairly.

Figure 2. Calculating value.3

Making the case about the impact of a programme, and in particular attributing savings, means showing that the programme in question causes improvements in the lives of young people. This means, as far as possible, ruling out other possible causes for these improvements. The normal way to do this is to use a fair comparison. The comparison may be with outcomes for the same young people before they took part in the programme, or with another group of similar young people who are not taking part in the programme. If the only significant difference between the two groups is the programme, then we can say the programme is very likely to have caused the difference. For example, with measures of social and emotional capability, testing using an externally validated score both before and after the programme would normally imply that the programme itself has caused an improvement.
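To make the idea of a fair comparison concrete, the short sketch below (in Python, with entirely made-up scores on a hypothetical ten-point capability measure) shows one simple way the logic can be applied: the average improvement seen in a similar comparison group is subtracted from the average improvement seen in the programme group, leaving the part of the change that can reasonably be attributed to the programme.

```python
# Illustrative sketch only: the scores are invented, and the ten-point
# capability measure stands in for whichever validated tool is used.
from statistics import mean

# (before, after) scores for young people who took part in the programme
programme_group = [(4, 7), (5, 8), (3, 6), (6, 8), (5, 7)]

# (before, after) scores for a similar group who did not take part
comparison_group = [(4, 5), (5, 5), (3, 4), (6, 6), (5, 6)]

def average_change(pairs):
    """Mean improvement between the two measurement points."""
    return mean(after - before for before, after in pairs)

programme_change = average_change(programme_group)    # 2.6 points here
comparison_change = average_change(comparison_group)  # 0.6 points here

# If the two groups really are similar, the gap between the two changes
# is the improvement we can attribute to the programme itself.
attributable = programme_change - comparison_change
print(f"Improvement attributable to the programme: {attributable:.1f} points")
```

In this invented example the programme group improves by 2.6 points and the comparison group by 0.6, so the estimate of the programme's contribution is 2.0 points rather than the raw 2.6.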

To make robust statements about the difference a programme makes to the lives of young people, data on young people's social and emotional capabilities must be collected using a validated scoring tool, and we must be able to compare the results with a population average spread of scores.

This approach is not widespread in services for young people, and as a consequence the evidence base for our work is not as strong as it could be. Traditionally, we have relied too much on before and after measures with no comparison group (that is, asking young people what they think at the start and end of a programme, but not comparing results with young people who are not taking part), retrospective reports (for example, asking a young person to look back to the beginning of a programme and say what they think has changed), and unvalidated measures created for specific programmes (such as in-house questionnaires). Validated tools are based on extensive trialling and research which, over time, provides a sound evidence base that can substantiate claims about “average results” and produce similar scores for individuals in similar situations: a “population average”. This means that if person A scores themselves a five out of ten, we know this indicates roughly the same as a five out of ten for person B. Without this, it is hard to add up scores for groups of individuals in a way that is meaningful.
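The sketch below illustrates, again with invented figures, why a population average matters: when a tool has published norms, a raw score can be expressed as a distance from the population average, so the same score means roughly the same thing for different young people and cohort scores can be added up and compared meaningfully. The population mean and spread shown are assumptions standing in for the norms of a hypothetical validated tool.

```python
# Illustrative sketch only: the population figures below are invented,
# standing in for the published norms of a hypothetical validated tool.
from statistics import mean

POPULATION_MEAN = 6.2   # assumed population average on a ten-point scale
POPULATION_SD = 1.5     # assumed population spread (standard deviation)

def standardised(score):
    """Express a raw score as a distance from the population average,
    in units of the population spread (a z-score)."""
    return (score - POPULATION_MEAN) / POPULATION_SD

# Because the scale is anchored to population norms, a five out of ten
# means roughly the same thing whoever records it, so cohort scores can
# be added up and compared with the wider population.
cohort_scores = [5, 7, 4, 6, 8]
cohort_average = mean(standardised(s) for s in cohort_scores)
print(f"Cohort average relative to the population: {cohort_average:+.2f} SDs")
```

An in-house questionnaire with no such norms cannot support this kind of comparison, which is one reason unvalidated measures limit the claims that can be made.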

Developing your approach: finding what works

Building confidence in the links between social and emotional capabilities and longer-term outcomes for young people is only part of the story. Consistently and robustly measuring the difference that services make to these capabilities—and why—is critical in developing the evidence base for the value of services for young people.

There are many measurement tools and techniques available, some well known and widely used, others less so. Different types of tools will produce very different types of evidence. Some tools can be used for evaluation and others for monitoring. There will be a range of reasons for selecting certain tools or approaches: the time involved in using the tool, the level of expertise required, the demands placed on young people, the cost, and the standard of evidence achieved. Tools can be more appropriate for diagnosis (understanding the needs and wants of young people) than for performance management (how well those needs were met), and it is important to exercise caution regarding the conditions under which a tool is used. Tools used in isolation may give restricted or narrow information, and do not always provide an objective picture. It can be beneficial to use tools alongside other approaches, such as case studies or witness testimonies, in order to triangulate, or verify, the information gathered.

Deciding on an approach to measurement involves thinking through a number of questions:3

• What is the question you are seeking to answer?

Reflecting on the question you are seeking to answer will shape the evidence you need to gather. Your question may be more about monitoring (“how can I understand the distance travelled by the young people we work with?”) or evaluation (“what is the difference my service makes to young people who would otherwise not access such support?”). It is also useful to think through the questions that others might have: what would parents want to know? What about teachers, funders, or commissioners? And young people themselves?

• What standards of evidence do you want to achieve?

The approach to measurement also needs to be shaped by the standards of evidence you want to achieve. Different approaches, such as case studies or validated measures, will produce very different types of evidence. Different evidence enables you to draw different conclusions, such as the extent to which you can compare one service with another.

• What is proportionate?

A provider working with a small group of young people over a short time scale may decide on a reduced level of measurement, which is proportionate to that cohort. Alternatively, if a provider wants to take a particularly rigorous approach, it may opt to work with a sample of young people in the first instance, rather than a larger group or the whole cohort.

Proportionality also relates to how often you measure: taking measurements at the beginning, middle, and end of a short programme might be burdensome, whereas the same schedule may be too infrequent on longer programmes. This also needs to be considered from the perspective of young people, in terms of what proportion of their time with you is taken up with measurement or evaluation.

• Who are you working with, and how?

This involves thinking about the young people you are working with and how you work with them, as well as the agencies and individuals with whom you have relationships.

The young people you work with, and how you work with them, will influence your practical approach to measurement. This may be because you work more in a group work setting than one to one, for example, or because the young people you work with have a disability such as a visual impairment or an autistic spectrum disorder.

Similarly, thinking about who else you work with (schools, local authorities, funders, parents, and so on) can highlight who you need to communicate your impact to, and how. Different stakeholders will respond to different types and standards of evidence.

• What outcomes are you focused on?

The priority outcomes for a programme will influence the approach to measurement. This closely relates to the tool chosen but also when and how often a tool is used, and in what setting. It is also useful to consider what other information might be helpful, and how others can assist. Asking a referral agency for information on next destinations, for example, can add colour or depth to your data, as can asking a school or other institution for wider information about a young person's progress.

• What resources are available?

In practice, available resources often play a strong role in determining the approach to measurement. Resources can include funding to purchase tools and associated training, access to IT systems, or time to embed an approach across a service. Different approaches will make very different resource demands. This is also important to consider in how data are used. A paper-based approach has little value, for example, if there is no capacity or process to feed the data into a wider system which enables learning from the findings.

 