The drive to measure

But where is the drive to measure coming from? And what does “measurement” really mean in practice?

There are a number of factors influencing theory and practice around measurement in work with young people:

• A growing need to demonstrate value for money, accountability, and transparency in all areas of public spending

• The rise of Payment by Results and outcomes-based commissioning

• The changing profile of philanthropists and the growth of social investment—leading to a greater interest in the impact of investments, and a more “business-like” approach

• An increasing need to address inequality, disadvantage, and exclusion

• The reduction in public spending, meaning services need to be better targeted.

This is unlikely to be a temporary departure from normal service. These shifts represent a fundamental reshaping of how work with young people is funded, commissioned, and delivered, and of the way investment flows into our services. It is a development we need to embrace and to feel confident in, and part of this is about understanding the benefits of building the evidence base for our work. So why should this matter to us? And what do all these terms mean?

Where do outcomes and impact fit in? And how do we know whether we are measuring, monitoring, or evaluating?

The measurement and evaluation of the impact of services is an important part of the wider cycle of planning and commissioning. Findings about what has worked, for whom, and why, are a key output of the commissioning process and need to be fed back to inform future provision.

Outcomes can be defined as the changes your service or programme creates. Impact is the cumulative difference you make, not only to your target group but also to the wider community. It is the change that you can attribute to your activities, not what would have happened anyway.

Outcomes and impact can be measured, monitored and/or evaluated. It is important to understand the difference.

Measurement refers to the gathering of data, using a standard unit of measurement, or a measure. This may be counting something (qualifications or accreditation for example) or, in our context, using a particular tool or approach to capture change or difference. Measurement may be for the purpose of monitoring, or for evaluation.

Monitoring involves collecting, analysing and learning from information. This might be tracking the numbers of young people attending a group or programme, exploring whether there are differences based on gender or age, for example, and changing the activities provided in response. It could also include seeking regular feedback from young people, and using this to inform the development of new programmes. It could also involve using a tool like the Mental Toughness Questionnaire, and recording young people's scores over time.

Evaluation involves making judgements about whether or not a project or programme “works”—whether it has had the impact you intended. Evaluation goes beyond monitoring, and is dependent on a number of other factors, such as being clear about what would have happened to the young people you work with if they had not been involved in your provision: the so-called “counterfactual”.

So why do it? Monitoring and evaluation can be time consuming and challenging. They can reveal difficult findings that you might not want to know, and pose even more questions. However, they should be a vital part of what we do. There are a number of reasons why measurement is critical, particularly in work with young people:

• The impact of negative outcomes on young people and communities: negative outcomes for young people, such as unemployment, poor mental health, early unplanned parenthood or debt, have far-reaching consequences, impacting on their lives well into adulthood and, in many cases, on the lives of their children too. Young people can bear the scars of these outcomes for a lifetime, and it is vital that we understand the services that can prevent and protect against these outcomes, and support them to grow their reach.

• The costs associated with poor outcomes: the financial cost of unemployment, poor health and family intervention are high, and the social costs can be higher. It is essential that available funds are spent on effective provision that offers value for money.

• The potential to improve outcomes and prevent harm: not all services for young people are beneficial, particularly to those who are most vulnerable or disadvantaged. Equally, we know that many programmes and services can have a transformative effect on young people's lives. Robust measurement can ensure that harmful interventions are recognised and stopped, and that the potential to transform lives is realised.

• Reliance on public funding: many services working with young people are heavily reliant on public funding, and have been so historically. There is stiff competition to provide services to young people now, and commissioners and funders want robust evidence that the services they are being asked to invest in make a difference and offer value for money.

• The importance of advocacy: many organisations working with young people do not just provide services for young people, but also advocate on their behalf. Many organisations seek to change society's perceptions of young people. Better measurement could strengthen the case these organisations can make, and help to demonstrate the effectiveness of services in supporting young people to flourish as positive members of our communities.

• The potential to influence policy and practice: if providers were able to evidence the effectiveness of their services, this could shift policy and practice towards models of earlier intervention and support, and ultimately change policy where it bears particularly on the most vulnerable and disadvantaged.

Stuart et al.1 usefully summarise the benefits of monitoring and evaluation in work with young people:

• it links individual learning and its impact to both the programme's aims/objectives and the business's needs

• it is a natural part of review for individuals and organisations

• it can clarify what the programme is trying to achieve (content) and how (process)

• it establishes where the programme is working well and where further improvements are needed

• it closes the loop by feeding back progress against business needs.

But we know there are challenges. The evidence base for personal and social development, and indeed for wider services for young people, is patchy and has not benefited from sustained focus or attention. Making causal links between the work we do and the outcomes and impact for young people is hard, and there is little agreement on which sources of value correspond to our work, or on the best way to assess them.
