Implementation & process evaluation of the Changemakers programme

Summary

This protocol summarises plans for evaluating the Changemakers programme in four local areas. The evaluation will explore how local evidence leadership can be operationalised, and will attend to the local barriers and enablers that shape the effective implementation of evidence-based interventions.

Who, what, why and how?

Foundations has commissioned Cordis Bright to undertake this implementation and process evaluation of the Changemakers programme. Changemakers works with local area leaders to address stubborn barriers to the adoption and sustainability of evidence-based interventions within complex local systems, by strengthening evidence use, mobilising effective implementation strategies, and embedding evidence leadership.

There is established evidence that many evidence-based interventions, delivered as early intervention, can deliver positive long-term outcomes for babies, children, young people and families. Changemakers aims to close the gap between what we know works and how effectively evidence is used and interventions are implemented in place and over time. It deploys Local Evidence Leaders (LELs) in local areas alongside funding for selected evidence-based interventions. LELs can facilitate critical engagement with evidence, support the allocation of resources, and promote evidence use in local partnerships, attending to the relationships and connections that help stakeholders thoughtfully engage with and implement appropriate research evidence.

Cordis Bright will evaluate how Changemakers has influenced effective implementation and evidence use in local areas, to help ensure that interventions reach the right families in the right way at the right time. This includes exploring what difference has been made through Local Evidence Leaders championing evidence-based practice, strengthening partnerships to work strategically together and align goals, and leading discovery and tests of change.

Research Questions

The evaluation adopts Proctor et al.’s Implementation Outcomes Framework (2011) to understand how Local Evidence Leaders influence local systems, stakeholder engagement and evidence use behaviours. Research questions will respond to the exploratory nature of Changemakers as an innovative programme, taking a test-and-learn approach to further understand the ways in which Local Evidence Leaders can be influential in championing and facilitating evidence use and evidence-based decision making in local systems.

This will be achieved through a mixed-methods design that aims to capture the complexity of implementation of the Changemakers programme across diverse local authority contexts. Quantitative and qualitative methods will allow for a comprehensive understanding of the processes, outcomes and impacts associated with the programme.

Research questions are organised under four themes:

  1. Programme theory validation: To what extent is Changemakers’ Theory of Change rooted in evidence?
  2. Implementation feasibility: To what extent has the deployment of Changemakers followed the dimensions of implementation: adoption; fidelity and adaptation; acceptability, appropriateness and feasibility; penetration and integration; and cost and sustainability?
  3. Programme differentiation: Does Changemakers work differently in certain conditions?
  4. Perceived impacts: To what extent does Changemakers show evidence of promise – and why?

Delivery Partners

Four local authorities: Merton Council, Stockport Council, Wirral Council and City of York Council, supported by Foundations’ Practice Development Team.

Evaluation partners

Cordis Bright

Due Date

This project is due to be completed by August 2026.

Downloads

Changemakers evaluation protocol

Changemakers intervention protocol

Cost ratings:

This rating is based on information that programme providers have supplied about the components and requirements of their programme. On this basis, EIF rates the cost of setting up and delivering each programme, compared with other interventions it has reviewed, on a scale from 1 to 5, where 1 indicates the least resource-intensive programmes and 5 the most resource-intensive.

1: Low cost, equivalent to an estimated unit cost of less than £100.

2: Medium-low cost, equivalent to an estimated unit cost of £100–£499.

3: Medium cost, equivalent to an estimated unit cost of £500–£999.

4: Medium-high cost, equivalent to an estimated unit cost of £1,000–£2,000.

5: High cost, equivalent to an estimated unit cost of more than £2,000.

Child Outcomes:

Supporting children’s mental health and wellbeing

Preventing child maltreatment

Enhancing school achievement & employment

Preventing crime, violence and antisocial behaviour

Preventing substance abuse

Preventing risky sexual behaviour & teen pregnancy

Preventing obesity and promoting healthy physical development

Evidence ratings:

The evidence ratings distinguish five levels of strength of evidence. This is not a rating of the scale of impact but of the degree to which a programme has been shown to have a positive, causal impact on specific child outcomes.

Level 2: Recognises programmes with preliminary evidence of improving a child outcome, but where a conclusion of causal impact cannot yet be drawn.

Level 2+: The programme will have observed a significant positive child outcome in an evaluation meeting all of the criteria for a Level 2 evaluation, but also involving a treatment and comparison group. There is baseline equivalence between the treatment- and comparison-group participants on key demographic variables of interest to the study and on baseline measures of outcomes (when feasible).

Level 3: Recognises programmes with evidence of a short-term positive impact from at least one rigorous evaluation – that is, where a judgment about causality can be made.

Level 3+: The programme will have obtained evidence of a significant positive child outcome through an efficacy study, and may also have additional consistent positive evidence from other evaluations (occurring under ideal circumstances or in real-world settings) that do not meet these criteria, thus keeping it from receiving an assessment of 4 or higher.

Level 4: Recognises programmes with evidence of a long-term positive impact through multiple rigorous evaluations. At least one of these studies must have evidence of improving a child outcome lasting a year or longer.