Evaluating recovery services for children and families affected by domestic abuse

An interview with IFF Research and Verian

When families experience domestic abuse, recovery can be a long and complex process, especially for children and non-perpetrating parents, who need support to rebuild trust and resilience. Two innovative programmes, Bounce Back 4 Kids, delivered by PACT, and WeMatter, delivered by Victim Support, are designed to provide timely support to help families navigate this journey. With funding from the Cabinet Office’s Evaluation Accelerator Fund, Foundations commissioned pilot randomised controlled trials (RCTs) of these programmes, led by evaluators IFF Research (Bounce Back 4 Kids) and Verian (WeMatter), to help build a stronger evidence base on what works to support children and families affected by domestic abuse. Through these studies, we found that robust and ethical evaluation is possible in this space.

In this interview, Emma Leith and Kecia Painuthara from Foundations’ Evaluation Team speak to Millie Morgan, Research Manager at IFF Research, and Pieter Cornel, Director at Verian, about what it was like to run the two pilots, some of the challenges and breakthroughs encountered, and what the findings mean for future evaluations in this space.

Evaluating domestic abuse recovery services using RCT methods

Q: How did you collaborate with Victim Support and PACT colleagues to design the evaluation? What were some of the concerns from their perspective and how did you work together to mitigate them? 

Pieter Cornel: We worked closely with Victim Support and Foundations on the WeMatter evaluation design. We had a co-design phase during set-up which was initially meant to be three months, but because of the complexity of the project we needed more time to work through the details. For example, we had to carefully develop the randomisation approach and select the right control group, given the sensitive nature of the service.

We worked collaboratively with Victim Support during the co-design phase to adapt and adjust the proposed evaluation design to ensure delivery wasn’t negatively impacted. For example, we decided to implement a waitlist control design, where participants were randomly allocated either to receive the service straight away or to join a waiting list for a maximum of 14 weeks before starting their WeMatter group. In this way, we could ensure that all children and young people who entered the trial would receive support.
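To make the waitlist design concrete, here is a minimal illustrative sketch of a simple 50:50 allocation between an immediate-start arm and a waitlist arm. The function name, participant IDs, and the plain shuffle are assumptions for illustration only; the actual trial may have used stratified or blocked randomisation and different tooling.

```python
import random

def allocate_to_arms(participant_ids, seed=None):
    """Randomly allocate participants 50:50 to an immediate-start arm or a
    waitlist arm. Illustrative sketch only; the real trial's randomisation
    procedure (e.g. stratification or blocking) is not described here.
    """
    rng = random.Random(seed)  # seeded so the allocation can be audited and reproduced
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {
        "immediate": ids[:half],  # start the service straight away
        "waitlist": ids[half:],   # wait up to 14 weeks before joining a group
    }

# Example: a referral batch of eight anonymised (hypothetical) participant IDs
allocation = allocate_to_arms([f"P{n:03d}" for n in range(1, 9)], seed=2024)
print(allocation["immediate"], allocation["waitlist"])
```

Seeding the generator and logging the allocation is one common way to keep the randomisation transparent and auditable without compromising it.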

Millie Morgan: It was important in the set-up phase to make sure we had buy-in from PACT and a shared understanding of what we were trying to achieve for Bounce Back 4 Kids. We used co-design to develop clear support and information materials and avoided using technical language that could create confusion. It’s critical to work closely and foster that relationship so that you can have an open, honest dynamic between evaluators and delivery partners.

We resolved some of PACT’s initial concerns by building evaluation literacy, for example by running workshops to address questions like: ‘What is a randomised controlled trial? Why do we need to randomise into a waitlist? Why do some people get to start the service immediately?’

Engaging parents, children, young people and lived experience experts in evaluation

Q: Maintaining engagement with the intervention and evaluation activities is often a challenge in trials. How did you keep children, young people, and parents engaged during the pilot and is there anything that you will do differently going forward?

Millie: During the pilot, we struggled to recruit and engage families from diverse backgrounds, which limits the extent to which we can generalise our results. In the full-scale RCT, we’re making our recruitment materials, including the data collection survey, more accessible to meet the needs of a more diverse range of families. We’ll do this by collecting a bit more information at the referral stage about families’ language requirements, ethnicity, and whether their children have any special educational needs. We will then use this information to tailor communications in a way that engages families.

Another learning that we’re taking forward from the pilot is the importance of emphasising the wider benefits of taking part in research. We found that highlighting a family’s contribution to research and the positive difference they could make for other families in similar situations can be quite motivating for them.

Pieter: One of the challenges we had in the pilot was that some children stopped attending WeMatter sessions and did not complete our outcome measures at the end. This meant our data was incomplete, which has implications for the conclusions we can draw about WeMatter. To mitigate the risk of missing data, we have re-introduced catch-up WeMatter sessions so children don’t fall behind their group if they miss a session.

Q: How did you centre the voices of those with lived experience in the design and delivery of the evaluation?

Pieter: Ensuring that the voice of participants, and of people with lived experience, is represented in the design of an evaluation is central to its success. Feedback should be sought at multiple points during the design of the evaluation and of materials for participants, including meaningful consultation with lived experience experts. We worked with two lived experience experts and academic partners from the sector, who reviewed research materials for the interviews conducted with children, young people, and parents. Their feedback improved the researchers’ approach to discussing potentially challenging topics.

I would also want to give a lot of credit and thanks to the Victim Support delivery team, who provided key feedback based on their lived experience as domestic abuse service practitioners. This was critical to successfully implementing a randomised controlled trial design.

Millie: We held a workshop with PACT’s Bounce Back Buddies (parents who have taken part in the intervention before) to test out some of the planned elements of the evaluation, such as our proposed data collection approach. While the validated survey questions could not be changed without compromising their validity, they checked the accompanying language to ensure the wording resonated and did not risk retraumatising participants. Bounce Back Buddies also provided input on BB4K information leaflets and shared the barriers that they or others might face when accessing support and potential mitigations.

We also consulted experts by experience about strategies for reducing the risk of inadvertent disclosure to someone who wasn’t the respondent. This resulted in the introduction of a ‘rapid exit’ button for all data collection browser windows and briefings for telephone interviewers on how to manage risk and avoid disclosures that could put participants in danger.

Reflections on lessons learned and looking ahead to the full-scale trials

Q: In terms of ways of working, what worked well in the pilot and what are you keen to retain for the full-scale trial?

Millie: Of course, there are the basics, such as making sure you’re having regular meetings with your delivery partner and building up that rapport which allows for those more informal discussions where you can all raise and address any concerns. On a more practical level, having clear, timely updates and transparency about where we’re at with data collection is helpful for delivery partners and evaluation teams. For the pilot evaluation, we had a secure shared document that we updated every day to keep us informed about the progress of survey completion throughout the trial. We could check who had completed the survey, who needed follow-up, and who had received their incentives. We’re currently exploring automating these updates for the full-scale trial to reduce the administrative burden and risk of error.
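As an illustration of the kind of automation Millie describes, the sketch below summarises survey completion, follow-ups, and incentive status from a tracking file. The file name, column names, and status values are all assumptions for the example, not the trial’s actual data structure or systems.

```python
import csv
from collections import Counter

def summarise_progress(tracking_csv):
    """Summarise trial survey progress from a (hypothetical) tracking file
    with columns: participant_id, survey_status, incentive_sent.
    survey_status is assumed to be 'completed', 'pending', or 'needs_follow_up'.
    """
    status_counts = Counter()
    follow_ups = []
    incentives_sent = 0
    with open(tracking_csv, newline="") as f:
        for row in csv.DictReader(f):
            status_counts[row["survey_status"]] += 1
            if row["survey_status"] == "needs_follow_up":
                follow_ups.append(row["participant_id"])
            if row["incentive_sent"].strip().lower() == "yes":
                incentives_sent += 1
    return status_counts, follow_ups, incentives_sent

# Example: run once a day (e.g. on a scheduler) and post the summary
# to the shared tracking document instead of updating it by hand.
counts, follow_ups, incentives = summarise_progress("survey_tracking.csv")
print(f"Completed: {counts['completed']}, pending: {counts['pending']}")
print(f"Needs follow-up: {', '.join(follow_ups) or 'none'}")
print(f"Incentives sent: {incentives}")
```

A scheduled summary like this reduces the daily administrative burden and the risk of manual copying errors that Millie mentions.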

Pieter: It’s been helpful to have had the pilot study because we know so much more now about what worked well and what could work better. Like Millie, we’re also implementing new strategies in the full-scale trial to alleviate the administrative burden on frontline staff as much as possible. Verian is now taking on all the data collection (whereas baseline and endline data were previously collected by WeMatter facilitators), which ensures that the process is independent and doesn’t take practitioners away from supporting vulnerable children and young people.

Q: What advice would you give to other evaluators who might be evaluating similar services using RCT (and other) methods?

Millie: I would emphasise the importance of triangulation and of not depending solely on validated measures for measuring impact. This is something we refined during the pilot. Obviously, validated measures are key to high-quality impact evaluation, but we also incorporated some non-validated self-report questions into the survey about changes in parents’ confidence over time.

Deep-dive interviews to understand how the programme impacts families from their perspective are so important too, so that we have a richer picture of how parents are feeling and not just numbers on a scale.

Pieter: The main thing that comes to mind is “don’t bite off more than you can chew”. This is an under-evaluated area for a reason; it is highly sensitive and there are so many complexities when working with vulnerable families. So, I think really starting from the basics makes a big difference. Try to hedge against optimism bias in terms of timelines, data quality, and how easy it is to recruit a large enough sample.

I think the other thing we need to keep in mind as evaluators is to be humble. This is the case with any project, any sector, but particularly in the domestic abuse space. As researchers, you always want things to be clean and fit in the box neatly. And that just isn’t the case in a lot of situations for families. So, keeping that room for flexibility is something that we’re taking forward as well.


Findings from both pilot RCTs were published at the end of November. You can find out more about the BB4K study here and more about the WeMatter study here. For more detail on the work that we’ve been doing as a part of the REACH Plan, visit this page.
