Troubled Families: How experimenting could teach us “what works?”

Jason Lowther


In my blog on 3rd February, I explored the formal Troubled Families Programme (TFP) evaluation and the lessons it offers on timing and data quality. This week I want to consider how designing a more experimental approach into this and future programmes could yield far more insight into what works where.

The idea of an “experimental” approach to policy and practice echoes early empiricists such as Francis Bacon (1561–1626), who promoted a system of knowledge built on careful experimentation. Donald Campbell[1], writing on ‘reforms as experiments’, argued that social reforms should be routinely linked to rigorous experimental evaluation. ‘Social engineering’ built on ‘social experiments’ became a popular concept in the USA and in social science.

Social experiments in America included work responding to the concern that providing even modest income subsidies to the poor would reduce people’s motivation to find and keep jobs; Rossi and Lyall (1976) showed that the work disincentives were in fact smaller than anticipated. In the field of prison rehabilitation, Langley et al. (1972) tested whether group therapy reduced re-offending; the results suggested that it did not.

Unfortunately, meaningful experiments proved more difficult to deliver in the field than anticipated, and even robust experiments were often ignored by policy makers. As a result, this experimental approach fell out of favour in social policy until recently, surviving mainly in medicine.

The term ‘evidence-based medicine’ appears to have been first used in the early 1990s by investigators at McMaster University in Canada, where it was defined as ‘a systemic approach to analyze published research as the basis of clinical decision making’. The evidence-based medicine movement regarded experiments – specifically, collections of Randomised Controlled Trials (RCTs) synthesised through systematic reviews – as the “gold standard” of proof of whether interventions “work” or not.

Randomised controlled trials are not always easy to undertake in social policy environments, but they can be done and they can produce surprising results. Starting in 2007, Birmingham City Council used RCTs[2] to evaluate three evidence-based programmes delivered through regular children’s services systems. We found that one programme (Incredible Years) yielded reductions in negative parenting behaviours, reductions in child behaviour problems, and improvements in children’s relationships, whereas another (Triple-P) had no significant effects.

What was interesting for practitioners was that the children in all of the trials had shown improvements in their conduct. Only by using a formal “control” group could we see that the “untreated” children were also improving, and so separate out the additional impact of the intervention programmes.
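To make that logic concrete, here is a minimal sketch in Python, using entirely invented numbers rather than results from the Birmingham trials, of how comparing against a control group isolates a programme’s additional effect:

```python
# Illustrative numbers only: these are invented scores, not data from
# the Birmingham trials. Higher "conduct" scores are better.

treated_before, treated_after = 40.0, 55.0   # group receiving the programme
control_before, control_after = 40.0, 48.0   # comparable "untreated" group

naive_change = treated_after - treated_before        # 15.0: looks impressive on its own
background_change = control_after - control_before   # 8.0: but untreated children improved too
programme_effect = naive_change - background_change  # 7.0: the additional impact

print(f"Naive before-and-after change: {naive_change:.1f}")
print(f"Change in the control group:   {background_change:.1f}")
print(f"Estimated additional effect:   {programme_effect:.1f}")
```

The before-and-after change alone overstates the programme’s impact; subtracting the change that happened anyway in the control group leaves the part the programme can plausibly claim.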

There are a number of lessons from this and other past experience that can help practitioners deliver robust trials to test whether innovations are working (or not). The most important is: build the evaluation into the design of the programme. The Troubled Families Programme could have built an RCT into its rollout – for example, by selecting first-year cases randomly from the list of families identified as eligible for the scheme, by introducing the scheme in some council areas a year earlier than others, or by councils themselves rolling out the approach gradually across different area teams.
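As a rough illustration of the first option, here is a minimal sketch in Python of drawing first-year cases at random from an eligibility list; the family identifiers, the number of slots and the assign_first_year function are all hypothetical, invented for this example rather than taken from the actual TFP process:

```python
import random

# Hypothetical illustration: family IDs, slot numbers and this function
# are invented for the sketch, not part of the actual TFP process.
eligible_families = [f"family-{i:04d}" for i in range(1, 201)]

def assign_first_year(families, slots, seed=2012):
    """Randomly draw which eligible families start in year one.

    Families not drawn wait until later years, forming a natural
    control group in the meantime."""
    rng = random.Random(seed)          # fixed seed keeps the draw auditable
    first_year = rng.sample(families, slots)
    waiting = [f for f in families if f not in first_year]
    return first_year, waiting

first_year, control = assign_first_year(eligible_families, slots=80)
print(f"{len(first_year)} families start in year one; "
      f"{len(control)} wait and serve as the comparison group")
```

Fixing the random seed means the draw can be re-run and audited later – useful when allocation decisions affect real families and need to be defensible.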

Sandra Nutley and Peter Homel’s review[3] of the New Labour government’s Crime Reduction Programme stressed the importance of balancing three tensions: fidelity to “evidence-based” policy (to maximise the chance of impact) versus innovation (to ensure relevance to the local context); short-term wins versus long-term learning; and evaluator independence (to ensure rigour) versus engagement (to help delivery).

Next time, in my final blog on the TFP, I will explore how “theory-based” approaches to evaluation can help us understand “what works and why?” in this and other policy areas.

Campbell, D. T. and Russo, M. J. (1999) Social experimentation. Sage Publications, Inc.

Langley, M., Kassebaum, G., Ward, D. A. and Wilner, D. M. (1972) Prison Treatment and Parole Survival.

Nutley, S. and Homel, P. (2006) ‘Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme’, Evidence & Policy: A Journal of Research, Debate and Practice, 2(1), pp. 5-26.

Rossi, P. H. and Lyall, K. (1976) Reforming Public Welfare. New York: Russell Sage.

Sanderson, I. (2002) ‘Evaluation, policy learning and evidence‐based policy making’, Public administration, 80(1), pp. 1-22.

White, M. (1999) ‘Evaluating the effectiveness of welfare-to-work: learning from cross-national evidence’, Evaluating Welfare to Work. Report, 67.

[1] Campbell, D. T. and Russo, M. J. (1999) Social experimentation. Sage Publications, Inc.

[2] Little, M. et al. (2012) ‘The impact of three evidence-based programmes delivered in public systems in Birmingham, UK’, International Journal of Conflict and Violence, 6(2), pp. 260-272.

[3] Nutley, S. and Homel, P. (2006) ‘Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme’, Evidence & Policy: A Journal of Research, Debate and Practice, 2(1), pp. 5-26.




Jason Lowther is a senior fellow at INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies.  Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther
