So: does the Troubled Families Programme work or not? – Part Two

Jason Lowther

In this blog last week I outlined the results of the “impact evaluation” element of the Troubled Families Programme (TFP) and the rather limited pre-existing evidence base on which the TFP had to be built. How can government build on existing evidence in designing its initiatives, and what can we do when there isn’t much in the evidence cupboard?

Many government programmes have the luxury of a relatively strong evidence base on which to build. The previous Labour government’s National Strategy for Neighbourhood Renewal and Sure Start programmes, for instance, could draw on decades of research (collated through the 18 Policy Action Teams) on urban initiatives and the impact of early years experiences on achievements in later life. These programmes sometimes honoured the extant evidence more in theory than in practice[i], but at least they had foundations on which to build.

As evaluations of the Labour government’s Crime Reduction Programme found[ii], it is difficult to translate evidence, which is often “fragmented and inconclusive”, into practical government programmes. People skilled at this task are in short supply in central government.

But in the case of the TFP, the most robust element of the existing evidence base was a single evaluation, using a “control” group of just 54 families, focussed on addressing anti-social behaviour through Family Intervention Projects. What can government do when the evidence base is this thin?

One strong tradition, particularly in medicine and in US welfare policy, has been the idea of “experimental government”: using social experiments to determine whether (and if so how) innovative approaches work in practice. For example, in the last three decades of the 20th century, America’s Manpower Demonstration Research Corporation (MDRC) conducted 30 major random assignment experiments involving nearly 300,000 people.

Historically, randomised controlled trials (RCTs) were viewed by many as the “gold standard” of evaluation because they allow statistically robust assessment of “causality” – whether observed changes are due to the intervention being evaluated rather than to something else. More recent thinking emphasises that evaluations should be designed to generate robust evidence for the specific questions being asked, often through a mixture of quantitative and qualitative methods. The TFP evaluation used such a mixture, but without building in a “control” group of “troubled families” not yet receiving the TFP interventions.
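To see why random assignment matters, consider a toy simulation (a sketch only: the 2,000 families, the outcome scale and the assumed 5-point effect are invented for illustration and have nothing to do with the actual TFP data):

```python
import random

random.seed(42)

# Hypothetical families, each with an unobserved baseline level of
# difficulty that drives their outcome with or without the intervention.
families = [random.gauss(50, 10) for _ in range(2000)]

# Random assignment: a coin flip decides who gets the intervention, so
# treated and control groups are alike on average at baseline.
treated, control = [], []
for baseline in families:
    if random.random() < 0.5:
        # Assumed (purely illustrative) effect: +5 points, plus noise.
        treated.append(baseline + 5 + random.gauss(0, 5))
    else:
        control.append(baseline + random.gauss(0, 5))

# Because assignment was random, the simple difference in mean outcomes
# is an unbiased estimate of the intervention's effect (about 5 here).
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated effect: {effect:.1f} points")
```

Without the randomised control group, the same comparison would mix the intervention’s effect with whatever made the helped families different in the first place – which is precisely the gap left by the TFP evaluation design.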

Granger[iii] argued, for area-based initiatives, that the range and variety of initiatives and the scale of change in government make a strict statistical “control” unfeasible. Precise and clear-cut causal attribution is “virtually impossible”, she argued, so we need clear, strong theories as a basis for counterfactual reasoning and causal inference.

The TFP evaluation did not develop or test a “theory of change” for the programme. This is a pity, because rigorously testing a theory can help illuminate where and how programmes do (or don’t) have real impact.

There are several other lessons we can learn from the existing literature on evaluation in government, for example the importance of timing and data quality. We’ll look at these next time.

[i] Coote, A., Allen, J. and Woodhead, D. (2004) Finding Out What Works: Building Knowledge About Complex, Community-Based Initiatives. London: King’s Fund, esp. pp. 17-18.

[ii] Nutley, S. and Homel, P. (2006) ‘Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme’, Evidence & Policy: A Journal of Research, Debate and Practice, 2(1), pp. 5-26.

[iii] Granger, R. C. (1998) ‘Establishing causality in evaluations of comprehensive community initiatives’, New Approaches to Evaluating Community Initiatives, 2, pp. 221-46.

Jason Lowther is a senior fellow at INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies.  Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther
