In this blog last week I outlined the roller coaster trajectory of the Troubled Families Programme in the media, from saviour of all England’s most “troubled families” to a wasteful and failed £1bn vanity project, in under five months. This despite independent evaluators finding that the programme has radically transformed support for these families, and the families themselves saying it has worked for them.
In most government evaluations, that is where the story would stop. Yet another tremendously successful project from Whitehall. But the DCLG (with a little encouragement from Treasury) were much braver. They wanted to know how many of these improvements would have happened in any case, even without the Troubled Families Programme (TFP). This is a dangerous question to ask. And quite a tough one to answer.
In October, two months after Newsnight leaked the report and claimed that the Government had sat on the interim findings for a year, DCLG published the independent evaluators’ answer to that question. In measured academic language it concluded:
“we were unable to find consistent evidence that the Troubled Families Programme had any significant or systematic impact” (p.69).
Jonathan Portes, one of the authors of the evaluation report, said in a personal blog post at the time that he felt it showed the Programme was:
“a perfect case study of how the manipulation and misrepresentation of statistics by politicians and civil servants – from the Prime Minister downwards – led directly to bad policy and, frankly, to the wasting of hundreds of millions of pounds of taxpayers’ money.”
But the TFP evaluation is not a textbook case of well planned, robust independent analysis of a systematically implemented evidence-based intervention. The TFP was originally designed on a very limited evidence base. Often for excellent reasons, it has been implemented differently in different local areas. The evaluation kicked in too soon to capture critical effects on these families in deeply challenging situations. The data used was inadequate. The evaluators had to construct their own comparison group. The approach didn’t capture any of the richness or diversity of local working. And there was little engagement between triumphalist politicians and deeply critical commentators. I’ll explore each of these issues in coming blogs.
The original evidence base for the TFP was very limited, as I outlined in my blog in the Guardian back in 2013. There was only a single robust evaluation involving a “control” group of 56 families: that of the Family Intervention Programme (FIP) targeted at perpetrators of Anti-Social Behaviour (ASB) between 2007 and 2011.
The FIP evaluators found:
“ASB FIPs reduce crime and ASB issues amongst the families they work with. In addition, within our sample, education and employment outcomes are notably, although not statistically significantly, better for a FIP. However, there is little evidence that ASB FIPs generate better outcomes than ‘non-FIP’ interventions on family functioning or health issues” (p.83).
They estimated the FIP families were roughly twice as likely to resolve crime and ASB issues (59% fully achieved, compared to 29% in the control group).
This lack of an extensive evidence base of course makes it even more important that a programme is designed and delivered in a way that enables reliable learning, as local areas experiment with new ways of working and observe the results. We know quite a lot from previous programmes about how to do this, and the pitfalls to avoid.
In my blog next week I’ll explore what these lessons are, and whether they were applied in the TFP.
Jason Lowther is a senior fellow at INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies. Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther.