Daniel Silver and Stephen Crossley
With local government funding being stretched to breaking point over the last decade, it is more important than ever to know whether investment into policy programmes is making a difference.
Evaluation draws on different social research methods to systematically investigate the design, implementation, and effectiveness of an intervention. It produces evidence that can improve accountability and learning within policy-making processes and inform future decision making.
But is the full potential of evaluation being realised?
We recently published an article in Critical Social Policy that demonstrated how the Troubled Families programme evaluation remained within narrow boundaries that limited what could be learnt. The evaluation followed conventional procedures by investigating exclusively whether the intervention had achieved what it set out to do. But this ‘establishment oriented’ approach assumes the policy was designed perfectly. Many of us recognise that the Troubled Families programme was far from perfect (despite what initial assessments and central government announcements claimed).
The Troubled Families programme set out to ‘turn around’ the lives of the 120,000 most ‘troubled families’ (characterised by crime, anti-social behaviour, truancy or school exclusion and ‘worklessness’) through a ‘family intervention’ approach which advocates a ‘persistent, assertive and challenging’ way of working with family members to change their behaviours but, crucially, not their material circumstances.
Austerity, mentioned in just two of the first phase evaluation reports, was not considered as an issue that might have had an impact on families. Discussions of poor and precarious labour market conditions, cuts to local authority services for children, young people and families, and inadequate housing provision are almost completely neglected in the reports. Individualised criteria such as ‘worklessness’, school exclusion and crime or anti-social behaviour were considered but structural factors such as class, gender, and racial inequalities were not; nor were other issues such as labour market conditions, housing quality and supply, household income or welfare reforms.
The first phase outcome of ‘moving off out-of-work benefits and into continuous employment’ did not take into account the type of work that was secured, or the possible impact that low-paid, poor quality or insecure work may have on family life. Similarly, the desire by the government to see school attendance improve did not necessarily seek to improve the school experience for the child, and there is no evidence of concern for any learning that did or did not take place once attendance had been registered. Such issues were outside of the frames in which the policy had been constructed and so were considered to be outside of the boundaries of investigation for the evaluation. The scope for learning was therefore restricted to within the frames that had been set by national government when the programme had been designed.
So what can be done?
While large-scale evaluations of national programmes will still take place, local councils can add to these with independent, small-scale evaluations. These can adopt a more open approach that examines what happened locally and contextualises the programme within the particular social problems that residents experience.
A more contextualised form of evaluation can broaden the scope of learning beyond the original framing of a policy intervention. Collaboration between councils and participants who have experienced an intervention through locally situated programme evaluations can explore people’s everyday problems and the tangible improvements that have been delivered by an intervention (and what caused these outcomes to happen). Such an approach with ‘troubled families’ would recognise the knowledge, expertise and capabilities of many families in dealing with the vicissitudes of everyday life, including those caused by the government claiming to be helping them via the Troubled Families programme. Analysis of the data can be used to identify shared everyday problems and narratives of impact that show improvements to people’s everyday lives. By building up a picture about what approaches have been successful, an incremental approach to improving policy and culture within local institutions can be developed – based on the ethos of learning by doing.
In addition to learning about what works, we can also develop our knowledge of what problems have been left unresolved. Of course, no single policy intervention can possibly solve every dimension of our complex social problems. This does not necessarily mean a failure of the intervention, but rather that there are broader issues that need to be addressed. Knowing about these issues produces useful evidence of unmet social needs in the local community, which the council might be able to address directly or draw on to inform future strategies.
Evaluation is often seen as a bolt-on to the policy-making process. But re-purposing evaluation to learn more about social problems and the effectiveness of tailored local solutions can create evidence and ideas that can be used to improve future social policy.
Daniel Silver is an ESRC Postdoctoral Fellow in the Institute of Local Government Studies (INLOGOV) at the University of Birmingham. He previously taught politics and research methods at the University of Manchester. His research focuses on evaluation, social policy, research methods, and radical politics.
Stephen Crossley is a Senior Lecturer in Social Policy at Northumbria University. He completed his PhD at Durham University in August 2017, examining the UK government’s Troubled Families Programme. His most recent publications are Troublemakers: the construction of ‘troubled families’ as a social problem (Policy Press, 2018) and ‘The UK Government’s Troubled Families Programme: Delivering Social Justice?’, which appeared in the journal Social Inclusion.