The success of Police and Crime Commissioners in drug harm reduction in the West Midlands

Megan Jones

Police and Crime Commissioners (PCCs) were introduced in 2012 under the Police Reform and Social Responsibility Act 2011, representing one of the most radical changes to governance structures in England and Wales. PCCs are directly elected by the public, and their statutory functions require them to (1) hold their police force to account on behalf of the public, (2) set the policing priorities for the area through a police and crime plan and (3) appoint a Chief Constable.

They replaced the former Police Authority committee-style structure, which was criticised for its lack of visibility and accountability to the public and communities it was designed to serve. The emergence of PCCs was therefore a result of the failings of the previous governance mechanism and a political shift of focus from national to local governance.

In my research, I look at the impact that PCC governance has on drug policy, using the West Midlands police force area as a case study. Drug policy, and specifically a harm reduction approach*, is just one area of policing priorities that was used to explore the statutory role of the PCC and, more broadly, how the role can be interpreted or used beyond its statutory framework.

In August 2019, the latest drug-related death figures were announced by the ONS. They are now the highest on record, with 4,359 deaths in England and Wales recorded in 2018 (ONS, 2019). In the West Midlands, there is a drug-related death every 3 days (West Midlands PCC 2017a). Over 50% of serious and acquisitive crime is committed to fund an addiction, and the cost to society is over £1.4 billion each year (West Midlands PCC 2017a). This topic often divides opinion and can be politicised. However, these debates rarely do anything to prevent the considerable damage caused by drugs to often very vulnerable people and to wider society. The official national response is focused on enforcement of the law, criminalising individuals for drug possession.

By interviewing a number of key actors within the drug policy arena and leaders in policing, both within forces and PCCs' offices, I looked at how the PCC structure can enable a change in policy. This was combined with a desk-based study of publicly available documents on the drug policy approach taken in the West Midlands. Four key themes were explored: the statutory role of the PCC; the individual PCC; governance and public opinion; and the approach taken.

My results showed that the PCC role, and this new form of civic leadership, benefits from convening power: the ability to draw together key partners from across the public sector, those with lived experience, and the third sector. This is an informal mechanism of governance strengthened by public mandate. PCCs are able to prioritise by setting their strategic priorities in the police and crime plan. For example, in the West Midlands, the approach to drug policy has been narrowed to focus on high-harm drugs (heroin and crack cocaine), thus ensuring 'deliverability'. This means the limited resources available are more narrowly focused and can have a greater impact. The statutory role of a PCC allows work at pace and decisions to be made quickly, which means new approaches and innovations can be trialled and piloted.

Of course, there are limitations. PCCs vary across the country and often do not speak with one voice, particularly on drug policy. There are also huge advantages of a good working relationship between Chief Constable and PCC, demonstrated through the joint approach in the West Midlands.

Figure 1: Drivers to drug policy, derived from the findings

My research allowed me to conclude that three key drivers are optimal for delivery of a PCC-led harm reduction approach: using the levers at their disposal, such as the statutory functions, and informal governance mechanisms, such as convening power, which together provide the strategic and political cover required to deliver at pace.

PCCs are unique in the landscape of UK governance and, whilst weaknesses in the mechanisms designed to rein in their power could be viewed as worrying, in the drug policy space this has allowed for the development of a new approach in the West Midlands, one that is evidence-based and has the ability to save lives, reduce costs and reduce crime.

The potential of PCCs is arguably still being explored, but their ability to test new approaches and work effectively with partners will be essential in other areas of policy, such as the response to serious violence and the potential for an increasing role across the criminal justice system.

PCCs have a number of levers at their disposal, and are able to use informal and formal governance mechanisms to foster real change at the local level and drive forward evidence-based policy.

Megan Jones is the Head of Policy for the West Midlands Police and Crime Commissioner and is a former INLOGOV student, studying on the MSc Public Management programme. She tweets at @MegJ4289.

 

What Do We Miss out on When Policy Evaluation Ignores Broader Social Problems?

Daniel Silver and Stephen Crossley

With local government funding being stretched to breaking point over the last decade, it is more important than ever to know whether investment into policy programmes is making a difference.

Evaluation draws on different social research methods to systematically investigate the design, implementation, and effectiveness of an intervention. Evaluation can produce evidence that can be used to improve accountability and learning within policy-making processes to inform future decision making.

But is the full potential of evaluation being realised?

We recently published an article in Critical Social Policy that demonstrated how the Troubled Families programme evaluation remained within narrow boundaries that limited what could be learnt. The evaluation followed conventional procedures by investigating exclusively whether the intervention had achieved what it set out to do. But this 'establishment-oriented' approach assumes the policy was designed perfectly. Many of us recognise that the Troubled Families programme was far from perfect (despite what initial assessments and central government announcements claimed).

The Troubled Families programme set out to ‘turn around’ the lives of the 120,000 most ‘troubled families’ (characterised by crime, anti-social behaviour, truancy or school exclusion and ‘worklessness’) through a ‘family intervention’ approach which advocates a ‘persistent, assertive and challenging’ way of working with family members to change their behaviours but, crucially, not their material circumstances.

Austerity, mentioned in just two of the first phase evaluation reports, was not considered as an issue that might have had an impact on families. Discussions of poor and precarious labour market conditions, cuts to local authority services for children, young people and families, and inadequate housing provision are almost completely neglected in the reports. Individualised criteria such as ‘worklessness’, school exclusion and crime or anti-social behaviour were considered but structural factors such as class, gender, and racial inequalities were not; nor were other issues such as labour market conditions, housing quality and supply, household income or welfare reforms.

The first phase outcome of ‘moving off out-of-work benefits and into continuous employment’ did not take into account the type of work that was secured, or the possible impact that low-paid, poor quality or insecure work may have on family life. Similarly, the desire by the government to see school attendance improve did not necessarily seek to improve the school experience for the child, and there is no evidence of concern for any learning that did or did not take place once attendance had been registered. Such issues were outside of the frames in which the policy had been constructed and so were considered to be outside of the boundaries of investigation for the evaluation. The scope for learning was therefore restricted to within the frames that had been set by national government when the programme had been designed.

So what can be done?

While large-scale evaluations of national programmes will still take place, local councils can add to these with independent, small-scale evaluations. These can adopt a more open approach that examines what happened locally and contextualises the programme within the particular social problems that residents experience.

A more contextualised form of evaluation can broaden the scope of learning beyond the original framing of a policy intervention. Collaboration between councils and participants who have experienced an intervention through locally situated programme evaluations can explore people’s everyday problems and the tangible improvements that have been delivered by an intervention (and what caused these outcomes to happen). Such an approach with ‘troubled families’ would recognise the knowledge, expertise and capabilities of many families in dealing with the vicissitudes of everyday life, including those caused by the government claiming to be helping them via the Troubled Families programme. Analysis of the data can be used to identify shared everyday problems and narratives of impact that show improvements to people’s everyday lives. By building up a picture about what approaches have been successful, an incremental approach to improving policy and culture within local institutions can be developed – based on the ethos of learning by doing.

In addition to learning about what works, we can also develop our knowledge of what problems have been left unresolved. Of course, no single policy intervention can possibly solve every dimension of our complex social problems. This does not necessarily mean a failure of the intervention, but rather that there are broader issues that need to be addressed. Knowing about these issues can produce useful evidence about social needs in the local community that are not being met, which the Council might address directly or use to inform future strategies.

Evaluation is often seen as a bolt-on to the policy-making process. But re-purposing evaluation to learn more about social problems and the effectiveness of tailored local solutions can create evidence and ideas that can be used to improve future social policy.

 

Daniel Silver is an ESRC Postdoctoral Fellow in the Institute of Local Government Studies (INLOGOV) at the University of Birmingham. He previously taught politics and research methods at the University of Manchester. His research focuses on evaluation, social policy, research methods, and radical politics.

Stephen Crossley is a Senior Lecturer in Social Policy at Northumbria University. He completed his PhD at Durham University examining the UK government’s Troubled Families Programme in August 2017. His most recent publications are Troublemakers: the construction of ‘troubled families’ as a social problem (Policy Press, 2018) and ‘The UK Government’s Troubled Families Programme: Delivering Social Justice?’, which appeared in the journal Social Inclusion.

Troubled Families: How Experimenting Could Teach Us “What Works?”. Part 2.

Jason Lowther

In my last blog I looked at how designing a more experimental approach into this and future programmes could yield lots of insight into what works where. This week I would like to extend this thinking to look at how “theory-based” approaches could provide further intelligence, and then draw some overall conclusions from this series.

As well as rigorous analysis of quantitative impacts, theory-based approaches to evaluation can help to test ideas of how innovative interventions work in practice – the “how?” question as well as the “what works?” question[1].

For example the Troubled Families practitioners might have developed theories such as:

  • Having consistent engagement with a key worker, and working through a clear action plan, will increase families’ perception of their own agency and progress.
  • Having regular and close engagement with a key worker will enable informal supervision of parenting and reduce risk around child safeguarding concerns.
  • Having support from a key worker and, where needed, specialist health and employment support, will increase entry to employment for people currently on incapacity benefit.

Interestingly, each of these appears to be supported by the evaluation evidence, which showed much higher levels of families feeling in control; lower levels of children in need or in care; reduced time on benefits; and increased entry into employment (compared to controls).

  • Having consistent engagement with a key worker, and working through a clear action plan, will increase families’ perception of their own agency and progress. The evaluation showed almost 70% of TFP families said they felt “in control” and their worst problems were behind them, much higher than in the “control” group of families.
  • Having regular and close engagement with a key worker will enable informal supervision of parenting and reduce risk around child safeguarding concerns. The TFP “final synthesis report”[2] shows the number of children taken into care was a third lower for the TFP families than for the “control” group (p.64).
  • Having support from a key worker and, where needed, specialist health and employment support, will increase entry to employment for people currently on incapacity benefit. Again, the final synthesis report suggests that the weeks on incapacity benefit for TFP families were 8% lower than for the controls, and entry into employment 7% higher (pp.56-57).

 

The TFP evaluation probably rightly writes off these last few examples of apparent positive impacts because there is no consistent pattern of positive results across all those tested. Given that the evaluation didn’t attempt to test particular theoretical hypotheses like this, it is possible that they have occurred through natural random variation. But if a much more targeted search for evidence built on theory delivered these results consistently, that would be worth celebrating.
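The "natural random variation" point can be illustrated with a small simulation. When many outcomes are tested without pre-specified hypotheses, some will cross a significance threshold purely by chance, even when the true effect is zero everywhere. The numbers below are invented for illustration and are not drawn from the TFP evaluation:

```python
import random

random.seed(42)

# Illustrative simulation (hypothetical numbers, not TFP data): when a
# programme has no true effect at all, comparing treated and control
# groups on many outcome measures will still throw up some
# "significant" differences by chance alone.
N_FAMILIES = 200   # families per group (invented)
N_OUTCOMES = 40    # number of outcome measures tested (invented)

def looks_significant():
    """Test one outcome where the true treatment effect is zero."""
    treated = [random.gauss(0, 1) for _ in range(N_FAMILIES)]
    control = [random.gauss(0, 1) for _ in range(N_FAMILIES)]
    diff = sum(treated) / N_FAMILIES - sum(control) / N_FAMILIES
    # Standard error of a difference in means; 1.96*se is the usual
    # 5% two-sided threshold.
    se = (2 / N_FAMILIES) ** 0.5
    return abs(diff) > 1.96 * se

false_positives = sum(looks_significant() for _ in range(N_OUTCOMES))
print(f"'Significant' effects found with zero true effect: "
      f"{false_positives}/{N_OUTCOMES}")
```

With a 5% threshold we should expect roughly two spurious "effects" out of forty tests, which is why an isolated positive result among many comparisons carries little weight.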

Next week I will conclude the series by reflecting on the four key lessons we can learn from the TFP evaluation experience.

[1] See Sanderson, I. (2002) ‘Evaluation, policy learning and evidence‐based policy making’, Public administration, 80(1), pp. 1-22. And White, M. (1999) ‘Evaluating the effectiveness of welfare-to-work: learning from cross-national evidence’, Evaluating Welfare to Work. Report, 67.

[2] https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/560499/Troubled_Families_Evaluation_Synthesis_Report.pdf

 


 

Jason Lowther is a senior fellow at INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies.  Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther

Why co-produce? Accounting for diversity in citizens’ motivations to engage in neighbourhood watch schemes.

Carola van Eijk, Trui Steen & Bram Verschuere

In local communities, citizens are increasingly involved in the production of public services. To list just a few examples: citizens take care of relatives or friends through informal care, parents help organise activities at their children’s school, and neighbours help promote safety and liveability in their community. In all these instances, citizens complement the activities performed by public professionals such as nurses, teachers, neighbourhood workers and police officers; this makes it a ‘co-productive’ effort. But why do people want to co-produce? In our recently published article in Local Government Studies we try to answer that question by focusing on one specific case: local community safety. One of the main conclusions is that citizens have different incentives to co-produce public services, and local governments need to be aware of that.

Alongside the international trend to emphasise citizens’ responsibilities in the delivery of public services, there are also concerns about the potential of co-production to increase the quality and democratisation of public service delivery. One important question pertains to who is included and excluded in co-production processes. Not all stakeholders might be willing, or feel capable, to participate. So, acknowledging the added value of citizens’ efforts and the societal need to increase the potential benefits of co-production, it is important to better understand the motivations and incentives of citizens to co-produce public services. A better insight can help local governments not only to keep those citizens who are already involved motivated, but also to find the right incentives to inspire others to get involved. Yet, despite this relevance, the current co-production literature has no clear-cut answer, as the issue of citizens’ motivations to co-produce only recently came to the fore.

In our study, we focus on citizens’ engagement in co-production activities in the domain of safety, more specifically through neighbourhood watch schemes in the Netherlands and Belgium. Members of neighbourhood watch teams keep an eye on their neighbourhood. Often they gather information via citizen patrols on the streets, and report their findings to the police and municipal organisation. Their signalling includes issues such as streetlamps not functioning, paving stones being broken, or antisocial behaviour. Furthermore, neighbourhood watch teams often draw attention to windows being left open or back doors not being closed. Through the neighbourhood watch scheme, the local government and police thus collaborate to increase social control, stimulate prevention, and increase safety.

Citizens’ opinions on co-producing these activities and their motivations for getting engaged in neighbourhood watch schemes were investigated using a ‘Q-methodology’ approach. This research method is especially suitable for studying how people think about a certain topic. We asked a total of 64 respondents (30 in Belgium and 34 in the Netherlands) to rank a set of statements from total disagreement to full agreement.
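The core of Q-methodology is to correlate respondents' whole rankings with one another and look for clusters of similar viewpoints. The sketch below illustrates that logic with invented rankings for four hypothetical respondents; real Q studies use far more statements and a formal factor analysis rather than pairwise correlations:

```python
import math

# Hypothetical data: each respondent ranks 6 statements from -2
# (total disagreement) to +2 (full agreement). These rankings and
# respondent labels are invented for illustration only.
rankings = {
    "R1": [2, 1, 0, -1, -2, 0],   # safety-focused pattern
    "R2": [2, 2, -1, -1, -2, 0],  # similar to R1
    "R3": [-2, -1, 0, 2, 1, 0],   # partnership-focused pattern
    "R4": [-2, -2, 1, 2, 1, 0],   # similar to R3
}

def pearson(xs, ys):
    """Pearson correlation between two rankings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Respondents whose whole rankings correlate strongly share a
# viewpoint; in a full Q study these clusters become the "factors".
names = list(rankings)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} vs {b}: r = {pearson(rankings[a], rankings[b]):+.2f}")
```

Here R1 and R2 correlate strongly and positively (a shared viewpoint), while R1 and R3 correlate strongly and negatively (opposed viewpoints), which is the kind of structure the factor analysis then formalises into groups of co-producers.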

Based on the rankings, we were able to identify different groups of co-producers. Each of the groups shares a specific viewpoint on their engagement, emphasising for example more community-focused motivations or a professional attitude in the collaboration with both police and local government. To illustrate, in Belgium one of the groups identified are ‘protective rationalists’, who join the neighbourhood watch team to increase their own personal safety or the safety of their neighbourhood, but who also weigh the rewards (in terms of safety) against the costs (in terms of time and effort). In the Netherlands, to give another example, among the groups identified we found ‘normative partners’. These co-producers are convinced their investments help protect the common interest and that simply walking around the neighbourhood brings many results. Furthermore, they highly value partnerships with the police: they do not want to take over the police’s tasks but argue they cannot function without the police also being involved.

The study shows that citizens involved in the co-production of safety through neighbourhood watch schemes cannot be treated as a homogeneous group. Rather, different groups of co-producers can be identified, each reflecting a different combination of motivations and ideas. As such, the question addressed above concerning why people co-produce cannot be answered simply: the engagement of citizens in co-production seems to be triggered by a combination of factors. Local governments that expect citizens to do part of the job previously done by professional organisations need to be aware of the incentives people have to co-produce public services. Their policies and communication strategies need to allow for diversity. For example, people who co-produce from a normative perspective might feel misunderstood when compulsory elements are introduced, while people who perceive their engagement as a professional task might be motivated by the provision of extensive feedback.

 

Carola van Eijk holds a position as a PhD candidate at the Institute of Public Administration at Leiden University. In her research, she focuses on the interaction of professionals and citizens in processes of co-production. In addition, her research interests include citizen participation at the local level, and crises (particularly blame games).

 


Trui Steen is Professor of ‘Public Governance and Coproduction of Public Services’ at the KU Leuven Public Governance Institute. She is interested in the governance of public tasks and the role of public service professionals therein. Her research covers diverse topics, such as professionalism, public service motivation, professional-citizen co-production of public services, central-local government relations, and public sector innovation.

 


Bram Verschuere is Associate Professor at Ghent University. His research interests include public policy, public administration, coproduction, civil society and welfare policy. 

Troubled Families: How experimenting could teach us “what works?”

Jason Lowther

 

In this blog on 3rd Feb, I explored the formal Troubled Families Programme (TFP) evaluation and looked at the lessons we can learn in terms of the timing and data quality issues involved. This week I want to consider how designing a more experimental approach into this and future programmes could yield lots more insight into what works where.

The idea of an “experimental” approach to policy and practice echoes enlightenment period thinkers such as Francis Bacon (1561—1626), who promoted an empirical system built on careful experimentation. Donald Campbell’s ideas[1] on ‘reforms as experiments’ argued that social reforms should be routinely linked to rigorous experimental evaluation. ‘Social engineering’ built on ‘social experiments’ became a popular concept in the USA and social science.

Social experiments in America included work in response to a concern that providing even modest income subsidies to the poor would reduce motivation to find and keep jobs. Rossi and Lyall (1976) showed that work disincentives were in fact less than anticipated. In the field of prison rehabilitation, Langley et al. (1972) tested whether group therapy reduced re-offending rates. The results suggested that this approach to group therapy did not affect re-offending rates.

Unfortunately, meaningful experiments proved more difficult than anticipated to deliver in the field, and even robust experiments were often ignored by policy makers. As a result, until recently this experimental approach fell out of favour in social policy, except in the field of medicine.

The term ‘evidence-based medicine’ appears to have been first used by investigators from a US university in the 1990s where it was defined as ‘a systemic approach to analyze published research as the basis of clinical decision making.’ The evidence-based medicine movement considered experiments – specifically, collections of Randomised Controlled Trials (RCTs) subject to systematic reviews – as the “gold standard” of proof of whether interventions “work” or not.

Randomised controlled trials are sometimes not easy to undertake in social policy environments, but they can be done and they can provide surprising results. Starting in 2007, Birmingham City Council evaluated three evidence-based programmes in regular children’s services systems using RCTs[2]. We found that one programme (Incredible Years) yielded reductions in negative parenting behaviours, reductions in child behaviour problems, and improvements in children’s relationships; whereas another (Triple-P) had no significant effects.

What was interesting for practitioners was that the children in all the trials had experienced improvements in their conduct. Only by use of a formal “control” group were we able to see that these “untreated” children were also improving, and so we were able to separate out the additional impacts of the intervention programmes.
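The logic of separating background improvement from programme impact can be sketched in a few lines. The figures here are invented for illustration only (they are not the Birmingham trial data): both arms improve over time, so a simple before/after comparison overstates the effect, and only the treated-minus-control contrast recovers the programme's added impact.

```python
import random

random.seed(1)

# Illustrative sketch with hypothetical numbers: conduct-problem
# scores fall (improve) in BOTH arms over time, so only the contrast
# between arms isolates what the programme itself added.
N = 300                     # children per arm (invented)
NATURAL_IMPROVEMENT = 3.0   # change occurring regardless of treatment
PROGRAMME_EFFECT = 2.0      # additional change caused by the programme

def post_score(baseline, treated):
    """Follow-up score: natural improvement, plus programme effect
    if treated, plus measurement noise."""
    change = NATURAL_IMPROVEMENT + (PROGRAMME_EFFECT if treated else 0)
    return baseline - change + random.gauss(0, 1)

baselines = [random.gauss(20, 3) for _ in range(2 * N)]
treated = baselines[:N]     # in a real RCT, assignment is randomised
control = baselines[N:]

treated_change = sum(b - post_score(b, True) for b in treated) / N
control_change = sum(b - post_score(b, False) for b in control) / N

print(f"Improvement in treated arm: {treated_change:.1f}")
print(f"Improvement in control arm: {control_change:.1f}")
print(f"Effect attributable to programme: "
      f"{treated_change - control_change:.1f}")
```

A before/after view would credit the programme with the whole treated-arm improvement; the control arm reveals that a large share of it would have happened anyway.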

There are a number of lessons from this and other past experience that can help practitioners wanting to deliver robust trials to test whether innovations are working (or not). The most important point is: build the evaluation testing into the design of the programme. The Troubled Families Programme could have built an RCT into the rollout of the programme – for example, selecting first year cases randomly from the list of families who were identified as eligible for the scheme. Or introducing the scheme in some council areas a year earlier than others. Or councils could have done this themselves by gradually rolling out the approach in different area teams.

Sandra Nutley and Peter Homel’s review[3] of the New Labour government’s Crime Reduction Programme stressed the importance of balancing the tensions between fidelity to “evidence based” policy (to maximise the chance of impact) and innovation (to ensure relevance to the local context), short-term wins and long-term learning, and evaluator independence (to ensure rigour) versus engagement (to help delivery).

In my final blog on the TFP next time, I explore the potential for “theory-based” approaches to evaluation helping us to understand “what works and why?” in this and other policy areas.

Campbell, D. T. and Russo, M. J. (1999) Social experimentation. Sage Publications, Inc.

Langley, M., Kassebaum, G., Ward, D. A. and Wilner, D. M. 1972. Prison Treatment and Parole Survival. JSTOR.

Nutley, S. and Homel, P. (2006) ‘Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme’, Evidence & Policy: A Journal of Research, Debate and Practice, 2(1), pp. 5-26.

Rossi, P. H. and Lyall, K. (1976) ‘Reforming public welfare’, New York: Russell Sage.

Sanderson, I. (2002) ‘Evaluation, policy learning and evidence‐based policy making’, Public administration, 80(1), pp. 1-22.

White, M. (1999) ‘Evaluating the effectiveness of welfare-to-work: learning from cross-national evidence’, Evaluating Welfare to Work. Report, 67.

[1] Campbell, D. T. and Russo, M. J. (1999) Social experimentation. Sage Publications, Inc.

[2] Little, Michael, et al. “The impact of three evidence-based programmes delivered in public systems in Birmingham, UK.” International Journal of Conflict and Violence (IJCV) 6.2 (2012): 260-272.

[3] Nutley, S. and Homel, P. (2006) ‘Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme’, Evidence & Policy: A Journal of Research, Debate and Practice, 2(1), pp. 5-26.

 

 


 

Jason Lowther is a senior fellow at INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies.  Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther

Troubled Families: Two Secrets to Great Evaluations

Jason Lowther

In this blog last week I explored the (rather flimsy) evidence base available to the developers of the original Troubled Families Programme (TFP) and the potential for “theory of change” approaches to provide useful insights in developing future policy. This week I return to the formal TFP evaluation and look at the lessons we can learn in terms of the timing and data quality issues involved.

The first secret of great evaluation: timing

The experience of the last Labour Government is very instructive here. New Labour appeared as strong advocates of evidence-based policy making and, in particular, were committed to extensive use of policy evaluation. Evaluated pilots were completed across a wide range of policy areas, including welfare, early years, employment, health and crime. This included summative evaluations of their outcomes and formative evaluations whilst the pilots were underway, attempting to answer the questions “Does this work?” and “How does this work best?”

Ian Sanderson provided a useful overview of Labour’s experience at the end of its first five years in power[i]. He found that one of the critical issues in producing great evaluations (as for great comedy), is timing. Particularly for complex and deep-rooted issues (such as troubled families), it can take a significant time for even the best programmes to have an impact. We now know the (median) time a family remained on the TFP programme was around 15 months.

It can also take significant time for projects to reach the “steady state” conditions, which they would work under when fully implemented. Testing whether there are significant effects can require long-term, in-depth analysis. This doesn’t fit well with the agenda of politicians or managers looking to learn quickly and sometimes to prove a point.

Nutley and Homel’s review[ii] of lessons from New Labour’s Crime Reduction Programme found that “projects generally ran for 12 months and they were just starting to get into their stride when the projects and their evaluations came to an end” (p.19).

In the case of the Troubled Families Programme, the programme started in April 2012, and most of the national data used in the evaluation relates to the 2013-14 financial year. Data on exclusions covered only those starting in the first three months of the programme, whereas data on offending, benefits and employment covered families starting in the first ten months of roll-out.

We know that 70% of the families were still part-way through their engagement with the TFP when their “outcomes” were counted, and around half were still engaged six months later.

It’s now accepted by DCLG that the formal evaluation was run too quickly and for too short a time. There just wasn’t time to demonstrate significant impacts on many outcomes.

The second secret: data quality

Another major element of effective evaluation is the availability of reliable data. Here the independent evaluation had an incredibly difficult job to do. The progress made is impressive – for the first time matching a wide range of national data sets, local intelligence and qualitative surveys. But at the end of the day, the data underpinning the evaluation is in places poor.

The evaluation couldn’t access data on anti-social behaviour from national data sets, as this is not recorded by the police. This is unfortunate given that the strongest evidence on the effectiveness of TFP-like (Family Intervention) programmes in the past concerns reducing crime and anti-social behaviour[iii].

A substantial portion of the data came from the 152 local authorities. This data was more up to date (October 2015), although only 56 of the councils provided data – which enabled matching for around one quarter of TFP families. The evaluation report acknowledges that this data was “of variable quality”. For example, the spread of academy schools without a duty to co-operate meant there were significant gaps in school attendance data. This will be a serious problem for future evaluations unless academies’ engagement with the wider public service system is assured.

In summary, the TFP evaluation covered too short a period and, despite heroic efforts by DCLG and the evaluators, was based on data of very variable quality and completeness.

Next time we will explore the “impact” evaluation in more detail – looking at how designing a more experimental approach into this and future programmes could yield more robust evaluation conclusions of what works where.

[i] Sanderson, Ian. “Evaluation, policy learning and evidence‐based policy making.” Public administration 80.1 (2002): 1-22.

[ii] Nutley, Sandra, and Peter Homel. “Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme.” Evidence & Policy: A Journal of Research, Debate and Practice 2.1 (2006): 5-26.

[iii] DfE, “Monitoring and evaluation of family intervention services and projects between February 2007 and March 2011”, 2011, available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/184031/DFE-RR174.pdf

 

 


 

Jason Lowther is a senior fellow at INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies.  Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther