What Do We Miss Out On When Policy Evaluation Ignores Broader Social Problems?

Daniel Silver and Stephen Crossley

With local government funding stretched to breaking point over the last decade, it is more important than ever to know whether investment in policy programmes is making a difference.

Evaluation draws on different social research methods to systematically investigate the design, implementation, and effectiveness of an intervention. Evaluation can produce evidence that can be used to improve accountability and learning within policy-making processes to inform future decision making.

But is the full potential of evaluation being realised?

We recently published an article in Critical Social Policy that demonstrated how the Troubled Families programme evaluation remained within narrow boundaries that limited what could be learnt. The evaluation followed conventional procedures by investigating exclusively whether the intervention had achieved what it set out to do. But this ‘establishment-oriented’ approach assumes the policy was designed perfectly. Many of us recognise that the Troubled Families programme was far from perfect (despite what initial assessments and central government announcements claimed).

The Troubled Families programme set out to ‘turn around’ the lives of the 120,000 most ‘troubled families’ (characterised by crime, anti-social behaviour, truancy or school exclusion and ‘worklessness’) through a ‘family intervention’ approach which advocates a ‘persistent, assertive and challenging’ way of working with family members to change their behaviours but, crucially, not their material circumstances.

Austerity, mentioned in just two of the first phase evaluation reports, was not considered as an issue that might have had an impact on families. Discussions of poor and precarious labour market conditions, cuts to local authority services for children, young people and families, and inadequate housing provision are almost completely neglected in the reports. Individualised criteria such as ‘worklessness’, school exclusion and crime or anti-social behaviour were considered but structural factors such as class, gender, and racial inequalities were not; nor were other issues such as labour market conditions, housing quality and supply, household income or welfare reforms.

The first phase outcome of ‘moving off out-of-work benefits and into continuous employment’ did not take into account the type of work that was secured, or the possible impact that low-paid, poor quality or insecure work may have on family life. Similarly, the desire by the government to see school attendance improve did not necessarily seek to improve the school experience for the child, and there is no evidence of concern for any learning that did or did not take place once attendance had been registered. Such issues were outside of the frames in which the policy had been constructed and so were considered to be outside of the boundaries of investigation for the evaluation. The scope for learning was therefore restricted to within the frames that had been set by national government when the programme had been designed.

So what can be done?

While large-scale evaluations of national programmes will still take place, local councils can add to these with independent, small-scale evaluations. These can adopt a more open approach that examines what happened locally and contextualises the programme within the particular social problems that residents experience.

A more contextualised form of evaluation can broaden the scope of learning beyond the original framing of a policy intervention. Collaboration between councils and participants who have experienced an intervention through locally situated programme evaluations can explore people’s everyday problems and the tangible improvements that have been delivered by an intervention (and what caused these outcomes to happen). Such an approach with ‘troubled families’ would recognise the knowledge, expertise and capabilities of many families in dealing with the vicissitudes of everyday life, including those caused by the government claiming to be helping them via the Troubled Families programme. Analysis of the data can be used to identify shared everyday problems and narratives of impact that show improvements to people’s everyday lives. By building up a picture about what approaches have been successful, an incremental approach to improving policy and culture within local institutions can be developed – based on the ethos of learning by doing.

In addition to learning about what works, we can also develop our knowledge of what problems have been left unresolved. Of course, no single policy intervention can possibly solve every dimension of our complex social problems. This does not necessarily mean the intervention has failed, but rather that there are broader issues that need to be addressed. Knowing about these issues can produce useful evidence about social needs in the local community that are not being met, which the council might then be able to address or use to inform future strategies.

Evaluation is often seen as a bolt-on to the policy-making process. But re-purposing evaluation to learn more about social problems and the effectiveness of tailored local solutions can create evidence and ideas that can be used to improve future social policy.

 

Daniel Silver is an ESRC Postdoctoral Fellow in the Institute of Local Government Studies (INLOGOV) at the University of Birmingham. He previously taught politics and research methods at the University of Manchester. His research focuses on evaluation, social policy, research methods, and radical politics.

Stephen Crossley is a Senior Lecturer in Social Policy at Northumbria University. He completed his PhD at Durham University, examining the UK government’s Troubled Families Programme, in August 2017. His most recent publications are Troublemakers: the construction of ‘troubled families’ as a social problem (Policy Press, 2018) and ‘The UK Government’s Troubled Families Programme: Delivering Social Justice?’, which appeared in the journal Social Inclusion.

England’s over-centralisation – Part 2: It IS instinctive

Chris Game

There was much in Jessica Studdert’s recent blog to agree with and applaud, but one sentence particularly struck me – the one opening her fourth paragraph: “The centralised response isn’t just structural, at times it has felt deeply instinctive.”

So, equally instinctively, I did what even an erstwhile academic does during a lockdown – some heavyweight research, naturally. Like re-watching and content-analysing the first 69 Government Covid-19 daily press conferences – one of those crisis features that, like the Thursday evening clapping, lives on because no one knows quite how to stop it.

I exaggerated with the ‘heavyweight’ bit, but I did count – sorry, totalise – the press conferences. So, first question: Which minister, Johnson excepted, was the first to front one?

No, not Foreign Secretary Dominic Raab. As First Secretary of State, he stood in while Johnson was hospitalised, but was actually the eighth minister to feature. Surely, then, Health and Social Care Secretary Matt Hancock? Nope, though he and his permanent pink tie have so far clocked up more appearances than Johnson himself.

Struggling? Chancellor of the Exchequer, Rishi Sunak? Hardly Robert Jenrick, Secretary of State for Housing, Communities and Local Government – for all the considerations touched on in Studdert’s blog. Surely not Home Secretary Priti Patel, despite being apparently the only woman minister capable of reading from a lectern.

They’ve done four, five and three respectively, but the shooting star we are looking for is Environment, FOOD and Rural Affairs Secretary, George Eustice. How short are our memories. His brief includes the so-called food supply chain, and this was late March – panic-buying, pasta-hoarding weekend.

Now the seriously tricky question. How many elections to serve as a plain local government councillor – not London Mayor – have all 12 featured ministers won between them? Maybe not a huge number? One!

One four-year term of elected local government experience between the lot of them. It was served by then 24-year-old Gavin Williamson, now Education Secretary, giving English primary schools his considered judgement on when they should reopen.

It’s easy to mock – really easy – but there are archive pictures of Williamson doing his thing as North Yorkshire County Council’s ‘Champion of Youth Issues’. Making him, I believe, alone among that TV-trusted Cabinet dozen to have even minimal first-hand insight into how local government operates in the policy field for which he is responsible.

The others can tell you lots, variously, about banking (Hancock), hedge fund management (Sunak), litigation (Raab), corporate finance (Alok Sharma), corporate law (Jenrick), public relations (Eustice, Patel), journalism (Johnson, Gove), marketing (Grant Shapps), Conservative Central Office (Patel, Oliver Dowden).

But actually experiencing what they presumably aspired to do – campaigning, meeting constituents, getting elected, representing people, learning about the provision and funding of public services, the whole government and public administration thing – for some reason never grabbed them or even struck them as career-relevant.

Which today means they know virtually nothing at first-hand about some of the vital stuff local governments do, often without even their own publics being aware of it: emergency contingency planning, air quality monitoring, water testing, pest control, health and safety at work inspection – oh yes, and communicable disease investigation and outbreak control.

Time for a brief digression on the changing meaning of the word ‘nuisance’. It was one of my mother’s favourite words, applied frequently to my sister and me, and to almost any (usually minor) upset to her daily routine. Mask-wearing and disinfecting supermarket trolley handles would be a ‘nuisance’, not the wretched pandemic itself.

Yet the etymology of ‘nuisance’ is the Latin ‘nocere’ – to harm – and its original 15th Century meaning could quite conceivably be applied to Covid-19 and its capacity to inflict serious and even fatal harm.

The mid-19th Century predecessor of today’s Director of Public Health in Birmingham, Dr Justin Varney, would therefore have boasted the title of Nuisance Inspector – his nuisance agenda including factory air pollution, smallpox and cholera outbreaks, and sanitation, with the first generation of public urinals.

Nuisance Inspectors could not by themselves transform towns and cities, but they played a huge part. As do their modern-day successors – Public or Environmental Health Inspectors. Those successors, however – the ones that have survived the past decade of local government funding and employment cuts – could and should, as Studdert noted, have been doing even more.

The Chartered Institute of Environmental Health reckons there are some 5,000 Environmental Health Officers (EHOs) working in UK local councils. All have job descriptions including responsibilities like “investigating outbreaks of infectious diseases and preventing them spreading further.”

That’s what they do – test, track, trace and treat people with anything from salmonella to sexually transmitted diseases – in areas, moreover, with which they are totally familiar and have networks of contacts. ‘Shoe-leather epidemiology’ is the technical term – seriously.

So presumably, as in other countries – South Korea, Singapore, Germany, Ireland – these EHOs will have been reassigned from other work and spent their time contact tracing?

Rhetorical question – we all know the answers. From early March, contrary to World Health Organisation guidelines, our Government’s big ideas were to ‘delay’ the spread of Covid-19, then develop vital (now less vital) smartphone apps.

This enabled the consequently limited scale of contact-tracing to be undertaken centrally by staff newly recruited by Public Health England – the executive agency of Matt Hancock’s Health and Social Care Department created in the ill-conceived NHS upheaval in 2012.

Insufficient, inexperienced staff doing a job crying out for the skills, knowledge and contacts of council EHOs, who instead were monitoring social distancing rules in pubs, clubs and restaurants.

There are almost always costs in ‘keeping it central’, but, as we have seen, for so many ministers, it must be instinctive. It’s all they and most of their civil servants know at first hand. The alternative would be funding and at least sharing data with pesky local authorities, thereby losing some of their precious control.

Finally, last weekend, all other options exhausted, the Government did allocate a ring-fenced £300 million to English councils to play a leading role, starting immediately, in tracking and tracing people suspected of being at risk of Covid-19.

This time, tragically, the cost of blinkered, prejudiced, self-protective government was paid in lives.

Covid-19: Is Government Really “Led By The Science”?

Jason Lowther, Director of the Institute of Local Government Studies, University of Birmingham (not representing the views of the university)

In the midst of the EU Referendum campaign, Michael Gove famously commented that “people in this country have had enough of experts”. No longer. Fast forward four years and Gove (and every other minister) is sharing press conferences with professors and claiming to be “led by the science”. But with the UK topping the European tables of Covid-19 deaths, what does that actually mean? And is “science” the only type of knowledge we need to make life-saving policy in the Covid crisis?

Making policy is difficult and complex – particularly in a crisis, and especially one caused by a virus that didn’t exist in humans six months ago but has the potential to kill millions. The information we have is incomplete, inaccurate and difficult to interpret. Politicians (and experts) are under huge pressure, recognising that their inevitable mistakes may well cost lives. My research has shown that even in more modestly stressful and novel contexts, policy makers don’t just use experts to answer questions; their public claims to be listening to experts are also useful politically. Christina Boswell identified the ‘legitimising’ and ‘substantiating’ functions of experts. Listening (or at least appearing to listen) to experts can give the public confidence that politicians’ decisions are well founded, and lend authority to their policy positions (such as when to re-open golf courses).

Covid-19 is a global issue requiring local responses, so the spatial aspects of using experts and evidence are particularly important. Governments need to learn quickly from experiences in countries at later stages in the epidemic, including countries where historic relations may be difficult. Central governments also have to learn quickly what is practical and working (or not) on the ground in the specific contexts of local areas, avoiding the vain attempt to manage every aspect from Whitehall. My research shows that the careful use of evidence can help here, developing shared understandings which can overcome historic blocks and enable effective collaboration. But in the Covid-19 crisis, central government too often seems to be opting out of building these shared understandings. Experience in other countries has sometimes been ignored. Vital knowledge from local areas has not been sought or used. Instead of transparently sharing the evidence as decisions are developed, evidence has been hidden or heavily redacted, breaking a basic principle of good science and sacrificing the opportunity to build shared understandings open to critical challenge.

What counts as “evidence” anyway? Different professional and organisational cultures value different kinds of knowledge as important and reliable. In my work with combined authorities, I found that bringing mental health practitioners into policy discussions had opened up a wide range of new sources of knowledge, such as the voices of people with lived experience. And, carefully managed, this wider range of types of knowledge can lead to better decisions. The Government’s network of scientific advisory committees, once we finally were told who was involved, seems to have missed some important voices. The editor of the Lancet, Richard Horton, argued that expertise around public health and intensive medical care should have been in the room. I would also argue that having practical knowledge from local councils and emergency planners could help avoid recommendations that prove impossible to implement effectively. As Kieron Flanagan has noted recently, we learned in the inquiry into the BSE crisis that esteemed experts can still make recommendations which are impossible to implement in practice.

Making a successful recovery will require government quickly to learn lessons from (their own and others’) mistakes so far. Expert advice and relevant data should be published, quickly and in full – treating the public and partners as adults. Key experts for this phase (including knowledge of local public health, economic development, schools, city centres and transport) should be brought into the discussions as equal partners – not simply the “hired help” to do a list of tasks ministers have dreamt up in a Whitehall basement. Then we can have plans that are well founded, widely supported, and have the best chance of practical success. Our future, in fact our very lives, depend on it.

This post was originally published in The Municipal Journal.

 


Jason Lowther is the Director of INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies.  Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther

Troubled Families: How Experimenting Could Teach Us “What Works?”. Part 2.

Jason Lowther

In my last blog I looked at how designing a more experimental approach into this and future programmes could yield lots of insight into what works where. This week I would like to extend this thinking to look at how “theory-based” approaches could provide further intelligence, and then draw some overall conclusions from this series.

As well as rigorous analysis of quantitative impacts, theory-based approaches to evaluation can help to test ideas of how innovative interventions work in practice – the “how?” question as well as the “what works?” question[1].

For example the Troubled Families practitioners might have developed theories such as:

  • Having consistent engagement with a key worker, and working through a clear action plan, will increase families’ perception of their own agency and progress.
  • Having regular and close engagement with a key worker will enable informal supervision of parenting and reduce risk around child safeguarding concerns.
  • Having support from a key worker and, where needed, specialist health and employment support, will increase entry to employment for people currently on incapacity benefit.

Interestingly, each of these appears to be supported by the evaluation evidence, which showed much higher levels of families feeling in control; lower levels of children in need or in care; and reduced time on benefits and increased entry into employment (compared to controls).

  • Having consistent engagement with a key worker, and working through a clear action plan, will increase families’ perception of their own agency and progress. The evaluation showed almost 70% of TFP families said they felt “in control” and their worst problems were behind them, much higher than in the “control” group of families.
  • Having regular and close engagement with a key worker will enable informal supervision of parenting and reduce risk around child safeguarding concerns. The TFP “final synthesis report”[2] shows the number of children taken into care was a third lower for the TFP families than for the “control” group (p.64).
  • Having support from a key worker and, where needed, specialist health and employment support, will increase entry to employment for people currently on incapacity benefit. Again, the final synthesis report suggests that the number of weeks on incapacity benefit for TFP families was 8% lower than for the controls, and entry into employment 7% higher (pp.56-57).

 

The TFP evaluation probably rightly writes off these last few examples of apparent positive impacts because there is no consistent pattern of positive results across all those tested. Given that the evaluation didn’t attempt to test particular theoretical hypotheses like this, it is possible that they have occurred through natural random variation. But if a much more targeted search for evidence built on theory delivered these results consistently, that would be worth celebrating.
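
To see why such caution is warranted, here is a minimal, purely illustrative sketch (simulated data only, nothing from the TFP evaluation) of how testing many outcomes with no pre-specified hypotheses can throw up a few apparently ‘significant’ positives through random variation alone:

```python
# Illustrative only: simulated data, not the TFP evaluation data.
# Twenty hypothetical outcomes are tested when there is NO true effect;
# a few can still come out "significant" at the 5% level by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_families, n_outcomes = 500, 20   # hypothetical sample size and outcome count

false_positives = 0
for _ in range(n_outcomes):
    # Treatment and comparison groups drawn from the SAME distribution.
    treated = rng.normal(loc=0.0, scale=1.0, size=n_families)
    control = rng.normal(loc=0.0, scale=1.0, size=n_families)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:
        false_positives += 1

print(f"'Significant' outcomes with no real effect: {false_positives} of {n_outcomes}")
# With 20 tests at the 5% level we expect roughly one spurious hit, which is
# why a consistent pattern across pre-specified hypotheses matters so much.
```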

Next week I will conclude the series by reflecting on the four key lessons we can learn from the TFP evaluation experience.

[1] See Sanderson, I. (2002) ‘Evaluation, policy learning and evidence‐based policy making’, Public administration, 80(1), pp. 1-22. And White, M. (1999) ‘Evaluating the effectiveness of welfare-to-work: learning from cross-national evidence’, Evaluating Welfare to Work. Report, 67.

[2] https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/560499/Troubled_Families_Evaluation_Synthesis_Report.pdf

 


 

Jason Lowther is a senior fellow at INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies.  Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther

Troubled Families: How experimenting could teach us “what works?”

Jason Lowther

 

In this blog on 3rd Feb, I explored the formal Troubled Families Programme (TFP) evaluation and looked at the lessons we can learn in terms of the timing and data quality issues involved. This week I want to consider how designing a more experimental approach into this and future programmes could yield lots more insight into what works where.

The idea of an “experimental” approach to policy and practice echoes Enlightenment-period thinkers such as Francis Bacon (1561–1626), who promoted an empirical system built on careful experimentation. Donald Campbell’s ideas[1] on ‘reforms as experiments’ argued that social reforms should be routinely linked to rigorous experimental evaluation. ‘Social engineering’ built on ‘social experiments’ became a popular concept in the USA and in social science.

Social experiments in America included work in response to a concern that providing even modest income subsidies to the poor would reduce motivation to find and keep jobs. Rossi and Lyall (1976) showed that work disincentives were in fact less than anticipated. In the field of prison rehabilitation, Langley et al. (1972) tested whether group therapy reduced re-offending rates. The results suggested that this approach to group therapy did not affect re-offending rates.

Unfortunately, meaningful experiments proved more difficult than anticipated to deliver in the field, and even robust experiments were often ignored by policy makers. As a result, until recently this experimental approach fell out of favour in social policy, except in the field of medicine.

The term ‘evidence-based medicine’ appears to have been first used by investigators from a US university in the 1990s where it was defined as ‘a systemic approach to analyze published research as the basis of clinical decision making.’ The evidence-based medicine movement considered experiments – specifically, collections of Randomised Controlled Trials (RCTs) subject to systematic reviews – as the “gold standard” of proof of whether interventions “work” or not.

Randomised controlled trials are sometimes not easy to undertake in social policy environments, but they can be done and they can provide surprising results. Starting in 2007, Birmingham City Council evaluated three evidence-based programmes in regular children’s services systems using RCTs[2]. We found that one programme (Incredible Years) yielded reductions in negative parenting behaviours among parents, reductions in child behaviour problems, and improvements in children’s relationships; whereas another (Triple-P) had no significant effects.

What was interesting for practitioners was that the children in all the trials had experienced improvements in their conduct. Only by use of a formal “control” group were we able to see that these “untreated” children were also improving, and so we were able to separate out the additional impacts of the intervention programmes.
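
As a rough illustration of that logic (with invented numbers, not the Birmingham trial data), the sketch below shows both arms improving over time; only the comparison against the randomised control group isolates the programme’s additional effect:

```python
# Illustrative sketch with invented numbers, not the Birmingham trial data.
# Children's conduct scores improve in BOTH arms; only the randomised control
# group lets us separate the programme's additional effect from that trend.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of families per arm

baseline_treated = rng.normal(20, 4, n)   # higher score = more conduct problems
baseline_control = rng.normal(20, 4, n)

natural_improvement = 3.0   # improvement that happens even without the programme
programme_effect = 2.0      # assumed additional effect of the intervention

follow_up_treated = baseline_treated - natural_improvement - programme_effect + rng.normal(0, 2, n)
follow_up_control = baseline_control - natural_improvement + rng.normal(0, 2, n)

change_treated = (follow_up_treated - baseline_treated).mean()
change_control = (follow_up_control - baseline_control).mean()

print(f"Mean change, treated: {change_treated:.1f}")   # negative: children improve
print(f"Mean change, control: {change_control:.1f}")   # negative too: 'untreated' children also improve
print(f"Estimated additional programme effect: {change_treated - change_control:.1f}")
```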

There are a number of lessons from this and other past experience that can help practitioners wanting to deliver robust trials to test whether innovations are working (or not). The most important point is: build the evaluation testing into the design of the programme. The Troubled Families Programme could have built an RCT into the rollout of the programme – for example, selecting first year cases randomly from the list of families who were identified as eligible for the scheme. Or introducing the scheme in some council areas a year earlier than others. Or councils could have done this themselves by gradually rolling out the approach in different area teams.
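
By way of illustration only (the family identifiers and cohort sizes here are hypothetical, not a description of any real allocation process), randomly assigning eligible families to a first-year or second-year start is straightforward, and the later cohort then serves as a temporary comparison group:

```python
# Hypothetical sketch of building a trial into a phased rollout: eligible
# families (identifiers invented here) are randomly assigned to start the
# programme in year one or year two, so the later cohort acts as a
# temporary comparison group for the first year.
import random

random.seed(2012)

eligible_families = [f"family_{i:04d}" for i in range(1, 1001)]  # illustrative IDs only
random.shuffle(eligible_families)

midpoint = len(eligible_families) // 2
year_one_cohort = eligible_families[:midpoint]   # start the programme immediately
year_two_cohort = eligible_families[midpoint:]   # start a year later

print(len(year_one_cohort), "families start in year one")
print(len(year_two_cohort), "families wait a year and act as the comparison group")
```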

Sandra Nutley and Peter Homel’s review[3] of the New Labour government’s Crime Reduction Programme stressed the importance of balancing the tensions between fidelity to “evidence based” policy (to maximise the chance of impact) and innovation (to ensure relevance to the local context), short-term wins and long-term learning, and evaluator independence (to ensure rigour) versus engagement (to help delivery).

In my final blog on the TFP next time, I explore the potential for “theory-based” approaches to evaluation to help us understand “what works and why?” in this and other policy areas.

Campbell, D. T. and Russo, M. J. (1999) Social experimentation. Sage Publications, Inc.

Langley, M., Kassebaum, G., Ward, D. A. and Wilner, D. M. (1972) Prison Treatment and Parole Survival.

Nutley, S. and Homel, P. (2006) ‘Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme’, Evidence & Policy: A Journal of Research, Debate and Practice, 2(1), pp. 5-26.

Rossi, P. H. and Lyall, K. (1976) Reforming Public Welfare. New York: Russell Sage.

Sanderson, I. (2002) ‘Evaluation, policy learning and evidence‐based policy making’, Public administration, 80(1), pp. 1-22.

White, M. (1999) ‘Evaluating the effectiveness of welfare-to-work: learning from cross-national evidence’, Evaluating Welfare to Work. Report, 67.

[1] Campbell, D. T. and Russo, M. J. (1999) Social experimentation. Sage Publications, Inc.

[2] Little, M. et al. (2012) ‘The impact of three evidence-based programmes delivered in public systems in Birmingham, UK’, International Journal of Conflict and Violence, 6(2), pp. 260-272.

[3] Nutley, S. and Homel, P. (2006) ‘Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme’, Evidence & Policy: A Journal of Research, Debate and Practice, 2(1), pp. 5-26.

 

 


 

Jason Lowther is a senior fellow at INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies.  Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther

Troubled Families: Two Secrets to Great Evaluations

Jason Lowther

In this blog last week I explored the (rather flimsy) evidence base available to the developers of the original Troubled Families Programme (TFP) and the potential for “theory of change” approaches to provide useful insights in developing future policy. This week I return to the formal TFP evaluation and look at the lessons we can learn in terms of the timing and data quality issues involved.

The first secret of great evaluation: timing

The experience of the last Labour Government is very instructive here. New Labour presented themselves as strong advocates of evidence-based policy making, and in particular were committed to extensive use of policy evaluation. Evaluated pilots were completed across a wide range of areas, including welfare, early years, employment, health and crime. This included summative evaluations of their outcomes and formative evaluations whilst the pilots were underway, attempting to answer the questions “Does this work?” and “How does this work best?”

Ian Sanderson provided a useful overview of Labour’s experience at the end of its first five years in power[i]. He found that one of the critical issues in producing great evaluations (as for great comedy) is timing. Particularly for complex and deep-rooted issues (such as troubled families), it can take a significant time for even the best programmes to have an impact. We now know the (median) time a family remained on the TFP programme was around 15 months.

It can also take significant time for projects to reach the “steady state” conditions under which they would work when fully implemented. Testing whether there are significant effects can require long-term, in-depth analysis. This doesn’t fit well with the agenda of politicians or managers looking to learn quickly and sometimes to prove a point.

Nutley and Homel’s review[ii] of lessons from New Labour’s Crime Reduction Programme found that “projects generally ran for 12 months and they were just starting to get into their stride when the projects and their evaluations came to an end” (p.19).

In the case of the Troubled Families Programme, the programme started in April 2012, and most of the national data used in the evaluation relates to the 2013-14 financial year. Data on exclusions covered only those starting in the first three months of the programme, whereas data on offending, benefits and employment covered families starting in the first ten months of roll-out.

We know that 70% of the families were still part-way through their engagement with the TFP when their “outcomes” were counted, and around half were still engaged six months later.

It’s now accepted by DCLG that the formal evaluation was run too quickly and for too short a time. There just wasn’t time to demonstrate significant impacts on many outcomes.

The second secret: data quality

Another major element of effective evaluation is the availability of reliable data. Here the independent evaluation had an incredibly difficult job to do. The progress the evaluators have made is impressive – for the first time matching a wide range of national data sets, local intelligence and qualitative surveys. But ultimately the data underpinning the evaluation is in places poor.

The evaluation couldn’t access data on anti-social behaviour from national data sets, as this is not recorded by the police. This is unfortunate given that the strongest evidence on the effectiveness of TFP-like (Family Intervention) programmes in the past concerns reducing crime and anti-social behaviour[iii].

A chunk of data came from the 152 local authorities. This data was more up to date (October 2015), although only 56 of the councils provided data – which enabled matching to around one quarter of TFP families. The evaluation report acknowledges that this data was “of variable quality”. For example, the spread of academy schools without a duty to co-operate meant there were significant gaps in school attendance data. This will be a serious problem for future evaluations unless academies’ engagement with the wider public service system is assured.

In summary, the TFP evaluation covered too short a period and, despite heroic efforts by DCLG and the evaluators, was based on data of very variable quality and completeness.

Next time we will explore the “impact” evaluation in more detail – looking at how designing a more experimental approach into this and future programmes could yield more robust evaluation conclusions of what works where.

[i] Sanderson, I. (2002) ‘Evaluation, policy learning and evidence-based policy making’, Public Administration, 80(1), pp. 1-22.

[ii] Nutley, S. and Homel, P. (2006) ‘Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme’, Evidence & Policy: A Journal of Research, Debate and Practice, 2(1), pp. 5-26.

[iii] DfE, “Monitoring and evaluation of family intervention services and projects between February 2007 and March 2011”, 2011, available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/184031/DFE-RR174.pdf

 

 


 

Jason Lowther is a senior fellow at INLOGOV. His research focuses on public service reform and the use of “evidence” by public agencies.  Previously he led Birmingham City Council’s corporate strategy function, worked for the Audit Commission as national value for money lead, for HSBC in credit and risk management, and for the Metropolitan Police as an internal management consultant. He tweets as @jasonlowther