The LGA are Right – In the Team Benchmarking Stakes, Residents’ Panels Don’t Even Medal

Credit where it’s due – in this case to the Local Government Association’s recent decision that data gathered from local residents’ panels about their views of and satisfaction with their councils cannot be used for benchmarking purposes. The ruling could have come sooner and will be criticised by some of the LGA’s own member authorities, but it is surely right. The reverse decision would have damaged the interests of local government in general, and would ultimately have done those critics’ own authorities no favours either.

The decision and the story behind it are largely technical – about the different methodologies used to measure residents’ perceptions of their councils’ performance – which is perhaps why the ruling has received less attention than it deserves as one of the more important current developments in our local government world. It stems from the Coalition’s move away from central targets and assessments – a move generally welcomed, but one that coincides with local authorities having to operate on ever tighter budgets.

The biennial Best Value User Satisfaction/Place surveys, undertaken separately but in a coordinated way by all English authorities between 2000 and 2008, coupled with the accompanying Ipsos MORI analyses, provided better information on the user’s perspective on council services than was available in Comprehensive Performance Assessments, and also allowed robust comparisons of perception-based performance indicators (PIs) across authorities.

Then in 2010 all this infrastructure was swept away. Now, supplanting CPA’s successor, Comprehensive Area Assessment, we have sector-led improvement and peer challenge, and, seeking to fill the gap left by the scrapping of the Place Survey, there is Local Government Inform (LG Inform) – a new online LGA service intended to give local authorities and eventually the public easy access to resident satisfaction data about councils and their areas, and to enable comparisons with other councils.

The comparison part is crucial. A resuscitated BV-style centrally driven survey is out, on both political and financial grounds. But some standardisation of methodologies and questions, as formerly ensured by the DCLG, is clearly necessary. The LGA and London Councils therefore commissioned Ipsos MORI to undertake a review and develop a set of questions – on residents’ satisfaction and their views of crime and community cohesion – which, as with the BVPI questions, councils could slot into their own local surveys, thereby producing a sufficiently consistent and methodologically robust subset of data for comparative and benchmarking purposes.

The review was a useful document, explaining and illustrating the key issues of data collection methods, sampling and question design with welcome clarity. Its core was naturally the presentation of the set of 12 recommended questions and advice on their usage and analysis, but the preceding technical discussion also contained plenty of useful dos and don’ts.

The questions were divided into three tiers: a core benchmarking set, which should be a priority for all councils, worded identically and ideally opening the survey; a second tier, also recommended for benchmarking and a likely priority for most councils; and a third tier of more detailed questions, probably of interest to only some councils. The three core and three second-tier questions are:

  • Overall, how satisfied or dissatisfied are you with your local area as a place to live?
  • Overall, how satisfied or dissatisfied are you with the way [name of council] runs things?
  • To what extent do you agree or disagree that [name of council] provides value for money?
  • Overall, how well informed do you think [name of council] keeps residents about the services and benefits it provides?
  • How strongly do you feel you belong to your local area?
  • How safe or unsafe do you feel when outside in your local area after dark? / How safe or unsafe do you feel when outside in your local area during the day?

As every survey researcher will tell you, though, who and how you ask are at least as important as what. Different modes of data collection will produce different responses, even to identically worded questions. For example, satisfaction ratings tend to be higher in face-to-face interviews than in self-completed postal questionnaires, and higher still in volunteer telephone interviews. Asking about satisfaction with the council before a question about value for money will produce higher ratings than the reverse order. Which means that, for benchmarking purposes, comparisons should be limited to results generated by the same methods, or at least methods in which the respondent’s experience is essentially the same.

Statistically, the gold-standard survey design is that used by the early Best Value surveys: face-to-face interviews with random samples of preferably at least 1,000 respondents, drawn from a robust sampling frame – nowadays the Royal Mail’s Postcode Address File – in which every household or person in the target population has a known and equal probability of selection, so that results can be generalised to the total population with calculable degrees of confidence.
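Those ‘calculable degrees of confidence’ can be illustrated with a back-of-the-envelope calculation. The short sketch below is mine rather than Ipsos MORI’s, and it assumes the simplest case – a simple random sample, a roughly 50/50 split of opinion and the standard normal approximation – but it shows why 1,000 or so respondents is the conventional target: at that size the margin of error on a percentage falls to around plus or minus three points at the 95 per cent confidence level.

  # Illustrative only: 95% margin of error for a proportion estimated
  # from a simple random sample, using the normal approximation.
  import math

  def margin_of_error(n, p=0.5, z=1.96):
      # p = assumed proportion (0.5 is the worst case); z = 1.96 for 95% confidence
      return z * math.sqrt(p * (1 - p) / n)

  for n in (500, 1000, 2000):
      print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} percentage points")
  # n = 500: +/- 4.4 percentage points
  # n = 1000: +/- 3.1 percentage points
  # n = 2000: +/- 2.2 percentage points

In practice, clustered and quota designs widen or complicate these intervals, which is part of what separates the silver and bronze standards discussed below from the gold.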

But, as with Olympic medals, silver and bronze standards are also very acceptable, and, under specified conditions, smaller sample sizes, rigorously drawn quota samples (with face-to-face or telephone interviews), and self-completed postal or online questionnaires (with random samples) may all pass muster for benchmarking purposes.

The fundamental condition, stripped of its details, has already been noted: for inter-authority comparisons and benchmarking, compare only ‘like-for-like’ data, collected by the same method – which means that LG Inform will require detailed reporting of sampling and data gathering methods when authorities come to upload their data.

Which brings us to residents’ and users’ panels – on which Ipsos MORI’s professional advice is unambiguous and emphatic: NO! In themselves, they’re absolutely tickety-boo. They’re an easy and efficient consultative tool for, say, testing prospective policy initiatives, or tracking attitude changes over time. But, even if panel members are recruited to be proportionately representative of the council’s population, they will be volunteers rather than a statistically selected sample, and their responses should not therefore be compared with data systematically collected from another council’s genuinely random survey.

It’s the same point that the Scottish Government was attempting to make last week over same-sex marriage. Responses to a consultation exercise, no matter how numerous or passionate, are not the same as the results of statistically representative sample surveys: not worse, or better, simply different.

Understanding residents’ or users’ views and how they compare with those in similar or neighbouring council areas is a vital part of local authority performance management. But cutting corners in order to make such comparisons at precisely the time when the sector is endeavouring to demonstrate its ability to manage and improve itself would be a seriously false economy – maybe not as daft as drug-cheating in the quest for a medal, but still a really, really bad idea.

Chris is a Visiting Lecturer at INLOGOV, interested in the politics of local government; local elections, electoral reform and other electoral behaviour; party politics; political leadership and management; member-officer relations; central-local relations; use of consumer and opinion research in local government; the modernisation agenda and the implementation of executive local government.