Earlier this year a report by the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA) entitled ‘Managing Demand: Building Future Public Services’ sought to bring ‘clarity to the concept’ of demand management. Yet, on reading the paper it is clear that it launches into several variations of well-rehearsed, warm-worded but wrong-headed themes, from active citizenry and ‘co-production’ to ‘nudge’ and ‘networks’. That is a pity, as the term itself is relatively uncontested, with the focus being on internal cost-efficiency.
Of course, a home truth is that demand management in practice is often made up of three components. First, it serves as a euphemism for restricting access to services. Second, it is used to encourage the growth of self-service approaches to public provision, including service automation, colloquially known as digital-by-design. Third, it is used to pursue behavioural change that reduces demand by ‘changing citizen expectations’.
The authors state that demand management is ‘an area of emerging thinking’ and not core work in many local authorities. Yet they neglect to acknowledge that in practice demand management leads directly to rationing via eligibility criteria and the like, reducing the need for direct provision and pushing self-management back onto the service-user. They also conflate two distinct approaches: studying customer demand and designing services against it, and behaviour change. Behavioural insight strategies assume interventions need to act on the individual rather than on the service. Consequently, approaches such as ‘Nudge’ seek to solve the wrong problem.
In service organisations, demand can only ever be understood as person or customer derived. For example, you cannot, as the paper asserts, successfully address ‘failure demand’ – a form of system waste – by prodding people to change their behaviour in response to services that are not designed to work from their perspective. How local public service organisations understand and respond to this demand is what counts. And on that, the paper is both incoherent and lacking in detail.
Benchmarking has some merit in demonstrating ‘big-picture’ cost comparisons. But it is poor at understanding context, value and total costs. Moreover, it cannot provide the means to understand and improve performance. With benchmarking, it is important to know what you are comparing, and whether what you are comparing is actually comparable at a finer level of analysis, e.g. specific disease conditions. Otherwise it is like comparing apples and pears.
Take the argument around Dr Foster’s standardised measurement of mortality. Professor Nick Black was asked to look into mortality rates after a July 2013 review by Sir Bruce Keogh found failings in care at 14 hospitals with the highest death rates. Professor Black argues that the two principal mortality measures are not an accurate indicator of poor care and should be ignored. Death statistics, as indicators of hospital care quality, are at best a very ‘weak signal’ and at worst a ‘distraction’.
One of these is the Hospital Standardised Mortality Ratio (HSMR). HSMR compares the expected rate of death with actual rates. It is a very dubious overall measure of mortality: the numbers are influenced by the way hospitals collect data, and changes in coding can alter mortality statistics. HSMRs also fail to take account of factors such as the availability of hospice care – where hospice care is scarce, more people are likely to die in hospital, which is no reflection on the quality of acute care.
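To see why coding and case-mix drive the headline number, it helps to sketch the basic arithmetic behind a standardised mortality ratio: observed deaths divided by case-mix-adjusted expected deaths, scaled to 100. The figures and patient groups below are purely illustrative assumptions, not real Dr Foster data or methodology.

```python
# Illustrative sketch of a standardised mortality ratio calculation.
# Each group is (admissions, observed_deaths, expected_death_rate),
# where the expected rate comes from case-mix adjustment. All numbers
# here are invented for illustration.

def smr(groups):
    """Return observed/expected deaths scaled to 100."""
    observed = sum(deaths for _, deaths, _ in groups)
    expected = sum(admissions * rate for admissions, _, rate in groups)
    return 100 * observed / expected

# Hypothetical hospital with two diagnosis groups of differing baseline risk.
example = [
    (1000, 30, 0.025),  # lower-risk group: 25 expected deaths
    (200, 28, 0.125),   # higher-risk group: 25 expected deaths
]
print(round(smr(example)))  # 58 observed vs 50 expected -> ratio of 116
```

Because the expected figure depends entirely on how patients are coded into risk groups, reclassifying admissions between groups shifts the denominator and therefore the ratio, without any change in actual care quality.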
This can result in data being misunderstood; Royal Bolton Hospital, following a Dr Foster benchmarked audit, is a case in point. It is worth a listen: BBC Radio 4 File on 4 Programme