Achieving great service is straightforward if unconventional: give customers/patients/service users what they need. However, convention dictates that adopting this approach leads to expensive ‘gold-plating’ of services (quality service but at higher cost).
Instead, leaders and organisations follow convention and manage their activities by protecting budgets, imposing access restrictions via criteria or eligibility rules, introducing service level agreements and focusing on efficiency through reducing transactional or unit costs.
Yet the real paradox is that an explicit focus on managing activities increases the very thing organisations seek to reduce – cost. Convention decrees that the answers to problems are already known and that pre-prescribed solutions can be delivered through meticulous plans and reports. As a consequence, these change approaches fail to deliver in practice.
Take conventional healthcare commissioning in the NHS: a person has a health and/or social care need; their need is assessed and then sooner or (most often) later a service is first commissioned, then provided by different professionals. Service level agreement met, project milestone ‘green-lighted’ and ticked.
What happens is that, because services are not designed around the need(s) of the person/patient/service user, they re-present to the service in the vain hope that their need(s) may be better met. Professionally, the response to this problem is to repeat the process of assess, commission and provide. The outcome experienced by the person/patient/service user is that they continue to re-present, and costs rise. Why does this happen? It is because the mind-set is misguided.
Conventional business change needs to change
Conventional change or improvement relies on a wrong-headed ‘back-to-front’ perspective. What is meant by this phrase is a rear-guard focus on removing cost by reducing activity and expecting patients to change their behaviour. It results in an obsession with activity volumes and ‘bottom-line’ costs. The one problem with this approach is that it doesn’t work. Back-to-front thinking always leads to distortion of organisational performance and higher costs.
The mechanics of conventional change follow a typical path. Unvalidated hunches, opinions and/or data consisting of worthless aggregated activity, arbitrary benchmarking and/or cost volumes are used to identify problems.
Understanding the patient or service user (as opposed to non-user public) perspective in this process is rarely, if ever, sought. Agreeing appropriate governance arrangements typically looms large at this point and takes up not inconsiderable internal discussion, effort and time.
Much time is consumed completing a litany of project-management-induced paper-chasing reports such as project initiation documents or PIDs. Once signed off, this document helps formulate a project plan which is established to solve the preconceived problem. A business case is then written which outlines time, costs and predetermined outcomes.
These outcomes are then ‘monitored’ through office-based completion of reporting documents such as highlight reports full of activity and cost volumes, with ‘traffic light’ systems – green for good, amber for somewhere in between and red for bad. None of this involves spending time in the work and empirically understanding ‘why’ things are the way they are.
Improvement activity is often relegated to time-limited projects and people who sit outside of the actual work. Disproportionate time and effort is then spent on conducting public consultations (‘the blind leading the blind’) which replace the opportunity to generate empirical knowledge of what is actually happening and causing problems at the sharp-end, in the work.
Performance metrics derived at the business case stage tend only to measure whether a project is completed ‘on time’ and ‘to cost’. Little regard is paid to how much operational improvement is achieved in resolving the real problem(s).
‘Off-the-shelf’, standardised solutions are mandated that usually involve automation or greater use of technology (sometimes referred to as ‘channel shift’ or ‘digital-by-default’); sharing or outsourcing services; restructuring to establish new ‘target operating models’; rationing buildings and service provision; charging and trading services and/or reducing staff numbers. Here abstract cost-benefit equations are the order of the day.
Unless consultants are engaged, ‘delivery’, ‘execution’ or ‘implementation’ is then solely contracted out to frontline staff who are left to try and make the problem fit the predetermined standardised solution(s). Benefits realisation plans are written remotely and focus on the completion of activity tasks not performance improvement.
Consequently, ‘change’ seeks to treat symptoms rather than address root causes. The predictable results of this way of approaching improvement are failing projects, higher costs and poorer service as experienced by patients.
Tomorrow I will outline the more intelligent way to conceive of how to change for better and undertake meaningful performance improvement.
Benchmarking has some merits in demonstrating ‘big-picture’ cost comparisons. But it is poor at understanding context, value and total costs. Moreover, it can’t provide the means to understand and improve performance. With benchmarking, it’s important to know what you are comparing; and if what you are comparing is actually comparable at a finer level of analysis e.g. disease conditions. Otherwise it’s like contrasting apples and pears.
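The apples-and-pears risk can be made concrete. The sketch below uses entirely invented figures for two hypothetical hospitals: compared in aggregate, hospital A looks far more expensive per patient, yet at the finer level of disease condition it is cheaper for every condition – the aggregate ranking is an artefact of case mix, not performance.

```python
# Hypothetical illustration of why aggregate benchmarking can mislead.
# All figures are invented; the point is the case-mix reversal, not the data.

# For each hospital: condition -> (number of patients, total cost in £)
hospital_a = {"diabetes": (100, 200_000), "cardiac": (900, 4_500_000)}
hospital_b = {"diabetes": (900, 2_250_000), "cardiac": (100, 600_000)}

def aggregate_cost_per_patient(data):
    """Overall cost per patient across all conditions combined."""
    patients = sum(n for n, _ in data.values())
    cost = sum(c for _, c in data.values())
    return cost / patients

# Aggregate comparison: A appears much more expensive...
print(aggregate_cost_per_patient(hospital_a))  # 4700.0
print(aggregate_cost_per_patient(hospital_b))  # 2850.0

# ...yet condition by condition, A is cheaper every time,
# because A treats mostly costly cardiac cases and B mostly diabetes.
for condition in hospital_a:
    n_a, c_a = hospital_a[condition]
    n_b, c_b = hospital_b[condition]
    print(condition, c_a / n_a, c_b / n_b)
    # diabetes: 2000.0 vs 2500.0; cardiac: 5000.0 vs 6000.0
```

This is the familiar Simpson's-paradox pattern: unless the benchmark adjusts for what is actually being compared, the ‘big-picture’ number can point in exactly the wrong direction.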
Take the argument around using Dr Foster’s standardised measuring of mortality. Professor Nick Black was asked to look into mortality rates after a July 2013 review by Sir Bruce Keogh found failings in care at 14 hospitals with the highest death rates. Professor Black argues that the two principal mortality measures are not an accurate indicator of poor care and should be ignored. Death statistics as indicators of hospital care quality are a very ‘weak signal’ at best, or a ‘distraction’.
One of these is the Hospital Standardised Mortality Ratio (HSMR). HSMR compares the actual number of deaths with the number expected from a statistical risk model. It is a very dubious overall measurement of mortality: the numbers are influenced by the way hospitals collect data, and changes in coding alone can alter the statistics. HSMRs also take no account of factors such as the availability of hospice care – less local hospice provision is likely to mean more people die in hospital, which reflects nothing about the quality of acute care.
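To see why coding changes can move the number, here is a minimal sketch of how an HSMR-style ratio works, assuming a simplified risk model. The per-admission death probabilities are invented for illustration; in practice they come from a model based on diagnosis coding, age, admission type and so on.

```python
# A minimal sketch of an HSMR-style calculation (illustrative only).
# HSMR = 100 * observed deaths / expected deaths, where 'expected'
# is the sum of modelled death probabilities for each admission.

def hsmr(observed_deaths, expected_death_probs):
    """Ratio of observed to expected deaths, scaled so 100 = 'as expected'."""
    expected = sum(expected_death_probs)
    return 100 * observed_deaths / expected

# Modelled probability of death for five admissions (invented values).
probs = [0.02, 0.10, 0.05, 0.30, 0.03]  # expected deaths = 0.50

print(hsmr(1, probs))  # one actual death against 0.5 expected -> 200.0
```

The sensitivity to coding follows directly: if richer coding (say, recording more comorbidities) raises the modelled probabilities, the expected count goes up and the ratio falls, with no change whatsoever in the care delivered.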
This can lead to data being misunderstood: the Royal Bolton Hospital, following a Dr Foster benchmarked audit, is a case in point. The episode is worth a listen: BBC Radio 4 File on 4 Programme