
Lop-sided logic: A&E and the 4-hour waiting time target


Broadcast and newspaper headlines in the past couple of months have all been about pressures on A&E services. As ever with whirlwind media storms, they tend to blow out before reasoned conclusions can be drawn.

Which leaves the question: what really does bedevil A&E? Is the problem facing the front end of acute hospitals down to underfunding of emergency medicine; the ‘downgrading’ or closure of A&E units; fragmented and unresponsive primary care services; NHS 111 (more on this in a future blog); the payment or ‘tariff’ system; unprecedented patient demand; blockages, including static or declining bed capacity and delays in transferring patients out of hospital; or all of the above?

The direction of NHS England is to ‘centralise’ or ‘standardise’ emergency provision around fewer locations, downgrading others into Minor Injury Units (MIU) or Urgent Care Centres (UCC). To critics, this move creates a ‘two-tier’ system. The logic is one of economies of scale and follows where earlier efforts in stroke and heart care have led: namely, centralise medical expertise and services in order to improve outcomes.

Two things are for sure. We shouldn’t expect ‘healthcare professionals’ – code, in this case, for paramedics, doctors and nurses – to work ‘harder’; they already are. Nor should we condemn people, in whatever numbers, for using A&E services ‘inappropriately’. Clifford Mann, the President of the Royal College of Emergency Medicine, was absolutely right when he said last month: “I don’t think we should blame people for going to the emergency department when we (the system) told them to go there. It’s absurd.”

It is true that, over time, we have both incrementally and intentionally designed a reactive, fire-fighting, hospital-focused, medical health system, with care very much an afterthought. Hospitals declaring they can’t cope by issuing ‘major incident’ alerts is the predictable consequence of some pretty foolhardy thinking. One such example is the 4-hour waiting time target in A&E. Ludicrously, this activity measure is regarded by many as a good indicator of both A&E performance and quality. It is nothing of the sort.

The 4-hour waiting time target is a nationally derived, arbitrary indicator. The NHS in England (or, more accurately, individual NHS Trusts) has missed its four-hour A&E waiting time target, with performance dropping to its lowest level for a decade. Figures show that from October to December 2014, 92.6% of patients were seen within four hours – below the 95% target. The performance is the worst quarterly result since the target was introduced in 2004. Viewed a different way, more than 9 out of 10 people who go to A&E are ‘seen and treated’, discharged or admitted within the 4-hour window. Interestingly, whether we actually solved their problem is not measured and consequently never recorded.

The waiting time target’s reductionist logic is simple. It is an arbitrary indicator set from outside the immediate organisation (hospital) or business unit (A&E). The hospital and its A&E are then expected to achieve the target come hell or high water, through greater effort by their employees.

This happens because of a mistaken belief that people themselves are the limiting constraint on performance within organisations: that they need to work harder, or refrain from using services for reasons deemed ‘inappropriate’. The reality is that people’s performance, or their usage of a particular service, is a consequence of the other parts of the system of which they form a part: policies, processes, procedures, systems and management.

To use an analogy: if a person enters a 10-mile race knowing they can only run 5, the only way they can finish is to ‘game’ or ‘cheat’ and catch a bus after the 5-mile mark. This is what happens in organisations. If the arbitrary A&E target for transferring patients to a ward is 4 hours, but the existing capability (if measured) is 5, then, in effect, the bus is a trolley parked in the hospital corridor. Only now, it is alleged, the wheels are coming off.

The other problem with arbitrary targets is that they are self-limiting. With no knowledge of patient demand and of the existing capability to meet that demand, the artificial target could well fall short of the improvement that is actually possible. Paradoxically, setting a target can lead to under-achievement, in both people and organisations.

To compound the deficiencies inherent in setting an arbitrary target, measuring and judging performance at a single static point, or over a single period of time, is also counter-productive. It ignores the variation that exists in almost everything we do, individually and collectively, within an organisation. Averages distort unless set within the context of the overall distribution of performance and the underlying trend, and judged against the demand placed on the system at any given point in time.
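To make this concrete, here is a minimal sketch, in Python, of how a process behaviour (XmR) chart treats performance as a distribution over time rather than a single number. The weekly figures are invented for illustration, not real NHS data:

    # Process behaviour (XmR) chart for weekly A&E performance.
    # All figures are illustrative assumptions, not real data.
    weekly_pct_within_4h = [93.1, 94.2, 91.8, 92.6, 95.0, 90.9,
                            93.4, 92.2, 94.7, 91.5, 93.8, 92.9]

    mean = sum(weekly_pct_within_4h) / len(weekly_pct_within_4h)

    # Average moving range between consecutive weeks
    moving_ranges = [abs(b - a) for a, b in
                     zip(weekly_pct_within_4h, weekly_pct_within_4h[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)

    # Natural process limits (2.66 is the standard XmR constant)
    upper = mean + 2.66 * avg_mr
    lower = mean - 2.66 * avg_mr

    print(f"mean {mean:.1f}%, natural limits {lower:.1f}% to {upper:.1f}%")
    # A bad week inside these limits is routine variation, not a crisis;
    # judging any single week in isolation invites tampering.

If the 95% target sits above the upper natural limit, no amount of exhortation will reliably achieve it; only changing the system will.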

Whilst we can’t remove the 4-hour waiting time target (the real limiting constraint), we should treat it as an unavoidable system limitation, not drive performance solely on achievement of the arbitrary number. The alternative ‘solution’ is very straightforward: establish more insightful information streams and make better use of data in a more operationally meaningful way. Asking useful questions will help us understand, and in turn resolve, the problems:

  • Before measuring anything, ensure you are measuring the right thing. Is the 4-hour target measuring the right thing? Measure what matters to patients – do we actually know what this is? Is it quick diagnosis, speedy treatment, medical or psychological reassurance or getting help?
  • When it comes to measurement we first have to understand patient demand. Do we empirically know why people choose to use A&E (patient demands) in order to understand how A&E can be better designed to deal with these demands (capability)? Simply ranking demands by their perceived inappropriateness without understanding the patient context doesn’t solve any problem.
  • Measure your existing capability over time and understand the statistical variation that exists, against an understanding of demand.
  • Express what you measure statistically, based on the nature of the distribution of the thing being measured.
  • If you must set a target, and unless you intend to change something, keep it within your current organisational or business-unit capability (a sketch of this check follows the list).
  • If you want to set a target outside your current capability, then identify what you are going to change to achieve this objective.
  • Do not let the setting of a target act as a system constraint in itself.
  • Do not make a business out of it – the aim should be continuous improvement, not ranking.
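As a concrete sketch of that capability check, here is a short Python example. The journey times are entirely synthetic, standing in for real patient data:

    import random

    # Synthetic A&E journey times in minutes - an illustrative
    # assumption, not a model of any real department.
    random.seed(1)
    waits = [random.lognormvariate(5.2, 0.5) for _ in range(1000)]

    target_minutes = 240  # the 4-hour target
    within = sum(1 for w in waits if w <= target_minutes) / len(waits)
    print(f"current capability: {within:.1%} within {target_minutes} min")

    # If measured capability falls short of the 95% ambition, the gap
    # belongs to the system design, not to individual effort; setting
    # the target changes nothing by itself.

Run against real data, this tells you whether a proposed target lies within current capability before anyone is held to it.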

This alternative approach would require us regularly to understand and measure the local demand placed on the system (find out, from their perspective, why people actually come to A&E; don’t assume to know the answers) and the local capability to respond to this demand. Is demand predictable over time? What is the current system capability (staff mix, ability to meet the nature of patient needs, and resourcing) to successfully address this demand?
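One small sketch of how demand predictability might be tested, grouping arrival counts by day of the week; the figures are invented for illustration:

    from collections import defaultdict
    from statistics import mean, stdev

    # (day, arrivals) pairs over two weeks - invented figures
    arrivals = [("Mon", 312), ("Tue", 280), ("Wed", 295), ("Thu", 301),
                ("Fri", 340), ("Sat", 366), ("Sun", 355),
                ("Mon", 305), ("Tue", 289), ("Wed", 288), ("Thu", 310),
                ("Fri", 352), ("Sat", 371), ("Sun", 348)]

    by_day = defaultdict(list)
    for day, count in arrivals:
        by_day[day].append(count)

    for day, counts in by_day.items():
        print(f"{day}: mean {mean(counts):.0f}, sd {stdev(counts):.1f}")

    # A stable weekly pattern with low day-on-day variation suggests
    # demand is predictable enough to plan rosters and capacity against.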

A sophisticated understanding of patient demand, and of the capability to meet it, will give providers and commissioners the ‘business intelligence’ they need to run a more effective A&E service. It informs us as to the level and nature of professional expertise required in A&E, and when; the availability of appropriate test facilities and beds; person-centred processes to meet needs effectively; and even how best to approach the design and layout of A&E.

One thing we can forget, though: the growing trend to rebrand A&E as the Emergency Department (ED). That has zero impact.


‘Back-to-front’ thinking: right care, wrong approach


I recently attended an intriguing presentation on NHS Right Care. Right Care is an approach to improvement that offers health commissioners a way to substantially improve ‘health outcomes, value and financial sustainability’. The approach provides the methodological underpinnings of the Commissioning for Value programme, which is about identifying the priority programmes that offer the best opportunities to improve healthcare. The work is promoted as having a ‘compelling economic narrative that creates a national benchmark and peer comparison’, and as something that should be ‘business as usual’. It was this acclaim that got me thinking about the right way to study how to obtain good care, and about the role of standard improvement tools.

A common error with many analytical models is to conflate correlation with causation, or indeed to assert causality as a consequence of data analysis alone. Results from quantitative data analysis require empirical validation in real-world conditions. Data analysis answers ‘what’ questions, which need to be linked to the pursuit of ‘why’ to authenticate findings. Quantitative datasets, such as those captured by acute hospital trusts and GP practices, will only tell us ‘what’ is happening. You need other techniques to reveal why: the purpose of more qualitative methods is to understand ‘why’, and ‘how and where’ to improve. You cannot improve with confidence solely on the basis of ‘what’ findings.
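A toy illustration of the point, with entirely synthetic data: the two measures below correlate strongly only because a hidden third factor drives both of them.

    import random
    from statistics import correlation  # Python 3.10+

    random.seed(0)
    deprivation = [random.random() for _ in range(500)]  # hidden driver
    ae_attendances = [10 * d + random.gauss(0, 1) for d in deprivation]
    gp_referrals = [7 * d + random.gauss(0, 1) for d in deprivation]

    print(f"r = {correlation(ae_attendances, gp_referrals):.2f}")
    # Strong correlation, yet neither variable causes the other; only
    # asking 'why' can tell the confounded story from the causal one.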

As for the Right Care methodology, I believe its premise is wrong. It represents what I call ‘back-to-front’ thinking, with the emphasis on activity and costs. It is essentially reductionist by design (not systemic), with a silo focus on pathways, prizing indicative over empirical evidence. From what I could tell, listening to a presentation about the approach and reading the material, Right Care relies on standardised benchmarking and peer-to-peer comparisons. Both have distinct limitations in terms of understanding and identifying performance issues. Indeed, the resulting ‘prioritisation of ideas’ relies on indicative costs, which I would suggest lacks rigour and robustness.

The approach relies heavily on benchmarking as a tool for performance improvement. Yet, as I have blogged about before, it is important to recognise the limitations inherent in benchmarking. As an improvement tool, it is only as meaningful as those you are measuring yourself against. Moreover, caution should be exercised where current performance is significantly better than average but falls short of either Clinical Commissioning Group (CCG) or provider ambitions.

Indeed, benchmarking has merits in demonstrating ‘big-picture’ cost comparisons, but it is poor at understanding context, value and total costs, and should not be used in isolation either to understand or to improve service performance. For example, whilst some indicators may imply positive performance in a specialty, local clinical intelligence can tell a different story. Furthermore, there may be other indicators a CCG or provider would like to measure itself against, beyond the nationally available data.
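To illustrate the trap with invented numbers: two hypothetical providers with identical average costs per episode look like peers in a benchmark, while their distributions tell very different stories.

    from statistics import mean, quantiles

    # Cost per episode for two hypothetical providers - invented data
    provider_a = [980, 1000, 1010, 990, 1020, 1000, 995, 1005]  # steady
    provider_b = [400, 450, 1550, 1600, 420, 1580, 500, 1500]   # bimodal

    for name, costs in (("A", provider_a), ("B", provider_b)):
        q1, _, q3 = quantiles(costs, n=4)
        print(f"Provider {name}: mean {mean(costs):.0f}, "
              f"middle half {q1:.0f} to {q3:.0f}")

    # Both means are 1000, so a benchmark ranks them as peers - yet the
    # case mix and cost structure behind the numbers differ wildly.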

Moreover, benchmarking alone cannot provide the means to understand and improve performance. It is important to know what you are comparing, and whether what you are comparing is actually comparable. My view, based on the evidence, is that the only benchmarking and best-practice comparison you should do is within your own organisation, complemented and cross-referenced by other, more robust techniques to achieve a more comprehensive understanding and analysis.

We need to start moving beyond benchmarking and standardised pathways (for me, Right Care is about perceived pathway efficiency, not about patients – a term that is hardly ever used, and certainly wasn’t in the presentation I attended) towards models of care tailored to patient cohorts and founded upon comprehensive research and analysis, both quantitative and qualitative in origin. Instead of obsessing about activity numbers and financial costs, we need to think about purpose and process. Systems and processes determine service effectiveness and cost efficiency. The purpose of any service comes from the people/users/patients/customers. If you improve the process based on the purpose, better outcomes and cost savings follow. That’s what I mean by ‘front-to-back’ thinking. Off the back of this you can then engage in what I call ‘intelligent system and service redesign’. I’ll expand on this theme at a future time.