
‘Back-to-front’ thinking: right care, wrong approach


I recently attended an intriguing presentation on NHS Right Care. Right Care is an approach to improvement that offers health commissioners a way to substantially improve ‘health outcomes, value and financial sustainability’. It provides the methodological underpinning for the Commissioning for Value programme, which seeks to identify the priority programmes offering the best opportunities to improve healthcare. The work is promoted as having a ‘compelling economic narrative that creates a national benchmark and peer comparison’, and as something that should become ‘business as usual’. It was this acclaim that got me thinking about the right way to study and obtain good care, and about the role of standard improvement tools.

A common error with many analytical models is to conflate correlation with causation, or to assert causality simply as a consequence of data analysis. Results from quantitative data analysis require empirical validation in real-world conditions. Data analysis answers ‘what’ questions; these need to be linked to the pursuit of ‘why’ in order to validate findings. Quantitative datasets, such as acute hospital trust activity data and GP practice data, will only tell us ‘what’ is happening. Other, more qualitative techniques are needed to reveal ‘why’, and to show ‘how’ and ‘where’ to improve. You cannot improve with confidence solely on the basis of ‘what’ findings.
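To illustrate the point, here is a minimal sketch using entirely invented figures (the practice counts, coefficients and the ‘deprivation’ confounder are all hypothetical, not drawn from any NHS dataset). It shows how two activity measures can correlate strongly because a third factor drives both, so the correlation alone tells you ‘what’ is happening but nothing about ‘why’.

```python
# Hypothetical illustration: an unobserved confounder (here labelled 'deprivation')
# drives both A&E attendances and GP appointments, so the two correlate strongly
# even though neither causes the other.
import numpy as np

rng = np.random.default_rng(42)
n_practices = 200

deprivation = rng.normal(size=n_practices)                                   # unobserved confounder
ae_attendances = 50 + 10 * deprivation + rng.normal(scale=3, size=n_practices)
gp_appointments = 300 + 40 * deprivation + rng.normal(scale=12, size=n_practices)

r = np.corrcoef(ae_attendances, gp_appointments)[0, 1]
print(f"Correlation between A&E attendances and GP appointments: {r:.2f}")
# Prints a correlation of roughly 0.9, yet changing GP appointment volumes would
# do nothing to A&E demand: the 'what' (correlation) says nothing about 'why'.
```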

As for the Right Care methodology, I believe its premise is wrong. It represents what I call ‘back-to-front’ thinking, with the emphasis placed on activity and costs. It is essentially reductionist by design (not systemic), with a silo focus on pathways, and it prizes indicative over empirical evidence. From what I could tell from listening to a presentation about the approach and reading the material, Right Care relies on standardised benchmarking and peer-to-peer comparisons. Both have distinct limitations when it comes to understanding and identifying performance issues. Indeed, the resulting ‘prioritisation of ideas’ relies on indicative costs, an approach which I would suggest lacks rigour and robustness.

The approach relies heavily on benchmarking as a tool for performance improvement. Yet, as I have blogged about before, it is important to recognise the limitations inherent in benchmarking. As an improvement tool, it is only as meaningful as those you are measuring yourself against. Moreover, caution should be exercised where current performance is significantly better than average but still falls short of Clinical Commissioning Group (CCG) or provider ambitions.
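The trap is easy to show with a toy example. All the figures below are invented for illustration (the readmission rates, peer group and local target are hypothetical): a commissioner can sit comfortably ‘better than average’ against its peers while remaining well short of its own ambition, and the benchmark says nothing about how to close the remaining gap.

```python
# Hypothetical figures only: 'better than the peer average' is not the same
# as 'good enough against our own ambition'.
peer_readmission_rates = [16.2, 15.8, 17.1, 14.9, 16.5, 15.3]  # % (invented peer CCGs)
our_rate = 13.5        # % (invented local figure)
our_ambition = 10.0    # % (invented local target)

peer_average = sum(peer_readmission_rates) / len(peer_readmission_rates)

print(f"Peer average: {peer_average:.1f}%")
print(f"Our rate:     {our_rate:.1f}%  (better than the peer average)")
print(f"Our ambition: {our_ambition:.1f}%  (still {our_rate - our_ambition:.1f} points adrift)")
# A benchmarking exercise would flag this as 'good performance' and move on;
# it reveals nothing about whether, how or where the remaining gap can be closed.
```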

Indeed, benchmarking has merit in demonstrating ‘big-picture’ cost comparisons. But it is poor at understanding context, value and total costs, and it should not be used in isolation either to understand or to improve service performance. For example, while some indicators may imply positive performance in a specialty, local clinical intelligence can tell a different story. Furthermore, there may be other indicators against which a CCG or provider would like to assess itself, beyond the nationally available data.

Moreover, benchmarking cannot by itself provide the means to understand and improve performance. It is important to know what you are comparing, and whether what you are comparing is actually comparable. My opinion, based on the evidence, is that the only benchmarking and best-practice comparison you should do is within your own organisation, complemented and cross-referenced by other, more robust techniques to achieve a more comprehensive understanding and analysis.

We need to move beyond benchmarking and standardised pathways (for me, Right Care is about perceived pathway efficiency, not about patients, a term that is hardly ever used and certainly wasn’t in the presentation I attended) towards models of care tailored to patient cohorts and founded upon comprehensive research and analysis, both quantitative and qualitative in origin. Instead of obsessing about activity numbers and financial costs, we need to think about purpose and process. Systems and processes determine service effectiveness and cost efficiency, and the purpose of any service comes from the people who use it: users, patients, customers. If you improve the process based on the purpose, better outcomes and cost savings follow. That’s what I mean by ‘Front-to-Back Thinking’. Off the back of this you can then engage in what I call ‘intelligent system and service redesign’. I’ll expand on this theme at a future time.