Why is the NHS slow to introduce technology?

One of the greatest challenges of our times is an ageing population. At its heart lies a dark irony. We live longer because of the success of modern medicine. Because we live longer, our healthcare infrastructure is crumbling under the pressure.

Thankfully, another defining characteristic of our times is an unprecedented acceleration of technological development. We just might be able to invent our way out of this. That’s the thinking behind NHSX, the newest NHS organisation-in-the-making with a slick name, and behind countless other pilots and trials going on around the NHS: use technology to cut costs and improve patient outcomes at the same time. Artificial intelligence to detect who’s likely to skip an appointment. Faster diagnosis of skin lesions thanks to teledermatology. Wearables to improve the management of epilepsy.

But progress is slow. Technology moves far faster than the health system. Health Secretary Matt Hancock’s frustration at the lack of genetic screening for cancer on the NHS is understandable. He sees an available technology that could help save lives, and questions why access is restricted to those willing – and able – to pay for it privately.

But the health professionals who responded to Hancock’s protestations are right. Genetic screening isn’t ready. It isn’t any more accurate than current methods. It only serves to panic people unnecessarily. For now.

Before the NHS invests in new technology, it must be thoroughly evaluated to ensure it’s worth it. Otherwise it falls victim to that other eternal criticism: wasting taxpayers’ money.

Is there any way we could speed up the process? Satisfy both impatient patients and cautious clinicians?

Implement to evaluate

A prime example of the tension between public appetite and clinical rigour is the Cancer Drugs Fund (CDF) – the pot of money used to procure new and relatively untested cancer drugs.

It’s seen by many as a lifeline. The opportunity to attempt experimental treatments when those available on the NHS have failed. But it’s not without controversy. When it was first introduced, clinicians complained that money was being wasted on drugs for which there was little evidence of efficacy.

Eventually, it was scaled back due to overspending (any spending without a robust evidence base is arguably ‘overspending’), a decision met with formidable public resistance. To onlookers, it looked as if the NHS was simply leaving cancer patients to die.

A happy medium has now been reached, whereby the CDF acts as both early access for patients and an evaluation in its own right. Astonishingly, in its early years no attempt was made to evaluate the success (or failure) of the drugs procured. Now data is collected so that medication found to be effective on the CDF can be rolled out to the wider NHS.

The CDF is a flag-bearer for early access acting as evaluation. Patients needn’t be denied innovations while additional trials are conducted. Their treatment is the trial.

The pitfalls of early access

However, implementation-as-evaluation isn’t without obstacles. Another of Hancock’s favoured schemes – GP at Hand – has found similar favour with the public amid concern from clinicians.

GP at Hand is a digital GP practice based in London. Anybody living within 40 minutes of one of its five clinics, or working in TfL zones 1-3, can register; Hancock himself is a patient. The service uses a combination of video appointments and AI to treat patients 24/7, often providing an appointment within minutes of a request. As might be expected, it’s especially popular amongst the young.

GPs have protested its existence for a number of reasons. There’s concern about AI taking their jobs. There are worries about clinical safety. What might a doctor miss within the confines of a video call? What might the AI not think to ask?

There’s anxiety over what it might do to funding for nearby practices. With the young, healthy patients drawn away by GP at Hand, non-digital neighbours are left with smaller lists of older patients. Because practice funding follows the patient, their income falls, while the patients who remain tend to be more difficult and more expensive to treat.

The melange of uneasiness at play here brings the complexities of the NHS into stark relief. How new technology is implemented within the traditional corridors of the health service is as important as the treatment itself. There are genuine clinical concerns about GP at Hand, but the more mundane procedural elements carry at least as much weight.

That doesn’t mean it’s not a good idea in principle. Successful teledermatology trials have shown that video appointments certainly can work – both in cutting costs and improving patient experience.

Although an independent evaluation is due out soon, GP at Hand is a clear case of under-evaluation, at least initially: technology introduced without a sufficient plan to assess its effectiveness.

The evolution of evaluation

What should be done, then? Do we make patients wait while we run arduous, long-running trials to assess not just the technology, but how it’s implemented? Accept that people may die in the interim? Or do we introduce early and accept the imperfections of implementation (including that people might die in the process), adjusting as we go?

The answer, as ever, lies somewhere in between. In our experience running healthcare technology evaluations – most recently the nationwide NHS Test Beds initiative – we’ve found that a decisive but nimble approach is required. Technology is evolving at speed; evaluation methods must evolve in tandem.

A modern evaluation should have the following key characteristics:

Be pragmatic

Yes, the randomised controlled trial is the gold standard. But it’s impractical in cases like these, where the intervention changes during implementation (as it should). The technology is often so new we simply can’t predict all its use cases beforehand.

Gather as much data as possible

By the same token, we won’t know for certain what data will become most relevant until the trial is underway. That means using multiple, complementary research tools: patient self-reporting, clinical data, public surveys and detailed patient interviews.

Engage stakeholders from the beginning

Work with stakeholders from the outset. Find out exactly what each party needs to know: cost per patient, A&E attendances, or a specific clinical outcome, for example. Involve clinicians and patients in designing the evaluation. They may know more about what’s important than you do.

Be flexible

Be prepared to change approach as the implementation progresses. Unanticipated issues may arise. Areas you expected to be significant may turn out to be unimportant. New technology evaluations can unearth as much as they analyse.

Context is everything

What works in one context may not work in another, and vice versa. Failure of a trial does not mean failure of the technology. How you interpret the data is crucial to the entire evaluation. Involving clinicians and patients can be invaluable, as they often provide an alternative perspective that may prove more relevant to the situation at hand.

With a more agile approach, we can get new technology to patients faster, while still conducting robust evaluations that ensure both clinical efficacy and value for money.

It’s not the building of an evidence base that slows implementation. It’s the way we build that evidence.

Why is the NHS slow to introduce new technology? - Full report