
The NHS shouldn’t be using AI…Yet

Artificial intelligence could transform our health service, but we need to act now so the NHS can harness its benefits effectively.

We’ve heard a lot about the potential Artificial Intelligence (AI) has to revolutionise different sectors, including healthcare. These potential benefits haven’t gone unnoticed by the new government. Labour’s manifesto made modernising the NHS a top priority, including investment earmarked for AI-powered diagnostic services to find medical problems in patients more quickly.

However, in its current condition the NHS is simply not ready to use AI in a way that produces meaningful outcomes. And if we don’t address this before putting in place solutions, we’ll end up damaging services, not improving them. 

This was highlighted in a recent study by Stanford University. In this trial, AI showed potential benefits in dermatology by matching or surpassing the diagnostic accuracy of dermatologists, but effective integration required substantial oversight and unbiased data. In short, there’s no ‘plug in and play’ solution when it comes to implementing AI: users need significant skills and knowledge for it to succeed.

Effective data gathering and management

To ensure it produces the right outcomes, AI needs vast, specific and accurate data so it can be trained to address the challenges it is being deployed to solve.

Gathering and sharing this data is a major challenge for the NHS. Trusts and services often either lack the digital capabilities to collect this information or use different systems to collate and analyse it, making the data difficult to combine and share. At the same time, fixing these issues and putting the right systems in place will be costly and time-consuming if not approached correctly.

For effective data transfer and AI implementation across the NHS, Trusts need access to the correct data solutions, while disparate systems and technologies must be made compatible. It won’t be easy, but to achieve this the government needs to work with services across the country to agree what technologies should be installed and create standardised communication methods and data formats. By agreeing and implementing a single, unified approach, we can guarantee that new AI systems have the right data to work effectively and can be put in place across the health service. 

Building trust and working with users

Trust is key to ensuring patients act on the medical advice and guidance they receive. For AI to make the impact we want it to, it will need the support of the public.

Some patients may be sceptical of automated systems making health-related decisions for them. This is understandable: we are all used to speaking to and being treated by doctors, many of whom we’ve been visiting for years. If some of their tasks, such as diagnosing certain conditions, are suddenly being done by machines, there will be concerns.

If and when AI rollout increases, building and maintaining people’s belief that these tools are working in their interest should be high on the government’s and NHS’s priority list. We have to make sure patients are engaged throughout the development and implementation process. By speaking and working with people to understand their concerns, we can either reassure them or address those concerns, so that when the tools are up and running, adoption will be greater and the benefits felt sooner.

There also needs to be accountability and transparency, so that patients, as well as healthcare professionals, know that if anything does go wrong it won’t be ignored. The government should put in place robust legal processes so that, in the case of any AI failures or misdiagnoses, those responsible are held to account. It should also communicate this legislation clearly to staff and patients, so they know a safety net is there.

Protecting people and their data

As we’ve already discussed, AI needs data to be effective, and in the case of healthcare this means access to people’s personal and sensitive information. As these technologies are put in place, the NHS will need to make certain that data is being used ethically and in accordance with laws such as GDPR. If it doesn’t, the health service and Trusts risk heavy fines and reputational damage that will be hard to recover from.

To help remain compliant, services should be putting in place the appropriate safeguards and cybersecurity tools so that there is a robust defence built into these technologies. At the same time, those using these platforms need to be taught how to use patient data in line with national and international regulations. Working with digital partners who understand and can guide services on creating secure platforms and training staff will be a huge support as we look to grow AI usage in the NHS.

Standards, responsibility and accountability

What underpins a lot of what we’ve covered here is the need for the responsible and moral use of AI in healthcare. Everyone involved in implementing these technologies needs to make sure everything they do is upheld by a strict set of principles that puts people’s safety, rights and wellbeing first. This is something that we’ve recognised at TPXimpact, and to ensure our work is both responsible and ethical we’ve created three principles that underpin any AI project we undertake:

  • Appropriate - AI needs to be suitable for its intended purpose while aligning with societal, legal and ethical norms. Ensuring appropriateness involves a holistic approach, from selecting the right type and source of data to fine-tuning systems so that their output meets the defined objectives without introducing bias or harm.

  • Safe - AI must pose no threat to society, individuals, or the environment. This principle requires stringent testing, validation, and monitoring processes throughout the AI system's lifecycle. The goal is to identify and mitigate risks, ensuring that the AI operates within acceptable reliability and safety parameters even in cases of unexpected input or circumstances.

  • Controlled - We must maintain human oversight over AI systems, ensuring that there are mechanisms for human intervention and that decisions made by the AI can be understood and modified by human operators. Following this principle requires setting clear governance structures and operating guidelines, including audit trails, contestability processes and transparency in decision-making processes.

The integration of AI into the NHS presents an exciting opportunity to revolutionise how we deliver healthcare. But before we get to this point, we need to lay the foundations so that the health service has the information, skills and capabilities to use these tools effectively.

Effective data gathering, building trust among patients and healthcare professionals, and protecting sensitive information are crucial challenges that must be addressed. By prioritising these areas and fostering collaboration between the government, providers, and digital partners, the NHS can harness the technology’s full potential, leading to improved outcomes for staff and patients across the UK.


Karl Houlding

Senior Partner - Healthcare

