Can AI help blood cancer patients find treatment sooner?

Client: Blood Cancer UK

We’ve been working with Blood Cancer UK to explore how clinical teams could use AI to support more people to access clinical trials.

We’ve been working with Blood Cancer UK for the past six months to design and test a significantly scaled-up version of the charity’s clinical trials support service. This is a vital service that supports people in finding potentially life-changing treatment through clinical trials. Working as a blended team with the clinicians who run the service, we have been prototyping and testing a reimagined service offer, including exploring how AI could improve the efficiency and efficacy of the future service in meeting user needs.

We believe in taking a responsible and ethical approach to introducing AI technologies into health services. This is particularly the case when working on a clinical service handling personal data. For us, that means ensuring AI use is:

  1. Appropriate: that it is suitable for its intended use while aligning with societal, legal and ethical norms

  2. Safe: that it poses no threat to society, individuals, or the environment

  3. Controlled: that human oversight is maintained over AI systems. 

Working alongside the Blood Cancer UK team, we didn’t start with AI as the solution. Doing so would likely have meant a costly investment in the wrong answer to the problem. Instead, we first used user-centred design to understand the problem for users, and then explored solutions that meet their needs.

  1. Understanding the problem and developing ideas to test: Our team has been working together to articulate the challenges currently faced by the clinicians delivering the service. For some parts of the problem, such as scanning large amounts of information, explaining things without jargon and handling repetitive tasks, AI has emerged as a potential solution.

  2. Testing with the clinical team: We have developed a demonstrator to test our AI concept and help our clinicians understand how it might feel and work in practice. Our clinicians have been able to use placeholder data to interact with the AI tool, prompting it themselves and comparing what the tool does and finds with what they would have done and found.
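
As a purely illustrative sketch of that kind of side-by-side comparison (not the real demonstrator), the example below assumes an LLM accessed through the OpenAI Python client and uses invented placeholder criteria. It asks the model to rewrite jargon-heavy trial eligibility criteria in plain language, so a clinician can compare the output with the summary they would have written themselves.

```python
# Minimal sketch, not the real demonstrator: turn jargon-heavy trial
# eligibility criteria into plain language so a clinician can compare the
# AI's output with their own. Assumes the OpenAI Python client; all data
# below is placeholder text, not patient data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PLACEHOLDER_CRITERIA = """
Inclusion: relapsed/refractory DLBCL after >=2 lines of systemic therapy;
ECOG performance status 0-2; adequate haematological function.
Exclusion: active CNS involvement; allogeneic stem cell transplant within
the last 6 months.
"""

def plain_language_summary(criteria: str) -> str:
    """Ask the model for a jargon-free summary a patient could understand."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite clinical trial eligibility criteria in plain, "
                        "non-technical English. Do not add or remove criteria."},
            {"role": "user", "content": criteria},
        ],
        temperature=0,  # keep output stable so comparisons are repeatable
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(plain_language_summary(PLACEHOLDER_CRITERIA))
    # A clinician then compares this output with the summary they would
    # have written, noting anything the model missed or got wrong.
```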

For our Blood Cancer UK service, our recommendation is that the optimal model is “Expert + AI”: combining the critical judgment, experience, expertise and reassurance provided by the service’s clinical experts with the speed and reach of AI to serve more people.
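
One way to picture that “Expert + AI” model is as a simple human-in-the-loop workflow: the AI does the broad, fast scan and proposes a shortlist, and nothing is shared with a patient until a clinical expert has reviewed and approved it. The sketch below is our own illustration of that shape, not the service’s actual architecture; the types, field names and functions are assumptions.

```python
# Illustrative "Expert + AI" workflow: the AI proposes, the clinical expert
# disposes. Names and structures are hypothetical.
from dataclasses import dataclass

@dataclass
class TrialMatch:
    trial_id: str
    rationale: str          # why the AI thinks this trial may be suitable
    approved: bool = False  # only True after expert review
    expert_notes: str = ""

def ai_shortlist(patient_summary: str, trials: list[dict]) -> list[TrialMatch]:
    """Fast, broad scan: the AI (stubbed here) proposes candidate trials."""
    # A real tool would run a model over structured trial data instead.
    return [TrialMatch(t["id"], f"Possible fit for: {patient_summary[:40]}...")
            for t in trials]

def expert_review(matches: list[TrialMatch]) -> list[TrialMatch]:
    """Human oversight: a clinician accepts, rejects or annotates each match."""
    for m in matches:
        decision = input(f"Approve {m.trial_id}? ({m.rationale}) [y/n] ")
        m.approved = decision.strip().lower() == "y"
    return matches

def matches_to_share(matches: list[TrialMatch]) -> list[TrialMatch]:
    """Only expert-approved matches are ever shared with the patient."""
    return [m for m in matches if m.approved]
```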

Questions we’ve been asking as a team as we’ve developed and tested our AI prototype include:

  • What are the benefits of building vs buying off-the-shelf?

  • How much will it cost to develop?

  • How much effort will be required to develop it?

  • What role will clinical experts need to play alongside AI?

  • What’s the right balance between automation and human oversight?

  • Are the necessary skills in place for the team to use the tool, or will training be required?

  • How can the team get confident in and trust the tool’s results?

  • What are the data privacy and security implications?

  • What are the ongoing maintenance needs of this tool?

Our team's reflections on designing with AI

1. The underlying data matters a lot

Consistent data lays the groundwork for the future use of AI. To ensure the AI’s results are consistent, reliable and not misleading, we will first need to restructure how data is captured and collated across the service.
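
As a hedged illustration of what restructuring data capture can mean in practice, the sketch below validates a referral record against a fixed schema before it is stored, so the same fields are always present in the same form. The field names and values are invented for illustration and are not Blood Cancer UK’s actual data model.

```python
# Illustrative only: enforce consistent capture of a few referral fields so
# that later AI use isn't built on patchy free text. Field names are
# hypothetical, not the service's real data model.
from dataclasses import dataclass

VALID_REGIONS = {"England", "Scotland", "Wales", "Northern Ireland"}

@dataclass(frozen=True)
class ReferralRecord:
    blood_cancer_type: str   # from a controlled vocabulary, not free text
    region: str
    prior_treatment_lines: int

    def __post_init__(self):
        if self.region not in VALID_REGIONS:
            raise ValueError(f"Unknown region: {self.region!r}")
        if self.prior_treatment_lines < 0:
            raise ValueError("prior_treatment_lines must be non-negative")

# A record that fails validation is corrected at the point of capture,
# rather than silently becoming inconsistent data an AI model later relies on.
record = ReferralRecord("Myeloma", "Wales", prior_treatment_lines=2)
```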

Alongside consistent data capture, the results of an AI model are driven by the data available to it. This means that when working with AI there is a risk of recreating or amplifying any existing biases in the data. Blood Cancer UK is committed to driving equity of access to clinical trials, particularly for people currently underserved by clinical trials, including those from black and minority ethnic communities, from rural communities and from lower socio-economic backgrounds. In order to achieve this, the team is engaging closely with currently underserved communities to ensure that the data underpinning the AI model doesn't unintentionally reflect systemic disparities.

2. Be really specific about the problem you are trying to solve 

AI technologies cost money. User research and data analysis have given our team clarity on the problem we are trying to solve, which means we can focus our AI solution on creating value where it is most needed.

3. Test rigorously with the people who will interact with the AI tool

If you’ve used AI services in place of a Google search, you will have experienced the different kind of interaction they require. For our team, delivering the service and working with AI for the first time, testing has opened us up to the new ways of working and new skills required, as well as confirming our willingness to work in this new way.

4. Not replacing but supporting experts 

Can AI help blood cancer patients find treatment sooner? Based on this work, we think that it can. But there are serious ethical considerations to take into account when designing and developing AI tools for health and public services. For this service, the role of our clinical experts working alongside our AI tool remains central to the design of the future service model.

 


Maia Tarling-Hunter

Design Lead, Design Strategy