
From planning to people: how the public sector can get AI ready


by Jay Bangle

The government has launched a new Incubator for AI, and readiness will be an important consideration in ensuring its success.

The government has been taking decisive action to position Britain at the forefront of the artificial intelligence (AI) revolution, and this week it went one step further, launching the Incubator for AI (i.AI). This new unit will help government departments harness the potential of AI to improve lives and the delivery of public services.

For the public sector, the implementation of AI could create billions of pounds' worth of productivity savings and free up valuable time for frontline staff across the country, such as nurses and teachers, helping them to improve outcomes. These potential benefits were also highlighted in an update to the Treasury's Public Sector Productivity Programme.

But a crucial part of any AI strategy will be considering how to implement new tools in a safe and responsible way. Putting solutions in place without a clear purpose or objective will waste time and money. Or, even worse, it will erode trust and harm the people and places that rely on crucial services.

Before implementation, there is an important step the i.AI and government departments must take: ensuring they're ready for AI.

Fix problems, go faster

Public sector organisations looking to implement AI must first have an agreed strategy in place for how the technology will be used. They need to understand why AI is needed and the risks of ignoring its potential. The i.AI can help departments by identifying clear use cases and running pilot projects before full implementation, ensuring the objectives and outcomes of any AI use are clear and aligned with the broader departmental strategy. Is the goal to take the burden of certain tasks away from staff? To increase output? To expand a service? These questions should be asked and answered before any systems are implemented.

Be appropriate, safe and controlled

The legal and ethical ramifications of AI have been widely discussed for some time, especially since the emergence of tools such as ChatGPT. It's vital that public bodies know how to use AI responsibly and in a way that keeps people and information safe. They need to understand the ethical, legal, policy and social issues that implementing AI could raise, and put in place a clear and robust responsible AI framework.

Make sure people come first

AI can have implications for a department's workforce. For example, the skills needed to operate AI solutions may not be in place, meaning these tools can't be used effectively or safely. Some staff may also fear that these technologies will replace them, damaging individual and team morale.

The i.AI can help public bodies prepare for this potential upheaval. It should support them to build and grow teams with expertise in areas such as software, data, platforms, cyber, AI ethics and business strategy. At the same time, it can help establish plans to manage the organisational changes that come with new AI solutions, ensuring everyone, from staff to stakeholders, understands the plan, the policies and how the technology will be used.

Put data at the heart of decision making

Good quality data is essential to ensuring AI platforms work as intended and can be scaled. As such, the i.AI, in collaboration with the Central Digital and Data Office (CDDO), can help departments create comprehensive strategies to input, collect, manage and utilise data effectively. This will allow departments to make informed, long-term decisions on whether to scale up platforms or implement wider AI initiatives.

The journey doesn’t end at implementation

Since the public launch of ChatGPT just last year, AI has developed rapidly, with both current and new systems providing services that were unthinkable a short time ago. This means that continuous learning, training and evaluation will all be key to ensuring these tools are a success. As we've already seen, AI solutions are not static, and public bodies must have the tools in place to adapt with them. The performance of such systems also needs to be monitored constantly, so that issues are spotted and addressed before they cause harm. Responsible frameworks need to cover not just the implementation but the entire lifecycle of an AI system.

Checking AI readiness

With so much to consider when assessing how prepared departments are, we at TPXimpact have developed an AI Readiness Framework. It allows the i.AI and public sector organisations to gauge their capacity to harness AI while identifying key areas for improvement. Through targeted analysis, it helps them maximise the benefits of these technologies and gives them the confidence to put the right solutions in place.

The government understands the important role AI will play, both now and in the future. This has been highlighted by the launch of the i.AI, an important step towards ensuring these tools are used safely and effectively. However, focusing on implementation alone is not enough: it leaves departments at risk of deploying solutions that are not strategic, cost-effective or beneficial. Before taking that step, the i.AI can play a key role in making sure the public sector is ready for the AI revolution.

If you're interested in finding out more about our AI Readiness Framework, please contact Jay Bangle.


Jay Bangle

Chief Technology Innovation Officer

Jay leads all technology and engineering delivery at TPXimpact. He helps organisations improve their services, experiences and outcomes, including collaborating with senior officials to navigate the opportunities and ethical implications of AI.

