I’ve been working with the Department for Energy Security and Net Zero on a new service supporting a policy area designed to grow the use of heat networks (sometimes also called district heat networks) in England.
The main challenge we’ve faced throughout the work is that the policy is still in development, going through a standard series of versions before being written into law. Of course, this evolution is simply the nature of the legislative process: it’s how all laws are shaped before they become final.
This has opened up some brilliant opportunities to contribute to policy development. But it’s also meant updating our service content in line with these changes as policymakers finalise the legislation.
This is where AI (and my amazing colleague and AI engineer, Nichole Drury) come in. Large Language Models (LLMs) are typically thought of as tools for creating content, but we wanted to explore using AI to maintain content instead.
Applying AI to content maintenance
Content maintenance is all about what happens to content after it’s published.
This involves things like checking factual accuracy: is the content still correct and up to date? Does plain English policy information still match the legislation? All too often, time and focus go on getting content live and published, especially when organisations are busy and have limited time and resources for content.
So could AI help us to maintain content?
We wanted to take a hypothesis-based approach to this:
“Aligning our content to evolving policy is time consuming and requires considerable manual, repetitive processes. Therefore, implementing AI-driven tools in content design maintenance will enhance how we stay aligned with evolving legislative requirements, save time and make us more efficient in how we maintain content, ensuring it’s up to date.”
We also agreed key success measures would be:
- reduction in the time spent identifying legislative updates
- reduction in time needed to assess content impact
- faster content update turnaround time
Testing this hypothesis required a new workflow built around AI. Importantly for me as a content designer, this is where we could see both the human and AI steps needing to work together to achieve our outcomes.
Our workflow before AI meant reviewing and understanding large pieces of information (new secondary legislation), going through each new draft manually to look for differences against our service content. This could typically take us 5 to 7 days.
Our new workflow with AI used an LLM to surface these differences, comparing current content drafts with the new legislation, potentially taking only a few seconds. Once the AI had picked out the differences, we could evaluate them ourselves, taking approximately 1 day.
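To make the shape of this new workflow concrete, here’s a minimal sketch of how the comparison step could work, assuming a simple Python script. The diff step uses the standard library; the LLM call is a placeholder for whichever model or API a team chooses, and the prompt wording is purely illustrative rather than the exact tool we built.

```python
# Minimal sketch: find the differences between two drafts of legislation,
# then ask an LLM to summarise them in plain English for a human to validate.
import difflib


def diff_drafts(previous_draft: str, new_draft: str) -> str:
    """Return a unified, line-by-line diff of the two drafts."""
    diff_lines = difflib.unified_diff(
        previous_draft.splitlines(),
        new_draft.splitlines(),
        fromfile="previous_draft",
        tofile="new_draft",
        lineterm="",
    )
    return "\n".join(diff_lines)


def summarise_changes(diff_text: str) -> str:
    """Placeholder for the LLM step.

    In practice this would send the diff to whichever LLM API the team uses,
    asking for a plain-English summary of each substantive change and the
    service pages it might affect. A human still reviews the output.
    """
    prompt = (
        "You are helping a content designer. Summarise each substantive change "
        "in this diff of draft legislation, in plain English:\n\n" + diff_text
    )
    # Stand-in: replace with a real call to your chosen LLM API, passing `prompt`.
    raise NotImplementedError("Connect this to your LLM provider of choice.")


if __name__ == "__main__":
    print(diff_drafts(
        "The levy applies to large networks.",
        "The levy applies to all networks.",
    ))
```

The key point is that the model only summarises differences the diff has already surfaced; the judgement about what to change in the service content stays with the designer.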
With both workflows, we updated the content and completed further subject matter expert fact-checks (taking approximately 5 to 7 days), before a final content approval stage at the end of the process. These steps remained in place, both with and without the use of AI.
The result: How effective was AI?
Did the use of AI save time and make us more efficient? The short answer is yes.
Previously, when a new version of the legislation was provided, we’d spend a large amount of time manually reading and mapping the changes on an interactive whiteboard tool. When we were able to use the LLM to compare drafts and summarise changes, this put us on the front foot and transformed the manual task into one focused on validating the information the LLM provided.
Other time-saving benefits were actually very human, something I hadn’t expected to find. These included:
- making SME input more efficient – because the LLM identified the main changes so quickly, I had an almost immediate sense of which humans (subject matter experts) I’d need to engage with on page iterations
- prioritising people’s time – I was able to plan ahead and book time with the SMEs I needed to speak to. I’m sure we’ve all faced battles with people’s diaries, so this was a big unexpected benefit
- making moving between tasks quicker and easier – on average, people take nine and a half minutes to get back into a productive workflow after switching, according to a joint report by Qatalog and Cornell University’s Idea Lab. The original, fully manual task had to be completed between usual meetings and other priorities, whereas the LLM’s speed meant I could dive straight into understanding any updates and start considering actual content iterations more quickly
The importance of having the right checks and balances in place
An important part of why I think AI has improved the content design workflow here is that we worked hard to ensure the right processes, ways of working and, importantly, content approval processes were in place beforehand. This included defining and embedding approval processes from both programme experts and communications approvers in policy teams.
Importantly, all pages where we have used input from an LLM still go through an approval process, by one, or sometimes multiple, humans.
It’s also important to recognise the parts of our content process that cannot be replaced by artificial intelligence. For example, we still have a monthly content working group, so that the key stakeholders involved in these processes can meet and collaborate in real time – something we would never want AI to replace.
In our use case, the value of AI has been its ability to process data almost instantly and to assist the editor (me) as a cognitive partner. It automates the text analysis, but leaves the final content editing and judgement to the human.
Final thoughts and reflections
Nichole and I have concluded this work feeling excited about this new, emerging capability. Because our use case hasn’t been about using generative AI to create new content (hello, em dashes), we’ve avoided the kinds of issues that are common with generative AI, such as ‘hallucinations’, where AI creates information that seems plausible but is actually inaccurate.
Instead, we’ve used it to help process new drafts of legislation, and to spot differences. And as Nichole explains, “this isn’t foolproof or perfect, but neither are humans.” And that’s where we’ve really seen the value of our content design strategy, ensuring we’re never relying on any AI output alone.
This work shows how AI can be used successfully in the right contexts and, for content design work, become an asset rather than a liability.