
Using large language models to understand a user’s intent


by Antony Heljula

In the first of a three-part series on large language models, we explore how LLMs can simplify chatbot development and improve the user experience.

There are various large language models (LLMs) available, including ChatGPT, Google Bard and Cohere.

Anyone who designs or develops a new chatbot will know that their first job is to define the “intents” the chatbot has to support. An intent can be anything from asking a question to submitting expenses or providing feedback.

Typically, we train a chatbot to understand a user’s intent simply by supplying lots of training examples. For submitting expenses, say, we would provide phrases such as:

  • I would like to submit expenses
  • submit expenses
  • I want to submit expenses

Once the user’s intent has been determined, the relevant business process can be initiated:

[Image: LLM Part 1 Image 1]
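The pattern described above can be sketched in a few lines of code. This is a minimal illustration, not a production intent engine; the intent names, training phrases and similarity measure are all assumptions for the sake of the example:

```python
# Minimal sketch of classic intent matching: each intent is defined by a
# set of training phrases, and the closest match decides which business
# process to run. Intent names and phrases are illustrative.
from difflib import SequenceMatcher

INTENTS = {
    "submit_expenses": [
        "I would like to submit expenses",
        "submit expenses",
        "I want to submit expenses",
    ],
    "submit_timesheets": [
        "submit timesheets",
        "I want to submit my timesheet",
    ],
}

def classify(message: str) -> str:
    """Return the intent whose training phrases best match the message."""
    def score(intent: str) -> float:
        return max(
            SequenceMatcher(None, message.lower(), phrase.lower()).ratio()
            for phrase in INTENTS[intent]
        )
    return max(INTENTS, key=score)

print(classify("i'd like to submit my expenses"))  # → submit_expenses
```

Real chatbot platforms use far more sophisticated language models for this matching, but the principle is the same: more (and more varied) training phrases per intent means better matches.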

This all works fine where the business process (intent) is pretty obvious. It doesn’t take long to train a chatbot to understand the difference between “submit expenses” and “submit timesheets”.

There is a problem, however, when it comes to intents more associated with general chit-chat, where users want to express gratitude, provide feedback, make a suggestion or even say something rude:

[Image: LLM Part 1 Image 2]

In these situations, it is quite a challenge to provide sufficient training examples. How can you list all the ways in which a user can express gratitude or say something insulting?

Sometimes this is such a challenge that developers resort to workarounds, such as insisting that users start any feedback message with the word “Feedback”. Obviously, this makes for a poorer user experience.

This is where LLMs can save the day!

Using an API call, we simply pass the user’s message over to an LLM and ask it to determine the user’s intent. This will normally involve some “prompt engineering” to make sure the LLM responds with only one of the allowed intents.

As an example, here’s a prompt you could use to find out if the text “that was helpful” was meant as gratitude, feedback, rudeness or something else:

[Image: LLM Part 1 Image 3]
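A prompt along these lines can be built and checked in code. The wording and label set below are a sketch (not the exact prompt shown in the image), and the defensive parsing step is a common precaution, since an LLM may occasionally ignore the “respond with the intent name only” instruction:

```python
# Sketch of a constrained classification prompt for an LLM.
# The label set and wording are illustrative assumptions.
ALLOWED = ["Gratitude", "Feedback", "Rudeness", "Other"]

def build_prompt(message: str) -> str:
    """Build a prompt asking the LLM to pick exactly one allowed intent."""
    return (
        "Classify the user's message into exactly one of these intents: "
        + ", ".join(ALLOWED) + ".\n"
        "Respond with the intent name only, and nothing else.\n\n"
        f'Message: "{message}"'
    )

def parse_reply(reply: str) -> str:
    """Defensively map the LLM's raw reply onto an allowed intent."""
    cleaned = reply.strip().strip('."')
    for label in ALLOWED:
        if cleaned.lower() == label.lower():
            return label
    return "Other"  # fall back if the model ignored the instructions

prompt = build_prompt("that was helpful")
print(parse_reply("Gratitude."))  # → Gratitude
```

The prompt string would be sent to whichever LLM API you are using; only the validated label is ever passed on to the rest of the chatbot.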

And if the user provides feedback such as “It would be useful if I could view my pay slips using this bot”:

[Image: LLM Part 1 Image 4]
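Once the LLM has labelled the message, the chatbot can route it to the relevant business process, just as it would for a conventionally trained intent. A minimal sketch, with hypothetical handler functions:

```python
# Sketch: route the LLM-classified intent to a business process.
# Handler names and responses are hypothetical.
def handle_feedback(message: str) -> str:
    # e.g. log the suggestion for the product team to review
    return f"Thanks, your feedback has been recorded: {message!r}"

def handle_gratitude(message: str) -> str:
    return "You're welcome!"

HANDLERS = {
    "Feedback": handle_feedback,
    "Gratitude": handle_gratitude,
}

def route(intent: str, message: str) -> str:
    handler = HANDLERS.get(intent)
    return handler(message) if handler else "Sorry, I didn't understand that."

print(route("Feedback", "It would be useful if I could view my pay slips"))
```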

In summary, a simple API call to an LLM can:

  • save many hours of chatbot training and testing
  • improve the user experience by understanding a far wider range of user intents without explicit training


This is the first article in a three-part series on large language models (LLMs). Stay tuned over the coming weeks for a look at how LLMs can greatly improve the success of “knowledge” bots, and at how they can be used to validate a chatbot’s responses.


Antony Heljula

Technology Director

