Whilst conversational applications might seem straightforward from the user's side, there's a whole load of technical jargon underpinning their operation. Much like when we jargon busted RPA, here's a collection of explainers for the terms we believe are the most important when trying to understand chatbots.
Conversational applications
Conversational applications are built on a combination of artificial intelligence (AI) and user experience (UX) principles. Together, these components engineer human-like conversations between users and devices.
What is Natural Language Processing?
Natural Language Processing (NLP) is how a device breaks down and discerns the meaning of spoken or written prose. NLP examines a user's utterances, extracting the intent and the entities.
By and large, conversational applications interact with a user through an exchange of messages. We call these messages "utterances". Each utterance carries one or more "intents". You can think of an intent as the purpose a set of words carries.
Let’s break this down further…
An utterance is anything a user says or types. If a user asked “what’s the weather forecast in London tomorrow?”, the whole sentence would be the utterance.
An intent is what the user wants to achieve with their utterance. If they're asking what the weather forecast will be in London tomorrow, the intent is to get the weather forecast, specifically for that day and that location.
Entities are the specific details within an utterance that qualify the intent. So, in the example above, the entities are 'tomorrow' and 'London': they pin the weather-forecast intent to a particular day and place.
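To make this concrete, here's a minimal sketch in Python of the kind of structured result an NLP engine returns for that utterance. The field names and confidence score are illustrative, not tied to any particular chatbot platform.

```python
# Illustrative output of an NLP engine for one utterance.
utterance = "What's the weather forecast in London tomorrow?"

nlu_result = {
    "utterance": utterance,
    "intent": {
        "name": "get_weather_forecast",  # the user's goal
        "confidence": 0.97,              # how sure the engine is
    },
    "entities": [
        {"type": "location", "value": "London"},  # where
        {"type": "date", "value": "tomorrow"},    # when
    ],
}

# A handler can then route on the intent and fill in the specifics
# from the entities:
if nlu_result["intent"]["name"] == "get_weather_forecast":
    location = next(e["value"] for e in nlu_result["entities"]
                    if e["type"] == "location")
    date = next(e["value"] for e in nlu_result["entities"]
                if e["type"] == "date")
    print(f"Looking up the forecast for {location}, {date}...")
```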
Broadcasting, channels and integrations
When a conversational interface broadcasts, it is sending a message proactively. The best example is when you land on a website and a chatbot asks, unprompted, how it can help. This is also known as a 'subscription message'. You can think of a broadcast as the equivalent of a push message in a mobile app.
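In code, a broadcast is simply a send that isn't triggered by an incoming utterance. The sketch below is hypothetical: send_message and on_visitor_arrives stand in for whatever hooks your chatbot platform actually exposes.

```python
# Minimal sketch of a broadcast: the bot speaks first, without waiting
# for a user utterance. "send_message" is a hypothetical stand-in for a
# real platform's send function.

def send_message(user_id: str, text: str) -> None:
    print(f"[to {user_id}] {text}")  # a real platform would push this to the channel

def on_visitor_arrives(user_id: str) -> None:
    # The proactive "subscription message", analogous to a mobile push.
    send_message(user_id, "Hi there! Anything I can help you with today?")

on_visitor_arrives("visitor-42")
```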
Channels denote the places you find chatbots, including widgets on websites, WhatsApp, Facebook Messenger, and Slack.
Integrations with third party software are what will make your chatbots even more useful. Whether that's with your customer relationship management (CRM) system, an appointment booking platform, or your payment system, integrations are what allow your customers to engage with you without needing to talk to a human. These integrations rest on application programming interfaces (APIs), which act as software intermediaries that let two systems talk to each other.
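As a hedged sketch of what that looks like in practice, here's a Python handler that fulfils a booking intent by calling a third-party appointment API. The URL, parameters and response shape are hypothetical; substitute whatever your booking platform documents.

```python
import requests

# Hypothetical endpoint for a third-party appointment booking platform.
BOOKING_API = "https://api.example-booking.com/v1/slots"

def handle_booking_intent(entities: dict) -> str:
    """Turn a 'book an appointment' intent into a reply for the user."""
    response = requests.get(
        BOOKING_API,
        params={"date": entities.get("date", "tomorrow")},
        timeout=5,
    )
    response.raise_for_status()
    slots = response.json().get("available_slots", [])
    if not slots:
        return "Sorry, there are no free slots that day."
    return f"I found {len(slots)} free slots. The earliest is {slots[0]}."
```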
What about digital assistants and smart speakers?
Digital assistants, such as Amazon's Alexa, Google Assistant and Apple's Siri, use the same NLP principles as chatbots, but they introduce a number of other components specifically to deal with voice.
Automatic Speech Recognition (ASR), also known as "Speech-to-Text" (STT), is the technology used to transcribe a spoken utterance into a written one. Once this is done, the intent and entities can be processed to retrieve the answer the user wants.
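As a rough illustration, the snippet below uses the open-source SpeechRecognition package for Python to transcribe a recorded utterance. The filename is a placeholder, and a production assistant would stream live audio rather than read a file.

```python
import speech_recognition as sr  # pip install SpeechRecognition

recognizer = sr.Recognizer()

# Load a recorded spoken utterance ("question.wav" is a placeholder).
with sr.AudioFile("question.wav") as source:
    audio = recognizer.record(source)  # read the whole file

# Send the audio to a speech-to-text backend; Google's free web API
# is this package's default demo backend.
text = recognizer.recognize_google(audio)
print("Transcribed utterance:", text)  # now ready for intent/entity extraction
```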
Text-to-Speech (TTS) is the final component of the process. It takes the written answer a chatbot would display and converts it into audio, which is then played by the user's smart speaker.
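To close the loop, here's a hedged sketch using the pyttsx3 library, which drives the operating system's built-in voices. The answer text is made up for the example.

```python
import pyttsx3  # offline text-to-speech library (pip install pyttsx3)

# The written answer the chatbot would otherwise display on screen.
answer = "Tomorrow in London it will be cloudy with a high of 14 degrees."

engine = pyttsx3.init()          # use the platform's default TTS voice
engine.setProperty("rate", 170)  # speaking speed in words per minute
engine.say(answer)               # queue the utterance
engine.runAndWait()              # speak it aloud, as a smart speaker would
```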
In a nutshell, that’s most of the chatbot vocabulary. If you’d like to learn more, we’d be more than happy to talk you through it.