Error handling: tips to make your chatbot more helpful, natural and persuasive

Sometimes things do not go your way. That is true in life, and it is also true when interacting with chatbots and voice assistants. Conversational AI often does not go the way we want it to, which leads to disappointing experiences and, ultimately, users rejecting the product.

Luckily, there are ways to design conversational experiences that make chatbots and voice assistants more helpful, natural and persuasive. An important element of designing better experiences and unlocking the potential of conversational AI is error handling.

Conversations are messy. We misunderstand each other, or our thoughts wander off and we miss the question we were asked. Humans have developed amazing techniques to deal with such situations: we ask for clarification, or we share more details to get the other person to engage with us.

If we translate this to conversation design, we can say that we have techniques for when there is a no-input and when there is a no-match. Let’s look at both error handling types and walk through some examples to explain these concepts.

No input

Sometimes you ask a question and the other person doesn't respond. Imagine walking into a hotel lobby and having the following conversation:


Robot: Hi, how can I help you?

(no response)

Robot: Hi, do you have a reservation or are you here for the restaurant?

(no response)

Robot: Do you want a hotel room, eat at our restaurant, or are you here to meet someone? Just let me know how I can help.

Human: I'd like to book a room.

Because we do not engage with the question we are being asked, the robot receptionist adds more information to the question in the next turn. It feeds us more detail to increase the chance that we engage with it. In conversation design, we call this escalation detail, or prompt verbosity.

In a perfect world, you want to write three error handling messages for a no-input at every node, each adding more information to the question to help your user engage with you.
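To make the idea concrete, here is a minimal sketch of escalation detail for no-input errors, using the hotel lobby prompts from the example above. The function name and list structure are illustrative, not a real framework API: each node simply carries its initial prompt plus increasingly verbose reprompts, and the bot walks down the list each time the user stays silent.

```python
# Escalation detail (prompt verbosity) for no-input errors: one initial
# prompt plus two more verbose reprompts, as in the hotel lobby example.
NO_INPUT_PROMPTS = [
    "Hi, how can I help you?",
    "Hi, do you have a reservation or are you here for the restaurant?",
    "Do you want a hotel room, eat at our restaurant, or are you here to "
    "meet someone? Just let me know how I can help.",
]


def next_prompt(attempt, prompts=NO_INPUT_PROMPTS):
    """Return the prompt for the given no-input attempt (0-based).

    Once every escalation message has been used, return None so the
    dialogue can end gracefully or hand off to a human.
    """
    if attempt < len(prompts):
        return prompts[attempt]
    return None  # out of reprompts
```

Keeping the reprompts in an ordered list per node makes it easy to see at a glance whether each message really adds information over the previous one.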

No match

There are also situations where the user says something that you simply don't understand. Maybe you understand the utterance in and of itself, but it’s very much out of context and therefore doesn't make sense in the conversation you are engaged in.

What do you do in such situations?

You ask for clarification, or you try to help the other person produce an utterance that is easier for you to grasp. Again, you add a little more information each time there is a no-match between what your user says and what you're trained to understand.

Let’s look at another hotel lobby example.


Robot: Hi, do you have a reservation?

Human: The weather is amazing today.

Robot: Can you please let me know if you have made a reservation for the restaurant?

Human: I just came back from shopping.

Robot: Our restaurant is fully booked. So if you want to eat with us today, you need to have a reservation. Do you have one?

Human: I have. It's under the name Smith, I believe.

Sure, this example is pushing it a little bit, but you get the point. We take a few shots at getting some cooperation from our users. If it works within three tries, that is brilliant. If it doesn't, then at least we know we've tried our best.
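The no-match pattern can be sketched the same way: count consecutive failures, escalate the reprompt, and stop after three tries. The intent matcher below is a deliberately naive stand-in for a real NLU component, and all names here are illustrative assumptions, not an existing library.

```python
# No-match handling: escalate the reprompt on each consecutive failure,
# and reset the counter as soon as the user says something we understand.
NO_MATCH_PROMPTS = [
    "Can you please let me know if you have made a reservation for the restaurant?",
    "Our restaurant is fully booked. So if you want to eat with us today, "
    "you need to have a reservation. Do you have one?",
    "Sorry, I didn't catch that. Let me connect you with a colleague.",
]


def match_intent(utterance):
    """Toy matcher: only recognises answers about a reservation."""
    text = utterance.lower()
    if "reservation" in text or text.startswith(("yes", "i have", "no")):
        return "reservation_answer"
    return None  # no-match


def handle_turn(utterance, no_match_count):
    """Return (intent, reply, new_no_match_count) for one dialogue turn."""
    intent = match_intent(utterance)
    if intent is not None:
        return intent, "Great, let me look that up.", 0
    # Escalate through the no-match prompts; the last one hands off.
    reply = NO_MATCH_PROMPTS[min(no_match_count, len(NO_MATCH_PROMPTS) - 1)]
    return None, reply, no_match_count + 1
```

The third message doubles as the graceful exit: rather than looping forever, the bot hands the user over after its best three attempts.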

What is important to understand, though, is that we can use error handling messages to keep users aligned with our conversations. Often, we give up too quickly.

Try harder

We ask a question, and if we don't get input we can understand, we often try again with a generic message. If that fails as well, that's usually the end of it. That is a mistake: you are giving up too soon on too many people. You wouldn't give up so quickly in a regular conversation.

So cut your users some slack and try a little harder. Just write three error handling messages for no-input and no-match for every prompt. You'll see more completed dialogues, and users will have a more natural experience. It might seem like a bit more work upfront, but it's definitely worth the effort.

Trust me. In bots we trust.