New Year’s Eve 2017 Chatbot

Oliver Cook, Business Analyst, Digital Team, Greater London Authority
This is a fuller version of his post on City Hall’s official website.

At City Hall, we are always looking at new ways to engage with Londoners to help them find the information they need or to let them know what’s happening in their city. It was in this spirit that the Digital Team under Natalie Taylor decided to explore the potential for a chatbot.

Our aim was simply to go where people are and to make it easier for them to contact us. We also wanted to engage younger users of our service.

The idea

We wanted to get a prototype up and running quickly so we could gather real feedback, instead of asking hypothetical questions. Bots aren’t that prevalent at the moment, so that could have been a confusing exercise!

Fortunately for us, our friends at Transport for London (TfL) had already built the TfL TravelBot, and kindly offered to help us build a prototype. With their help we built a working prototype in just one day that could respond to basic questions and signpost users to content on the London.gov.uk website.

We chose Facebook Messenger because it’s one of the most popular instant messaging services in the UK, especially with young people, and it has a really easy-to-use bot platform with built-in analytics. We used Dialogflow to catalogue all the bot responses and intents because that’s what TfL were using; it turned out to be really intuitive, and flexible enough to let us update responses in real time.
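To give a feel for what cataloguing intents involves, here is a toy sketch of the idea: each intent pairs a set of example phrases with a canned response, and incoming messages are matched to the closest intent. This is only an illustration with hypothetical intent names and phrases; real Dialogflow uses trained language models rather than the naive word-overlap matching shown here.

```python
# Toy illustration of an intent catalogue: each intent has example
# "training phrases" and a response. (Hypothetical names and content;
# not how Dialogflow matches internally.)
INTENTS = {
    "fireworks.tickets": {
        "phrases": ["can i buy a ticket", "where do i get tickets"],
        "response": "Ticket details are on the London.gov.uk event page.",
    },
    "fireworks.timings": {
        "phrases": ["what time does it start", "when do the fireworks begin"],
        "response": "The display starts at midnight on 31 December.",
    },
}

def match_intent(user_text: str) -> str:
    """Return the response whose example phrases best overlap the input."""
    words = set(user_text.lower().split())
    best_intent, best_score = None, 0
    for intent in INTENTS.values():
        score = max(len(words & set(p.split())) for p in intent["phrases"])
        if score > best_score:
            best_intent, best_score = intent, score
    if best_intent is None:
        # Fallback when nothing matches - signpost other contact channels.
        return "Sorry, I didn't understand. Try asking about tickets or timings."
    return best_intent["response"]
```

The useful property, which Dialogflow provided for real, is that the catalogue can be extended in place: adding a new phrasing to an intent immediately changes what the bot understands, with no redeployment.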

The experiment

Now we had our prototype. The next step was to get people using it. But to do that we needed to build the prototype around one clear topic — no running before we could walk!

Given that we were looking to use the bot to improve our customer service experience, we decided to build it around the FAQs for the London New Year’s Eve Fireworks 2017 event.

We chose this topic for four main reasons.

  1. It’s the biggest regular event for City Hall and generates the largest amount of contact to our Public Liaison Unit and the London.gov.uk website
  2. We had last year’s New Year’s Eve Fireworks FAQs to pull content from, and an idea of the kind of things people want to know. This also meant we had a framework for the type of conversation, which made it easier to put into Dialogflow
  3. We know that most people find FAQs too static and that they don’t provide a great user experience, which gave us a good opportunity to compare them with a bot
  4. We already had many different ways that people could contact us about the event, so if the bot couldn’t answer a question we could point the user somewhere else

This meant that the bot aligned well with the objectives we had set out in discovery: that it would be relevant, create a valuable service, and be testable by generating useful data.

Putting all the questions and responses into Dialogflow was the easiest, if most tedious, part of the journey. The next step was to get people using the bot so that we could see what questions they asked, and how they asked them.

As this was a very rough prototype we tested the bot internally with colleagues and asked them to break it — which they did, almost immediately. This first round of testing was really valuable and taught us a lot about how people might use this type of bot, particularly in the absence of a structured menu to guide their journey.

Initially their questions tended to be very open and in no particular order. They also went well beyond the information that the FAQs covered, which had been the basis for the answers the bot was giving. This was very useful in demonstrating the expectations people have when using a conversational format.

Over the next three to four weeks we kept testing the bot internally and added the language, words, and phrases that people used — many we hadn’t thought of or even considered. We tracked if there were any other FAQ requests to the bot that we may not have thought about and added them in based on the conversations.

Feeling pretty confident that we had a really robust prototype after some rigorous internal testing, we went for a public beta a few days out from the New Year’s Eve event and immediately saw that our confidence had been a little misplaced.

Despite opting for a very soft launch, take-up was much quicker than we had expected. People started using it as soon as it went live. This was great, but it also destroyed many of our assumptions about how people would use the bot and the questions they would ask.

What we learnt

It was obvious that users made assumptions about what questions the bot could answer, and how much information it could handle. Users were asking questions with enormous amounts of contextual information that it couldn’t understand.

We had also predicted that most people would want to know if they could buy a ticket, as it was the most common question we had previously received via email and telephone. This also proved incorrect as, for the first three days, the most asked question was how to transfer a ticket to another person. We hadn’t even prepared a response to this question before we went live. Interestingly, the main question being asked by users continued to change as we got closer to the event, showing that the usefulness of the bot changed over time.

This was all part of the steep learning curve and we continued updating the questions and responses as more and more people used the bot. We also realised, pretty quickly, that our welcome message wasn’t clear enough so we amended it to ensure users understood that it was an automated bot and they needed to ask it clear, simple questions on four key areas of the event.

[Screenshot: original welcome message]
[Screenshot: welcome message with the menu]

While we knew it was important to make it clear that users weren’t speaking to a human, we had been concerned that caveating it too much might put people off. This reflected our own fear that no one would use it, though this proved to be unfounded. Changing the welcome message saw a steady increase in usage and improved the response rate significantly. Combining it with a really simple menu helped structure the user journey, and the response rates continued to improve, with far more people experiencing a 100 per cent response rate.
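For readers curious how a menu like this is built, Messenger supports “quick replies”: tappable buttons attached to a message via its Send API. The sketch below shows the general shape of such a payload; the four menu labels and the welcome wording are hypothetical stand-ins, not our actual message.

```python
# Sketch of a Messenger Send API payload with a quick-reply menu.
# The quick_replies structure follows Facebook's Send API; the menu
# labels and wording here are invented for illustration.
def welcome_message(user_id: str) -> dict:
    return {
        "recipient": {"id": user_id},
        "message": {
            "text": (
                "Hi! I'm an automated bot for the NYE fireworks. "
                "Ask a short, simple question, or pick a topic:"
            ),
            "quick_replies": [
                {"content_type": "text", "title": title, "payload": payload}
                for title, payload in [
                    ("Tickets", "TICKETS"),
                    ("Travel", "TRAVEL"),
                    ("Timings", "TIMINGS"),
                    ("Accessibility", "ACCESS"),
                ]
            ],
        },
    }
```

Each button sends a fixed `payload` back to the bot, so menu taps always map cleanly to an intent — one reason a menu lifts the response rate compared with free-text questions.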

These learnings justified our decision to run a public beta as it meant we could continue to improve the bot over time and provide a much better customer service offering by the time it was being used the most — on the day of the event. We also sent a survey after each interaction to ask about the experience of using the bot and how people felt it compared with more traditional formats like web search, email, FAQs and other types of social media (eg Twitter).

Feedback

With around 350 people using the bot it didn’t exactly go viral, but it did beat our expectations and provided us with a sizeable user pool to gauge how valuable this type of bot could be. We also managed to hit our target age demographic and noticed that the gender split was far more even than on our other social media channels.

The survey results were promising, with nearly 10 per cent of users responding. Well over 50 per cent felt the bot had either partially or mostly answered their question, and a similar percentage felt that they were able to get their answer far quicker than through our other contact channels. It was also encouraging to see that the bot beat our website search and FAQs, and was users’ preferred service for contacting City Hall about the New Year’s Eve Fireworks event.

Our experience with the public beta, combined with feedback from users, has given us confidence that a chatbot is worth pursuing further as a way of improving our customer service offer and, by extension, our engagement with Londoners.

The next step is more user research to find out what type of bot people would like us to have and how they would like to use it. The journey continues.
