How many chatbots are actually successful?

Lore Simons
10 min read · Apr 2, 2021

Well, I am still looking for numbers, so if anyone has credible resources, do share! So far, I haven’t found any, at least none that really answer the question. So in this post, let me lay it out for you.

We’ll discuss:

  • The definition of ‘success’
  • The most forgotten component in Conversational AI: the user themselves
  • You and your project team, jumping in head first
  • Conversation-Driven Development
  • Your chance to develop a chatbot that sticks!

Define success

The info I am looking for is objective data that provides concrete evidence about the ROI. Not the testimonials or ‘demo cases’ posted on companies’ websites, claiming they developed a great chatbot customized to the client’s case in no time. OK, they might be good for covering the main best practices about chatbots (like the ones you find on Google when searching for ‘how to make a good chatbot’), but they’re very theoretical. You never hear about the cases where it didn’t all go as planned (because it won’t), the total time spent (which is probably double what was estimated) and hence the final project cost, and especially the effect or impact the chatbot had. Typically, only the successful cases are discussed (which is so unfortunate), with success often referring to the ‘success of implementing technology’. And yes, the technology to create a chatbot has seen some great progress.

From chatbots to Conversational AI

More and more chatbots are actually using some form of AI. More than a year and a half ago, I published a blog post about chatbots in which I clearly crossed out the AI part when discussing the state of the technology. Reading it now, I almost feel silly, but at the same time it offers perspective and shows the evolution the technology has made. Heck, it even made me stop liking the word “chatbots”; I prefer the term Conversational AI now, because actually… we are getting there (or let’s say, somewhere): thanks to the enormous progress in the field of NLP (Transformers, pre-trained models…), more and more frameworks are actually leveraging AI to build conversational assistants.

Frameworks have made it to “level 3”: building contextual assistants (check out this blog if you would like more info about the different levels in Conversational AI). Rasa is on top of its game and, if you ask me, by far the most convincing framework.

We went from extremely closed scenarios, with only buttons to click or a fixed, predesigned flow, to a scenario in which the topic is probably still fixed, but the way a user gets their answer (being, the flow) is fully up to them. And who knows what’s coming next. The GPT-3 model released by OpenAI, which is very impressive from a technical point of view, looks very promising and has raised expectations for the coming years. Next to this, low-code and no-code alternatives for building a chatbot have started popping up everywhere, too many to keep track of. All of these focus on enabling customers to create a chatbot without the need for technical skills. But the development frameworks that allow you to build a chatbot from scratch also offer more “built-in skills” out of the box (i.e. connections with other existing APIs). Conversation Design is getting more of the attention it actually deserves. All of this combined makes Conversational AI more natural and improves the chance of a successful implementation.

It all makes me excited and disappointed at the same time. Because despite the many advancements in Conversational AI, and the field having been around for roughly five years now, we still make the same mistakes. You still read about the same old ‘best practices’ that were already best practices two years ago. Today, it isn’t about ‘being able to (technically) create and implement chatbots’. When it comes to Conversational AI, or actually any form of AI, it is about finding a use case where the technology actually adds value to the end user’s experience. The question shouldn’t be “can we build it?” but “should we build it?” or “is it useful to develop this kind of solution?”

“It is your end user who will make or break your chatbot.”

So what I mean when I ask “How many chatbots are actually successful?” is actually “How many chatbots resulted in added value for their users?”. And the only ones able to answer that question are your end users themselves. You know, the ones you are building the chatbot for. Yet we don’t go into conversation with our end users when developing a chatbot. Pretty odd, considering we are building ‘Conversational AI’.

The problem

It all starts with the actual problem. If there is any..! I am still amazed by how often companies start from the technology without knowing what problem they want to solve. I call this “AI for prestige”, or in this case using Conversational AI just to… use Conversational AI. Because I can tell you right now, it will result in a demo case at the next board meeting at best, justifying Marketing proclaiming the company has “successfully implemented Conversational AI”, and then, after a month or two, it will end up in the closet, on the pile of “previous innovation projects”. Again, it’s not about the technology. It’s about making it stick!

But OK, let’s assume the decision makers actually identified a clear problem or need that could potentially be solved by Conversational AI. The next question you should find an answer to is: does this problem also exist for your end users? Double-checking does no harm! And while you’re at it, you should also check whether it is possible to chat about it, because that’s pretty much a prerequisite when it comes to Conversational AI. And last but not least: do they want to ‘chat’ about it (with a chatbot)? Especially because ‘improving customer experience’ is one of the classic benefits predicted by the business, you had better make sure it actually does. Chatbots have burned some bridges in the past, don’t forget about this!

Jumping in, head first!

But it’s not all your responsibility. I am still waiting for a solution provider that challenges the project team and the decision makers to find out whether working together is actually worth both parties’ time. And by this I mean: questioning the actual need to build Conversational AI. Teaching them about Conversational AI: the technology, the labelled data it needs, the unpredictable human factor when it comes to automating conversations… Giving a super enthusiastic project team a heads-up and getting them to realize: building successful Conversational AI is just. hard. work.

Unfortunately, this is how it typically goes: most solution providers will take you through a Discovery workshop where you try to define the scope, give your Conversational AI some personality and design the necessary conversation text and flows. This info gets summarized and the development team kicks off, starting a… “never-ending journey”. OK, that sounds a bit too harsh, but actually… it is! A Conversational AI project is a story that never ends (unless you actively stop it). And I at least hope solution providers have gotten you up to speed with this fact. But first things first! Let me break down for you what’s lacking in today’s approach.

Let the experts talk

The Discovery workshop is a good thing! And ideally, “understanding Conversational AI/ML” is the first thing that gets touched on in the workshop. This is definitely the responsibility of the Conversational AI expert or solution provider. And if it all sounds too much like sunshine and flowers, it should ring some alarm bells, triggering you to ask some simple questions like 🕵: What is the success rate? How many chatbots are in production today? Can you show me one (a live demo)? How long did it take to get them into production? Are they being used? How many conversations ended up actually helping the end user? What happened with the users who didn’t find help? It will be beneficial for both parties to get straight answers to these questions, so the full team is up to speed with the “dos and don’ts” of Conversational AI.

Other experts that should be in the room, but are too often forgotten, are the Marketing/Sales and Customer Support representatives! Wouldn’t it be a pity to develop a great solution but put it on a page that never gets any visits? And why would you not involve the people who will be on the front line when your Conversational AI solution fails? Because it will. Conversational AI today is not strong enough to automate your full first-line Customer Support service. Some users will probably even call a support agent the second they know the chat service has been replaced by a chatbot (as mentioned before, something about chatbots and burned bridges)! Customer Support agents not only have the knowledge to create and maintain your chatbot’s Knowledge Base (read: its ‘brain’ 🧠); they also know your end users best, as they come into close contact with them every single day.

Conversation-Driven Development, from day 0!

Scope definition, conversational design and personality: check? Ready to develop, you think? Well, hold your horses! Please 🙏🏼, I hope you went further than ‘Google Analytics’ or ‘top support issues’ to define the scope of your chatbot.

It’s like giving a speeding ticket to a car based on its location, instead of looking at its actual speed. Meaning: totally beside the point and based on assumptions!

In an ideal world, you have actual chat conversations to build your case on. If not, go and get some! It’s the approach that works best in my experience and, unfortunately, until now there is only one Conversational AI framework that’s on the same path as me: Rasa. They refer to this approach as Conversation-Driven Development (CDD), and honestly, I couldn’t think of a better name for it!

Shift towards Conversation-Driven Development and get your Conversational AI out there, asap! Something we take very seriously at Sticky Lemon.

When designing the conversational text and flow from scratch (or based on relevant data), you are making lots of assumptions. You will constantly be “pretending to think like a user” and “predicting what they will ask and how”. You can consider that very creative, but actually you are… trying to predict the unpredictable! Go into conversation with your end users and collect data and facts to base your decisions on. Putting up a survey isn’t sufficient. The best way to get valuable feedback is by confronting them with the actual idea or solution in the circumstances you assume they will use it in. Don’t create an IT support bot and test it with relaxed volunteers: when someone actually needs IT support, it is usually urgent, so the user’s emotional state will be totally different, and the feedback you get from volunteers won’t match what happens once your bot is in production. You should drive your development with actual conversations, in circumstances as close as possible to production.
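To make this less abstract: if you already have historical conversations (from a live chat, a support inbox, call notes…), even a tiny script can show you what users actually ask about before you design a single flow. Below is a minimal sketch in Python; the file name and column names (support_chats.csv, conversation_id, sender, text) are assumptions for the sake of the example, not a standard format.

```python
import csv
import re
from collections import Counter

# Hypothetical export of real support conversations: one row per message,
# with columns "conversation_id", "sender" and "text" (assumed names).
TRANSCRIPTS = "support_chats.csv"

STOPWORDS = {"the", "a", "an", "i", "my", "to", "is", "it", "and", "you",
             "can", "how", "do", "for", "of", "in", "on", "with", "please"}

def tokens(text: str):
    """Lowercase a message and keep only simple word tokens."""
    return [t for t in re.findall(r"[a-z']+", text.lower()) if t not in STOPWORDS]

def main():
    first_questions = []    # the first thing each user says in a conversation
    vocabulary = Counter()  # overall word frequencies across user messages

    with open(TRANSCRIPTS, newline="", encoding="utf-8") as f:
        current_conversation = None
        for row in csv.DictReader(f):
            if row["sender"] != "user":
                continue
            vocabulary.update(tokens(row["text"]))
            # Keep only the opening message per conversation: it usually
            # states the actual problem the user wants solved.
            if row["conversation_id"] != current_conversation:
                current_conversation = row["conversation_id"]
                first_questions.append(row["text"].strip())

    print("Most common words in user messages:")
    for word, count in vocabulary.most_common(15):
        print(f"  {word:15s} {count}")

    print(f"\nSample of real opening questions ({min(10, len(first_questions))} of {len(first_questions)}):")
    for question in first_questions[:10]:
        print("  -", question)

if __name__ == "__main__":
    main()
```

Run something like this on a real export and you will almost certainly find topics and phrasings you would never have come up with in a design workshop: exactly the assumptions CDD is meant to replace with facts.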

Conversational AI — ‘A story that never ends’

It would make a great headline, no? Developing a chatbot that fits business expectations and getting it into production is one thing (although the rise of low-code and no-code tools definitely helped), but the maintenance that goes hand in hand with Conversational AI is highly underestimated. Recently I talked to a company that told me how they tried creating a chatbot to inform their members about COVID and the government’s rules and decisions. Getting it up and running was not that hard, but the decisions changed every week (in our country at least 😊) and it was impossible to adapt the bot’s content at the same speed. Something the project team definitely didn’t think about, leading them to end the project after three months with the following conclusion: “it would have been more efficient if we had offered a live-chat service on our website”… If you created a Conversational AI solution that is actually getting used, you had better think of structured processes to keep it that way. Appoint a responsible ‘Bot Trainer’ and make sure their life does not become a living hell!
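One concrete habit that makes the Bot Trainer’s life easier: keep fast-changing answer content (like those weekly COVID rules) out of the model itself and in a separate, versioned content file, and automatically flag answers that are overdue for review. Here is a minimal sketch, assuming a simple JSON file with a last_reviewed date per answer and a weekly review cadence; both the format and the cadence are my own assumptions, not something any framework prescribes.

```python
import json
from datetime import date, timedelta

# Hypothetical content file owned by the Bot Trainer: each answer records
# when it was last reviewed, so fast-changing topics can be flagged
# before they go stale.
CONTENT_FILE = "bot_answers.json"
REVIEW_EVERY = timedelta(days=7)  # assumption: weekly review cadence

def stale_answers(path: str = CONTENT_FILE):
    """Return the ids of answers whose last review is older than REVIEW_EVERY."""
    with open(path, encoding="utf-8") as f:
        # e.g. [{"id": "current_rules", "text": "...", "last_reviewed": "2021-03-01"}]
        answers = json.load(f)

    today = date.today()
    overdue = []
    for answer in answers:
        last_reviewed = date.fromisoformat(answer["last_reviewed"])
        if today - last_reviewed > REVIEW_EVERY:
            overdue.append(answer["id"])
    return overdue

if __name__ == "__main__":
    for answer_id in stale_answers():
        print(f"Needs review: {answer_id}")
```

Hooked into a weekly reminder or a CI job, something like this gives the Bot Trainer a short, concrete to-do list instead of a vague obligation to ‘keep the bot up to date’.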

Conclusion

Conversational AI. It’s no longer a question of whether you can technically build it. It’s a matter of finding a legitimate use case that actually brings added value to the table, and does so in the most efficient way. You should realize by now that when you are thinking about Conversational AI, you are making a long list of assumptions.

What if I told you right now that there is a way to clear out all these assumptions, without even writing one line of code? Not only that, you’ll get immediate feedback from the ones who will determine your success, as well as a chance to check whether your Conversational AI is well designed and sufficiently trained. In other words: I am offering you a door to peek into the future and see for yourself how your Conversational AI solution will be used and, most importantly, perceived by your end users. Whether your formulation is clear to them, the flow feels natural and your AI model is sufficient, and if it isn’t, you can capture data about what should be improved. A structured framework that will not only prevent you from burning money on something that will never work, but also keep developers motivated and increase the chance of success (in my definition of the word). If you ask me… an offer you can’t refuse!

Sticky Lemon

www.sticky-lemon.com

At Sticky Lemon, we take Conversation-Driven Development very seriously! Keep an eye on our website, as we will soon launch Sticky Chat, our Conversational AI training tool, focused on continuously improving your chatbot in production.

In case you are curious → go and check out our website, follow us on LinkedIn or contact us for more information at hello@sticky-lemon.com
