Federico Evrard’s interview: Stikets success story
How does the Stikets chatbot work?
The Stikets chatbot is prepared to answer any question related to the pre-sales and post-sales processes of all Stikets products. In post-sales, for example: return policies, the cancellation policy, warranties, products, everything a user might ask after the purchase has been made. It also covers pre-sales questions: how much shipping will cost, and so on.
Basically, it is designed so that it can answer all kinds of questions that users may ask.
How was this chatbot configured?
Configuring GPT with Oct8ne is quite easy. First of all, there was a process of gathering all the information: the possible answers to questions about pre-sales, post-sales, the products, etc.
All of those answers have to be collected and condensed.
Once we have this information, it is basically a matter of working on the bot. In the case of Stikets, if you try it, you will see that it is 100% AI. By that we mean that 100% of users will talk to artificial intelligence. We also have other chatbots that are not 100% artificial intelligence, where AI is applied only to small parts.
In the case of Stikets, I insist, it is completely AI. All conversations go to artificial intelligence, and depending on this, you may or may not need extra configuration in Oct8ne. If you work only with artificial intelligence, the advantage is that all the configuration is done based on ChatGPT's language understanding, and the configuration, as I was saying, is quite simple.
There is an issue related to ChatGPT that needs to be controlled: ChatGPT sometimes invents things, which OpenAI calls hallucinations. For us, it has always been fundamental, from the beginning, that this never happens in our bots. The chatbot will never be wrong, never confirm something that is not true. How do we do that? Through configuration processes.
On the customer side, what did it take to do the configuration?
Actually, the configuration is straightforward, technically speaking. You don't need anyone with development knowledge, just a person with basic knowledge of prompts, a lot of willingness to experiment, and enthusiasm.
So, how did we do all this? As I said, first we asked Stikets for all the information, and together with them -because it has been a process we have done together- we have been refining that information and writing prompts, which in the end is nothing more than giving information and guidelines to GPT. Basically, it is telling the bot how it has to act, how it has to attend customers, what tone of communication to use, whether or not to use emoticons, and which steps require the intervention of an agent.
We have talked about many things, but the pillars of a GPT configuration are behavioral patterns -who you are, what you do, how you do it- and there we include that you are the Stikets chatbot, that you attend customers in a friendly tone, and that if this or that happens, you transfer the chat to an agent.
To give you an idea, if it detects that a user wants to cancel an order, that’s a process that in Stikets is done with the intervention of a person, then, we use AI to tell us “Hey, Oct8ne, you need to pass this conversation to a person.” It’s not that we always set up the AI to respond or not, it’s that we set it up to respond and, if necessary, to refer to an agent.
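The pillars described above -identity, tone, and the conditions that trigger a human handover- can be sketched as a simple prompt-assembly routine. This is a minimal illustration, not Stikets' actual configuration: the prompt wording, function names, and trigger list are all hypothetical.

```python
# Illustrative sketch of assembling a system prompt from the "pillars"
# of a GPT chatbot configuration: who the bot is, how it attends
# customers, and when it must hand over to a human agent.
# All wording here is hypothetical, not the real Stikets prompt.

HANDOVER_TRIGGERS = [
    "the user wants to cancel an order",
    "the user reports a problem the bot cannot solve",
]

def build_system_prompt(brand: str, tone: str, triggers: list[str]) -> str:
    """Assemble a system prompt from behavioral patterns."""
    rules = "\n".join(
        f"- If {t}, transfer the chat to a human agent." for t in triggers
    )
    return (
        f"You are the {brand} chatbot.\n"
        f"Attend customers in a {tone} tone.\n"
        f"Answer pre-sales and post-sales questions using only the "
        f"information you have been given; never invent an answer.\n"
        f"{rules}"
    )

prompt = build_system_prompt("Stikets", "friendly", HANDOVER_TRIGGERS)
print(prompt)
```

In a real deployment this string would be sent as the system message of a chat-completion request; here it only shows how the behavioral pillars map onto prompt text.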
What is the main benefit that this new technology brought to the Stikets chatbot?
Before working with GPT and with us, Stikets used another solution whose artificial intelligence model was not generative. Those were older artificial intelligence models, so I do not know what their bot was like before or the results it gave, but from the feedback they gave me about the configuration, the current one is much more effective.
The previous model works with a series of intents and answers that are really templates. For those of you who do not know these models: you create a series of intents, a grid with a lot of questions, and for each category of question you create a specific answer, a template that is always the same. Basically, you feed the model the same question phrased in twenty different ways and associate those twenty questions with one answer, constantly feeding it variations of each type of question. You are essentially saying, "this category of question equals this answer." The difference OpenAI gives you is that the answer is never the same.
With the previous generation of artificial intelligence, the answers are templates. OpenAI personalizes, and the answer is never exactly the same.
A funny thing about OpenAI is that if you ask a question now and ask it again in two minutes, it may give you a different answer. It will be the same answer in essence, but phrased differently: the sentence will be constructed in another way.
Any conversion data or any interesting data to share?
In terms of conversion, I think Alessio Scotto mentioned something. But rather than conversion -which we all like to know, although it depends a lot on the vertical, the brand, the moment we analyze it, and many other things- I would rather analyze something that surprised me a lot: the level of automation. This is a KPI we like a lot at Oct8ne: what percentage of conversations the bot is absorbing, and what percentage has required the intervention of a person.
In the case of Stikets, globally, without segmenting by type of conversation, the bot right now has an automation level of 70%, actually between 70% and 75%. What does that mean? Out of 100 conversations, 70 to 75 are self-managed and don't require anybody's intervention, because nobody has needed any more help.
Now, if we analyze this and focus on product questions: GPT, as I said, is trained to answer questions about all Stikets products.
We have detected that only 7% require the intervention of an agent.
We are talking about 93% of product conversations being automated, and the funny thing is that you would be hard pressed to find a person who knows the products better than the Stikets bot, because the training it has been given is really excellent.
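The automation KPI described above is simple arithmetic: the share of conversations resolved without handing over to a person. The figures below are the ones quoted in the interview; the function name is ours.

```python
# The automation KPI: percentage of conversations the bot self-manages
# without requiring the intervention of a human agent.

def automation_rate(total: int, handed_to_agent: int) -> float:
    """Share of conversations resolved entirely by the bot, in percent."""
    return 100 * (total - handed_to_agent) / total

# Globally: out of 100 conversations, roughly 30 reach an agent.
print(automation_rate(100, 30))  # 70.0
# Product questions: only 7% require an agent, i.e. 93% automated.
print(automation_rate(100, 7))   # 93.0
```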
And working backward, that 70% global absorption means the other topics are not absorbing as much. Why? Because Stikets wants certain situations and certain users always to go to an agent: users who are dissatisfied, or users experiencing a problem with their order that the bot can't solve. The bot can't, for example, contact the carrier and ask them to reschedule a delivery.
The bot helps us detect all those users and know that they need a process, so saying the bot has not been effective there would not be fair; what is remarkable is that the bot helps right up to the point where we want it to stop. In fact, to add a small and very curious nuance: it was a lot of fun to work on the transfer to agent -when the bot hands over the conversation- and GPT gives a lot of play there.
What we did, and I invite you to try it, is that if I say I want to talk to an agent, the bot will first ask what I need, what doubt I have, and until I tell it my doubt the bot will not pass me on. What's more, GPT will try to solve it if it can: it will analyze my doubt and try to answer me, even after I have said I want to talk to an agent, to slow down the handover and give the user a much faster response. This is configured in a very simple way: inside Oct8ne you tell GPT, "if you detect that the user wants to talk to an agent, first ask what their question is. Once you know their question, try to solve it with the information you have. If you don't have the information to answer, refer them to an agent." In other words, you can shape the bot however you want.
In summary, the Stikets bot answers almost 100% of chatbot queries; the ones it does not answer are those Stikets has configured to be referred to an agent.
How do you see the world of chatbots with the emergence of artificial intelligence technology?
I think the change is evident. It is today, not tomorrow. It is true that we will gradually see their use increase. Chatbots with generative intelligence clearly change a lot of things. In fact, it is not only generative intelligence: thanks to Oct8ne, it is also possible to integrate the AI with an order database, for example. We have not talked about this, but we have done it at Stikets as well: if you want to know the status of your order, we pass the order variables to the AI, and the AI will tell you the status of your purchase, for example: "it turns out the carrier came by and you were not at home".
How does the AI answer this? Because it knows all the delivery policies. It knows perfectly well what Stikets' delivery policy is: there are three delivery attempts, and if the first one is unsuccessful, a second attempt is made, and so on up to the third. It will also answer questions about a client, and if you want it to store variables, it will store them. If I pass the name variables to GPT, I tell it "the client is called variable NAME and their email is variable EMAIL", and so on with whatever variables we like. This helps personalize the experience even more.
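The variable-passing idea above can be sketched as prompt-context interpolation: customer and order fields are substituted into a text block that is handed to the model alongside the delivery policy. The field names, template text, and sample data below are all illustrative assumptions, not Stikets' actual integration.

```python
# Hedged sketch of passing customer and order variables into the AI's
# context, as described for the order-status feature. Field names and
# the policy wording are illustrative.

ORDER_CONTEXT_TEMPLATE = (
    "The client is called {name} and their email is {email}. "
    "Order {order_id} status: {status}. "
    "Delivery policy: there are three delivery attempts; if one is "
    "unsuccessful, another is made, up to the third."
)

def build_order_context(name: str, email: str,
                        order_id: str, status: str) -> str:
    """Interpolate order variables into the context given to the model."""
    return ORDER_CONTEXT_TEMPLATE.format(
        name=name, email=email, order_id=order_id, status=status
    )

ctx = build_order_context(
    "Ana", "ana@example.com", "A123",
    "carrier attempted delivery, nobody was home",
)
print(ctx)
```

With this context in the prompt, the model can both report the order status and explain what happens next under the delivery policy, which is what personalizes the experience.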
All this to say that it seems obvious to me that the future of chatbots goes in that direction. Compared with what we had before, it changes a lot in the user experience, especially in the capabilities and the spectrum of conversations you can manage with artificial intelligence, and above all in the ease of configuring all of this.
It is true that artificial intelligence has generated a lot of hype, and now it is difficult to imagine a chatbot without AI, but we saw this before artificial intelligence: there are times when it is better to guide the user and give them options, because otherwise we give them too much freedom. There are processes where options are always worthwhile, and this links back to what I was saying before: you can have a 100% AI bot or a mixed bot. You can have a bot with artificial intelligence that, depending on the query, shows you a tree of options. There are many cases where it is better to guide the user with options, for example a lead generation process or a very operational issue.
It all depends on the client’s needs. You have to take into account what value it is going to generate and make a decision in that sense.
Alessio tells us: “Every day we monitor the answers it gives, how it's interacting with customers, and so on. We read the questions and answers and, depending on what we read, we make changes to the prompt, because sometimes customers ask questions that we hadn't thought of.
But there are very funny cases in which the client hyper-personalizes the query and says, for example, ‘Hi, my daughter Valeria starts school tomorrow and I need stickers for her clothes, but I have no clue, can you recommend something?’ and the bot answers ‘I'm so excited that your daughter starts school!’. These phrases are not in the prompt, and these cases fascinate me a lot”.