Intelligent send time: how Netcore Cloud developed the Send Time Optimization feature
Written by
Tejas Pitkar

Published: September 17, 2021

This is the second post in a series on Netcore’s latest send time optimization model. Read part 1 to learn what STO is and how it can impact your email marketing results.

Wouldn’t it be great if everything in life happened perfectly on time?

For instance, I would like to go to the supermarket when it’s less crowded. I can avoid the long queues and aisle rush, shop quickly, and make the trip productive. That would be the perfect time.

Sending marketing emails is similar!

Marketers are always looking to send emails at a time that’s perfect for customers to read them.

Is it daytime or nighttime? Weekdays or weekends?

Send Time Optimization (STO) holds the answers to such questions and can help increase customer engagement with your emails.

It takes the guesswork out of knowing the perfect time to send emails to each customer.

This post provides insights into how Netcore’s data science and product engineers developed the STO technology to benefit global brands and millions of customers.

How our Product team developed the customized Send Time algorithm

Since the origin of the email channel, the question of what is the best time to send emails has plagued marketers. According to Statista, 66% of marketers believe that artificial intelligence should be used to optimize email send times.

Send time optimization has so far been a challenging technology for ESPs to automate, but not for a comprehensive customer engagement and experience platform such as Netcore Cloud.

Our product team solved this problem with an AI-powered solution that predicts the right time to connect with every customer, every time.

Two types of approaches were initially used:

First was a ‘supervised classification framework’ – an AI/ML-based algorithm that predicted the right time to target a customer based on subject lines and customer interactions.

Supervised Classification framework

In this method, a 24-hour window was broken down into 8 slots of 3 hours each, and each customer’s opens and clicks in each slot were captured as model variables. Textual features from the subject line were used as additional variables, and an ‘XGBoost’ (eXtreme gradient boosting) model was trained by our AI engine, ‘Raman.’
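To make the slot encoding concrete, here is a minimal sketch of mapping engagement hours into the 8 three-hour buckets used as model variables. The function names are illustrative, not Netcore’s actual code:

```python
from collections import Counter

def hour_to_slot(hour: int) -> int:
    """Map an hour of day (0-23) to one of 8 three-hour slots (0-7)."""
    return hour // 3

def slot_features(open_hours):
    """Build an 8-element feature vector of open counts per slot."""
    counts = Counter(hour_to_slot(h) for h in open_hours)
    return [counts.get(slot, 0) for slot in range(8)]

# A user who opens mostly in the evening (18:00-20:59 falls in slot 6)
features = slot_features([18, 19, 20, 20, 9, 23])
print(features)  # [0, 0, 0, 1, 0, 0, 4, 1]
```

Vectors like this, one per user, would then be fed to the classifier alongside the subject-line features.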

Now you must be wondering: what is XGBoost, and why did our product team choose it in particular?

XGBoost is a tree-based boosting model that is fast and, on tabular data like ours, achieves accuracy comparable to neural networks. Our product team had earlier experimented with other algorithms such as random forest and logistic regression, but XGBoost came out the clear winner in memory efficiency, speed, and performance.

For each customer, the model predicted the time slot with the highest probability of an open or other engagement, given a test subject line.

Ideally, we would want each user to prefer one of the eight time slots with a markedly higher probability. Unfortunately, this was not observed: all eight slots had closely similar predicted probabilities, which made it difficult to decide on a unique time slot for each customer.

The team realized that the model was not working well with this prediction strategy.

The mediocre performance of the classification model

Beta-testing results: STO classification model

After beta-testing with 5 senders, as seen above, we observed that only 3 of them were able to improve their open rates, and the improvement was just around 8.5% compared to the legacy rate.

Eventually, the team discarded this method as it was complex, time-consuming, and did not yield great results.

The second approach was ‘unsupervised clustering’ to segment customers based on customer engagement and interaction data.

Unsupervised Clustering framework

In this approach, a 24-hour window was broken down into 8 slots of 3 hours each, and customer opens in each slot were captured as model variables. Using the mini-batch ‘K-Means’ method, users got clustered into appropriate groups that were determined statistically.

K-means is an unsupervised ML algorithm to group similar customers based on open/click behavior. Mini-batch K-means is a more efficient and faster implementation of the same.

Mini-batch K-means is a variant of the K-means algorithm that works on small random batches of fixed size that fit in memory. Each iteration draws a new batch from the dataset and uses it to update the clusters, and the process repeats until convergence. Our product team chose this machine-learning algorithm to save computational time.
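A minimal sketch of this clustering step using scikit-learn’s MiniBatchKMeans on synthetic data. Two synthetic behavior groups and two clusters are used here for clarity; in the real system the cluster count was determined statistically, and the features came from actual engagement data:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
# Synthetic per-user open counts across the 8 three-hour slots
users = rng.poisson(lam=1.0, size=(500, 8))
users[:250, 6] += 5   # these users lean toward slot 6 (6-9 pm)
users[250:, 2] += 5   # these users lean toward slot 2 (6-9 am)

km = MiniBatchKMeans(n_clusters=2, batch_size=64, random_state=0).fit(users)

# For each cluster, the 'mode' slot: the most common top slot among its users
for c in range(km.n_clusters):
    members = users[km.labels_ == c]
    mode_slot = np.bincount(members.argmax(axis=1), minlength=8).argmax()
    print(f"cluster {c}: preferred slot {mode_slot}")
```

Every user in a cluster is then assigned that cluster’s mode slot as their preferred send time.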

For each cluster, the ‘mode’ time slot was computed and assigned as the preferred slot for users in that cluster. When we tested this model with 5 clients, the open rate lift was positive for 4 of them, the highest being 11%.

Beta-testing results: STO clustering model

For version 1.0 of our STO feature, our data science and product teams chose this method, as it outperformed the classification method.

But soon, our product engineers realized that the clustering model too required improvement.

The need for clustering model 2.0 came earlier than expected

One of the major problems was that the clustering framework did not necessarily produce 8 different clusters for the 8 different time slots. Users with an affinity for certain time slots were being clubbed with users showing affinity for more popular time slots.

For instance, 6-9 pm is a relatively popular slot for checking emails, when people are commuting back home from work or are already at home. Since this slot had a significantly large number of users, the smaller group of late checkers, active between 9 pm and 12 am, was clubbed into the same cluster.

When the cluster mode was calculated, this smaller 9 pm-12 am group of users got assigned to the larger group’s preferred 6-9 pm slot.

So our version 1.0 model, though successful, needed significant improvements.

Our product geeks got busy working on the next version to make it more accurate at predicting the right delivery time.

Multi-armed bandit model

The term ‘Multi-armed bandit’ comes from a hypothetical experiment where a person must choose between multiple actions, each with an unknown outcome. The goal is to determine the most profitable outcome through a series of choices.

What started as an experiment to improve the previous version turned into a full-blown major project. Little did our product engineers know that it would become the current model (version 2.0) of STO!

Multi-armed bandits (MAB) are a class of AI models that help select the right decay model at a client level for the best STO performance. The MAB layer continuously monitors the performance achieved by multiple decay models and picks the one that works best for the client.

Now you must be wondering, what’s a decay method?

The MAB model gives more weightage to recent user interactions when making predictions.
For instance, if a user has opened or clicked a few emails historically, the MAB model will learn from the recent opens and clicks and decide on a preferred slot. The further back in time an interaction is, the less relevant it is considered. This is the decay method used for computing the preferred time slot for the user.

Enabling STO for a client requires at least 90 days of historical data for the learning process. Each of the client’s users is included in the prediction process only once their opens cross a certain threshold in the hour buckets. The final prediction for each user is made separately for weekdays and weekends.

The multi-armed bandit model helps select the right decay method to find the preferred slot for any user. The other 2 models are based on no decay and on the sent/open (click) rate over the historical period, according to propensity.

Perfecting the send time using the multi-armed bandit model

When the initial STO enablement is underway for a brand, the preferred time to reach each user is available from all 8 models, so we have 8 different preferred time slots.

At the time of enablement, the predictions from the 8 models are validated against the last 15 days of historical data; this is the out-of-sample data. The model with the highest accuracy is used for the final prediction for the week.

Every week, the models are evaluated on out-of-sample data, and their performance is stored. The model with the highest accuracy is used to predict the delivery time for the upcoming week.

This process of storing out-of-sample accuracies and calculating the send time continues for the next 5 weeks. After that, the multi-armed bandit layer selects the winning model every 5 weeks. If there’s no clear winner among the models, the process continues until we get one.
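The weekly selection step can be sketched as a simple comparison of stored out-of-sample accuracies. The function, model names, and the 0.02 “clear winner” margin are illustrative assumptions, not Netcore’s internals:

```python
def pick_model(weekly_accuracy, min_lead=0.02):
    """weekly_accuracy: {model_name: [accuracy per week]} out-of-sample scores.
    Returns the winning model, or None if no model clearly leads yet."""
    avg = {m: sum(a) / len(a) for m, a in weekly_accuracy.items()}
    ranked = sorted(avg.items(), key=lambda kv: kv[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if best[1] - runner_up[1] >= min_lead:
        return best[0]
    return None  # keep evaluating until one model clearly leads

history = {
    "decay_fast": [0.61, 0.63, 0.62],
    "decay_slow": [0.55, 0.57, 0.54],
    "no_decay":   [0.50, 0.52, 0.51],
}
print(pick_model(history))  # decay_fast
```

Returning None corresponds to the “no clear winner” case, where the evaluation loop simply continues into the next cycle.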

Beta-testing results: STO multi-armed bandit model (version 2.0)

As seen above, an open rate uplift of more than 20% was observed with version 2.0 during beta-testing with our clients. That prompted us to go live with the model and let the rest of our customers use it.

STO reports on the Customer Engagement platform of Netcore Cloud.

Within weeks we were celebrating our first STO success story.

Nykaa Fashion, a popular e-commerce fashion brand in India, used our STO feature to learn the best time of day to maximize customer engagement. Netcore sent their campaigns at optimal times, and they observed a stunning 36% increase in open rates.
Here’s their story.

Future plans of our product geeks

Despite the success of the version 2.0 model, our product team is not resting on its laurels. They have now proposed a new version of STO and will be working on it to push its accuracy even higher.

The concept being explored: if a user is active across multiple channels (app push notifications, web push notifications, and email), we can use their channel-agnostic data to find the best time to engage them over email.

The proposed implementation of STO is divided into 2 parts: exploration and prediction. Any new user will start with the exploration phase, wherein sends are tried across different time slots.

Our product team is moving away from finding the best method to predict STO slots at a brand level. They aim to create a model that treats every customer individually and finds the best possible STO slot at the customer level.

Our dedication to perfecting the send times for our customers continues to grow.

Debapriya Das
AVP, Machine Learning
Netcore Cloud

“Since releasing version 2.0 of the STO model, we have seen amazing results from our global customers. Our AI engine Raman does the heavy lifting of email delivery and lets brand marketers handle the big picture of their email programs.
The STO feature is a major competitive advantage for the global brands that use it on Netcore’s email platform. We are excited to work on an advanced STO model in the product pipeline that will make send times even more accurate and get marketers a step closer to their KPIs.”


Through this post, you learned about the meticulous nature of our product engineers. They are a passionate bunch and take immense pride in developing advanced technologies at Netcore.

Throughout our testing of the STO model (version 2.0) with global brands, we have seen their overall customer engagement increase by 15% on average. These brands allowed Netcore to automate their email delivery, and our AI engine sent their critical emails at the optimized times.

Meanwhile, the marketers focused on the big picture of creating efficient email strategies and programs. With Apple’s privacy changes starting in September, the STO approach will have to change in terms of the metrics being measured.

Still, STO remains a major competitive advantage for leading companies like the ones featured in this blog. If you would like to join them, now is your ideal time to book a demo.

Get in touch with our experts to know more!
