Experimentation is not a thing that you do. It is how you do things.
Today, most brands run experiments at scale. How do they run them effectively? How do they build a culture of experimentation in their organization?
At Netcore Martech Mashup 3.0, Amit Bhatnagar (Chief Product Officer, Dineout), Nitish Bugalia (Head of Product and Strategy, MyTeam11), Vikas Vardhan Soni (AVP Products, Head Digital Works), and Deepak Krishnan (Senior Director of Product Management and Head of Growth, Myntra) shared their views.
If you thought you could create a funnel, set it, and forget it, then you’ve got hold of the wrong end of the stick.
You maximize your conversion funnel by experimenting with each element of customer interaction, digging deep into the analytics, and optimizing your product design.
Brands that have successfully used experimentation to drive business results have shifted from myopic metrics like order conversion rates to more holistic measurements of customer success, like net promoter score (NPS) and customer lifetime value (CLV).
Building a culture of experimentation
A lot goes into creating a culture of data and experimentation, especially when an organization builds it from scratch. Let’s look at some significant milestones that companies need to reach:
● Democratize your data: The first step is to institutionalize and democratize data, not leave it in the hands of a few. If you’re not watchful, data and experimentation become tools of the usual suspects: product managers, product analysts, decision scientists, and growth managers. People outside these core groups then start seeing data and experimentation as weapons that a few brandish to shoot down ideas. Organizations need to build a culture where data is accessible to, understood by, and usable by every stakeholder.
● Know the ‘why’ of experimentation: Not everybody needs to know the ‘how,’ but everybody needs to know the ‘why’ of experimentation and data, and understand its vocabulary. So make data and experimentation front and center in your processes.
● Have a balanced approach: Experimentation may be king, but there are times when you need to challenge whether you’re doing it right. Experiments will surprise, befuddle, and even ruffle feathers. That’s not a bad thing.
● Question your experimentation: Sometimes there isn’t enough data to draw a conclusion. Sometimes the experiment isn’t run correctly. Sometimes the wrong metric is chosen. Question your experiments and their results.
● Fail fast to learn fast: Have the ability to digest failures. It’s easy to claim you have stakeholder buy-in for your experimentation, but the game changes when the metrics show seemingly random numbers instead of the results you expected. Make your stakeholders aware that experiments can lead into dark spots.
● Set audacious goals: They give you a glimpse of what the future could look like and what you’re aiming for. Along the way, you will fail. But that’s all right.
The challenges of democratizing experimentation
Modern organizations want to democratize experimentation and empower their team members to have a framework, make choices, and unlock new learnings. After all, that’s the key to innovation. But once this starts happening, a new set of problems comes into play.
If organizations gave every employee the power to choose and innovate, they’d have hundreds of people wanting to put forward a hypothesis. But a company has only so much bandwidth.
Democratizing experimentation brings forth a new set of challenges for enterprises:
– How do they pick the right kind of experiments?
– How do they prioritize them?
– What are the right sets of processes?
– How do they come up with a hypothesis?
– How do they pick the right metrics?
– How do they ensure there’s a proper balance in their portfolio?
People think that an experiment, or an A/B test, is a simple case of splitting up traffic and inferring results. But many things can go wrong, and organizations can end up drawing faulty inferences that set them down the wrong path. The sketch below shows what a sound inference involves even in the simplest case.
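As a minimal sketch of what a sound inference involves, here is a two-proportion z-test in Python. The conversion counts below are hypothetical, and a real program also needs pre-registered metrics, adequate power, and guardrail metrics.

```python
# A minimal sketch of a sound A/B inference: a two-proportion z-test.
# All numbers are hypothetical, for illustration only.
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control: 500 conversions from 10,000 users (5.0%).
# Variant: 570 conversions from 10,000 users (5.7%).
z, p = two_proportion_z_test(500, 10_000, 570, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05: significant at the usual level
```

Even this simplest case assumes a fixed sample size decided before the test starts; skip that step and the p-value stops meaning what you think it means.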
When enterprises talk about an experimentation culture, it’s not only important to talk about why they do it. There are other questions to address as well:
– How can they conduct experiments in the right way?
– Do they have adequate knowledge to make meaningful inferences?
If the answer to either question is no, enterprises could end up making bad decisions and moving in the wrong direction.
Size matters in experimentation
Fair or unfair, large companies have a definite advantage when it comes to data and experimentation. How?
– They have the right tech infrastructure.
– They have the right mindset. Nobody needs to be convinced to dig into the power of data.
– They have a bigger data pool from which to form solid hypotheses in the first place.
– They have a larger set of users.
– They can run multiple experiments at the same time.
– They can expose only a fraction of their users – even 0.5% or 0.3% – and still achieve statistically significant results, because at their scale even 0.3% translates into a huge user base, as the back-of-the-envelope sketch below shows.
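To make the scale advantage concrete, here is a rough sample-size sketch in Python. The baseline conversion rate, the lift, and the user-base figure are all assumptions for illustration, not numbers from the panel.

```python
# A rough sketch of why a 0.3% slice can still be enough: the approximate
# sample size per arm for a two-proportion test (alpha = 0.05, power = 0.80).
# The baseline rate, lift, and user-base size below are hypothetical.

Z_ALPHA = 1.96   # two-sided 95% confidence
Z_BETA = 0.84    # 80% power

def required_n_per_arm(p_base, p_variant):
    """Approximate users per arm needed to detect p_base -> p_variant."""
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return (Z_ALPHA + Z_BETA) ** 2 * variance / (p_variant - p_base) ** 2

n = required_n_per_arm(0.050, 0.055)   # a 10% relative lift on a 5% baseline
print(f"~{n:,.0f} users needed per arm")          # roughly 31,000

base = 50_000_000                                  # hypothetical user base
print(f"0.3% slice = {0.003 * base:,.0f} users")   # 150,000 -- ample
# A startup with 2,000 users cannot power this test even at 100% traffic.
```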
Startups are a different ball game. For them, experimentation is not a solved problem. Among other things, they need to understand why they need to experiment and build an appetite for failure.
On the B2B side, the startup user base is often a few hundred users, maybe a couple of thousand at most. Even this user set is not homogeneous, as their customers are diverse in geography and size. So it’s difficult for them to be completely data- and experimentation-driven, as one test base cannot represent the whole.
The key to experimentation success for startups is to be partly data-driven and supplement it with intelligent guesstimates. Even that is a huge leap over pure intuition-driven decision-making.
Experimentation takeaways for Indian enterprises
As Indian companies get on the experimentation bandwagon, the learnings are starting to percolate across industries. What are the challenges they need to address as they make experimentation part of company culture?
● Organizations have different business and product groups, and each will try to run its own experiments. Build a common experimentation council with a generalized framework to evaluate all incoming experiments, interpret their results, and make the right decisions to create value for customers.
● Always pick the right metrics. There are different types of metrics, and each calls for its own statistical techniques to draw inferences. If you don’t understand the nature of your metrics, you might not use the right sample size, power your experiment correctly, or choose the right minimum detectable effect. As a result, you might conclude an experiment prematurely and call it a success; the simulation after this list shows how costly that is. If you’re going down the path of experimentation, spend time understanding the nature of metrics and statistics.
● Before running an experiment, sanity-check the logic behind your instrumentation, because it produces the numbers you will ultimately read. If the instrumentation logic is weak, you might end up with false-positive results.
● Understand errors and biases. It’s challenging to know when a bias creeps into your experiment, and some of the effects you observe are not causal. For example, if you launch a shiny new button, you will see incremental spikes in the first two to three days; after that, the effect flattens out. If you don’t know about novelty and primacy effects, you will not factor them into your inference or exclude those biases.
● Don’t pick multiple metrics when you run experiments. The odds of finding at least one false-positive metric increase dramatically, as the back-of-the-envelope calculation after this list shows.
● Don’t hunt for metrics after completing an experiment. Business leaders commonly do this: they notice that some metric went up during the experiment, even though it was never part of the original hypothesis. If you hunt for metrics after the fact, you’re validating a bias you already had.
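On choosing sample sizes and detectable effects up front: the small simulation below sketches how “peeking” – checking an A/A test (where no real difference exists) every day and stopping at the first significant result – inflates the false-positive rate well beyond the nominal 5%. The daily volumes, conversion rate, and seed are illustrative assumptions.

```python
# A small simulation of the "peeking" trap: checking an A/A test (no real
# difference between arms) every day and stopping at the first p < 0.05
# inflates the false-positive rate well above 5%. All numbers are hypothetical.
import math
import random

random.seed(42)

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value of a two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) or 1e-12
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

RUNS, DAYS, DAILY_N, RATE = 500, 14, 1_000, 0.05
false_positives = 0
for _ in range(RUNS):
    conv_a = conv_b = n = 0
    for _ in range(DAYS):
        n += DAILY_N
        conv_a += sum(random.random() < RATE for _ in range(DAILY_N))
        conv_b += sum(random.random() < RATE for _ in range(DAILY_N))
        if p_value(conv_a, n, conv_b, n) < 0.05:   # peek and stop early
            false_positives += 1
            break

print(f"False-positive rate with daily peeking: {false_positives / RUNS:.1%}")
# Expect well above the nominal 5% -- fix sample size and MDE before you start.
```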
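And on the multiple-metrics trap: a back-of-the-envelope calculation of how quickly the family-wise false-positive probability grows with the number of independent metrics, and how a simple Bonferroni correction (testing each metric at alpha/k) contains it. The alpha level and metric counts are illustrative.

```python
# The multiple-metrics trap in numbers: with k independent metrics each
# tested at alpha = 0.05, the chance of at least one false positive grows
# quickly. Bonferroni (alpha / k per metric) is one simple correction.
ALPHA = 0.05

for k in (1, 3, 5, 10, 20):
    family_wise = 1 - (1 - ALPHA) ** k       # P(at least one false positive)
    corrected = 1 - (1 - ALPHA / k) ** k     # same, testing each at alpha / k
    print(f"{k:>2} metrics: {family_wise:5.1%} uncorrected, "
          f"{corrected:.1%} with Bonferroni")
```

At ten metrics, the uncorrected chance of a spurious “win” is roughly 40%; the correction holds it near 5%.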
We hope you found this panel discussion insightful. Check out the other sessions from Martech Mashup 3.0 here.