A/B testing is one of the most reliable ways to raise your conversion rate.
The idea is simple - split your traffic between two versions of your page and track how many people from each variant convert.
It sounds simple, and it is - it’s even easier with Optibase - check out our channel for more guides and tutorials.
However, there are plenty of mistakes that can ruin the integrity of your A/B tests - they're not hard to avoid, but only if you know about them.
I've been running A/B tests since 2018 on sites ranging from local handyman businesses all the way to multi-million dollar SaaS companies. In that time, I've made some mistakes - by following the tips I'm about to share, you're essentially absorbing the seven years of experience I've gained through trial and error, all in one short video.
Anyways, let’s get into it.
You’re excited, I get it - you want to test your hero section, your pricing section, and your newsletter signup. I love the enthusiasm!
However, an A/B test tracks the conversion rate for each variant while that variant is being shown - and if you have multiple tests running on the same page at the same time, your data becomes unreliable.
Let’s say you have a hero test running, and also a CTA test.
At random, users will see a hero variant and a CTA variant - but what if your Hero 1 variant happens to line up with your CTA 2 variant most of the time?
You may see that Hero 1 is doing better, when the real reason is CTA 2 - leading you to declare a 'winner' that isn't actually the better variant.
So, my advice - never run multiple A/B tests on the same page at the same time.
You can, however, run a multivariate test - which finds the best combination of variants across the changes you want to test. This way, your tracking maintains its integrity.
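If you're wondering how that differs from just stacking two A/B tests, here's a rough sketch of the idea - the combination names, the localStorage key, and the assignment helper below are made up for illustration and aren't the Optibase implementation:

```typescript
// Illustration only: a full-factorial (multivariate) assignment.
// Instead of two independent coin flips (which can pair variants unevenly
// and confuse the attribution), each visitor gets one of the four
// hero x CTA combinations, and each combination is tracked on its own.

type Combination = { hero: "Hero 1" | "Hero 2"; cta: "CTA 1" | "CTA 2" };

const combinations: Combination[] = [
  { hero: "Hero 1", cta: "CTA 1" },
  { hero: "Hero 1", cta: "CTA 2" },
  { hero: "Hero 2", cta: "CTA 1" },
  { hero: "Hero 2", cta: "CTA 2" },
];

// Pick a combination uniformly at random and remember it per visitor,
// so returning visitors keep seeing the same combination.
function assignCombination(): Combination {
  const stored = localStorage.getItem("mvt-combination");
  if (stored) return JSON.parse(stored) as Combination;

  const chosen = combinations[Math.floor(Math.random() * combinations.length)];
  localStorage.setItem("mvt-combination", JSON.stringify(chosen));
  return chosen;
}
```

The important part is that results are compared per combination, so a strong CTA can't silently take credit for a weak hero.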
What conversion are you tracking?
More than likely, it's a click on the 'get started' or 'book a demo' button. From what I've seen, this is the most common conversion to track, and I use it all the time myself.
So - what’s the problem?
Let's put it this way. One great A/B test to run is removing fields from your demo or signup form. Often, it leads to a significantly higher form conversion rate… but there's also a chance that fewer of those signups become paying customers - and if you're not tracking the full journey, you wouldn't know any better. That can lead to you calling a winner on an A/B test that's actually going to lose you money.
So, what’s the solution?
Ideally, track the full journey - as in, make your conversion programmatic and only fire it when someone actually pays. However, I know that often isn't possible - if it isn't, just keep an eye on your purchase metrics and make sure the A/B test results line up with your real bottom line.
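To make that concrete, here's a minimal sketch of a 'full journey' conversion - the trackConversion helper is a hypothetical stand-in for whatever your A/B testing tool actually exposes, so check its docs for the real call:

```typescript
// Fire the conversion only after the payment succeeds, instead of
// counting the button click. `trackConversion` is a hypothetical helper,
// not a real Optibase API.
declare function trackConversion(goal: string): void;

interface CheckoutPayload {
  plan: string;
  email: string;
}

async function handleCheckout(payload: CheckoutPayload): Promise<void> {
  const response = await fetch("/api/checkout", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });

  // Only count the conversion once the payment actually went through.
  if (response.ok) {
    trackConversion("paid-signup");
  }
}
```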
It's easy to get excited when you see one variant wildly outperforming another - but there's such a thing as luck. Sometimes one variant can look like the clear winner after 1,000 views, but after 5,000 it's a completely different story.
You probably already knew this one - but, consider this my reminder to you to be patient. Don’t get excited right away - especially for more important tests on your site, be patient and wait as long as you possibly can to ensure that the winner is indeed the better variant.
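If you want a rough rule of thumb rather than gut feel, a standard two-proportion z-test gives you a quick sanity check - nothing here is Optibase-specific, it's just textbook statistics:

```typescript
// Rough significance check for two variants (two-proportion z-test).
// A |z| above ~1.96 corresponds to roughly 95% confidence that the
// difference isn't just luck; below that, keep the test running.
function zScore(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number,
): number {
  const rateA = conversionsA / visitorsA;
  const rateB = conversionsB / visitorsB;
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB),
  );
  return (rateA - rateB) / standardError;
}

// Example: 60/1,000 vs. 45/1,000 looks like a big gap, but z ≈ 1.5,
// which is short of the usual 1.96 bar - not enough evidence yet.
console.log(zScore(60, 1000, 45, 1000).toFixed(2));
```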
When you run an A/B test and set the traffic split to, let's say, 50/50, half of your visitors will see variant A and half will see variant B. Sounds great - and it does assign people at random - but sometimes random doesn't lead to a fair split, especially when the sample size is small.
Let's say you sell insurance in the USA - you're still going to get visitors from other countries, even though those people can't convert.
What if, by the luck of the draw, variant 2 is shown mainly to people in Canada?
Even if that variant is better, you'd never know - nearly all the conversions would come from variant 1.
To fix this, set restrictions on your A/B tests to make sure that only people who CAN convert are being tracked. For example, with Optibase, you can restrict by country, and even by state.
Especially with a smaller sample size, every conversion makes a difference in your final results.
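If your testing tool can't do that for you, the underlying idea looks something like this - lookupCountry and trackImpression are hypothetical stand-ins for whatever geolocation and analytics calls your stack actually provides:

```typescript
// Sketch: only record A/B test data for visitors who can actually convert.
declare function lookupCountry(): Promise<string>;       // e.g. a geo-IP lookup
declare function trackImpression(variant: string): void; // your tool's tracking call

const ELIGIBLE_COUNTRIES = new Set(["US"]); // only US visitors can buy

async function maybeTrack(variant: string): Promise<void> {
  const country = await lookupCountry();
  // Visitors outside the market still see the page, but their views and
  // clicks stay out of the results so they can't skew the split.
  if (ELIGIBLE_COUNTRIES.has(country)) {
    trackImpression(variant);
  }
}
```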
Let's face it - we all love looking at our own websites, and we need to check that the A/B test is actually running. So we pop open an incognito window, or even check on a new device, to confirm it's working.
I’m definitely guilty of this one - and if you’re testing on 10,000 visitors, it probably does not matter. However, if you want to maintain your data integrity, make sure you preview it in a way that doesn’t track you.
The way you do this depends on which A/B testing tool you’re using - with Optibase, it’s as easy as creating a preview link and sending that to your team members.
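If your tool doesn't offer preview links, one generic workaround is to flag your own browser once and skip tracking whenever that flag is present - the query parameter and storage key below are made up purely for illustration:

```typescript
// Sketch: visit your site once with ?internal=1 to mark your browser,
// then every later visit from it stays out of the A/B test data.
function isInternalVisitor(): boolean {
  const params = new URLSearchParams(window.location.search);
  if (params.get("internal") === "1") {
    localStorage.setItem("internal-visitor", "true");
  }
  return localStorage.getItem("internal-visitor") === "true";
}

function trackPageView(variant: string): void {
  if (isInternalVisitor()) return; // your own previews stay out of the data
  console.log("would send impression for", variant); // replace with your tool's tracking call
}
```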
At the end of the day, your data integrity is EVERYTHING with your A/B tests - and if you lose that integrity, your tests are as good as worthless.
This is by no means an exhaustive list, but if you make sure these bases are covered - you’re far more likely to be running accurate, beneficial A/B tests for your company.
A/B testing isn’t complicated, but keeping your data clean takes discipline. If you avoid these common mistakes, you’re already ahead of most teams running experiments today. Protect the integrity of your tests, take your time, and measure what actually matters. The rest becomes a repeatable process.
And if you want to run cleaner, more reliable tests in Webflow without the usual technical headaches, Optibase makes it easy to do things the right way.