
When To Stop A Conversion Rate Optimization A/B Test

By Greg Ahern posted 05-10-2017 03:00 PM

One of the most confusing concepts to explain to a client is knowing when a conversion rate optimization A/B test has finished and whether its result is accurate. Diving into the statistics tends to glaze eyes and nod heads. Many A/B testing tools will report when a test has reached significance, but several variables can still produce false positives. Here are some rules to follow so that you do not misread your data:

  • Always run a test for at least one full week so you capture users from every part of the week. A Monday-morning user may have a different conversion rate than a Saturday user. Some tests need to run for months.

  • If you can graph your cumulative conversion rate over time, make sure there is a consistent gap between the winning variation and your control. If the lines are crisscrossing, the test has not reached statistical confidence.

  • Note that early results can look dramatic. For example, 1 conversion from 3 visitors is a 33% conversion rate, but after a week or two the real figure may be more like 10 conversions from 300 visitors, roughly a 3% rate. Never conclude a test with so few conversions that a single additional conversion would visibly change the rate.

  • If you are curious how “noisy” your control and testing system is, or how much the conversion rate varies over time on its own, run an A/A test: compare the control against itself. Again, make sure you run it long enough to see the true “noise floor.”
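The small-sample caution and the A/A check above can be sketched in a few lines. This is a minimal simulation, not any particular tool's method: it assumes a simple two-proportion z-test (one common way confidence is calculated), and the 3% true conversion rate and 500 visitors per arm per day are made-up numbers for illustration.

```python
import math
import random

def rate(conversions, visitors):
    return conversions / visitors

# With a tiny sample, one extra conversion swings the rate wildly;
# with a week or two of traffic it barely moves.
print(rate(1, 3), rate(2, 3))        # 33% jumps to 67% with one conversion
print(rate(10, 300), rate(11, 300))  # ~3.3% nudges to ~3.7%

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; |z| >= 1.96 is roughly 95% confidence."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se if se else 0.0

# A/A test: both "variations" share the SAME true rate, so any gap
# the numbers show is pure noise -- the system's noise floor.
random.seed(7)
TRUE_RATE = 0.03
visitors = conv_a = conv_b = 0
for day in range(1, 15):
    daily = 500  # hypothetical traffic per arm per day
    conv_a += sum(random.random() < TRUE_RATE for _ in range(daily))
    conv_b += sum(random.random() < TRUE_RATE for _ in range(daily))
    visitors += daily
    print(f"day {day:2d}: A={rate(conv_a, visitors):.3%}  "
          f"B={rate(conv_b, visitors):.3%}  "
          f"z={z_score(conv_a, visitors, conv_b, visitors):+.2f}")
```

Run it a few times with different seeds: in the first days the two identical arms are often several tenths of a point apart before they converge, which is exactly the crisscrossing and noise the rules above warn about.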

If you are interested in learning more about conversion rate optimization, you can read my online Conversion Rate Optimization Guide or download the CRO eBook here.
