A/B Testing Calculator

Discover which variation of your A/B test is the statistically significant winner. Just enter your visitors and conversions below.

Example: with Variant A converting at 25% and Variant B at 28%, the calculator reports that Variant B's conversion rate is 12% higher than Variant A's and that you can be 93% confident Variant B will perform better than Variant A, but it flags the significance as questionable.

What is A/B Testing?

A/B testing is a method of comparing two versions of a product or website to determine which one performs better. It is commonly used in the fields of marketing and web design to optimize conversion rates and improve user experience.

To conduct an A/B test, you would need to create two versions of the product or website, called the "A" version and the "B" version. These two versions should be as similar as possible, except for the one change that you want to test. For example, you might want to compare two versions of a website's home page, where the only difference is the layout or the color scheme.

Once you have created the two versions, you would then randomly split your target audience into two groups and show each group one of the versions. You would then track how each group interacts with the product or website, and compare the results to see which version performs better.
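To illustrate the random split described above, here is a minimal sketch of deterministic user bucketing. The hashing scheme, function name, and experiment label are assumptions for illustration, not a prescribed implementation.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically assign a user to variant A or B (illustrative sketch)."""
    # Hash the user ID together with the experiment name so each experiment
    # splits the audience independently and a user always sees the same variant.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 2
    return "A" if bucket == 0 else "B"

# The same user always lands in the same group on repeat visits.
print(assign_variant("user-12345"))  # prints "A" or "B", stable across calls
```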

A/B testing can be a powerful tool for improving the effectiveness of your product or website, as it allows you to make data-driven decisions about what changes are most likely to be successful. It is important, however, to carefully design and conduct your A/B test to ensure that the results are reliable and accurately reflect the impact of the changes being tested.

This process also removes much of the guesswork from website creation and optimization. Instead of relying on personal preference or intuition, you can make decisions backed by the test results.

What is Statistical Significance?

Statistical significance describes the level of certainty that the results of a given test are not due to sampling error. It lets researchers and marketers state that their observations would have been unlikely under the null hypothesis of a statistical test.

Researchers quantify it with a p-value (probability value), usually compared against a threshold of 0.05. A p-value below 0.05 means that data at least as extreme as what was observed would occur less than 5% of the time under the null hypothesis.

When the p-value falls below the chosen alpha value, the result of the test is called statistically significant. Statistical significance indicates that an observed relationship between two or more variables is unlikely to be explained by chance alone, and statistical hypothesis testing is the procedure used to determine whether a given result clears that bar.
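Put compactly, with the conventional threshold of 0.05 assumed:

```latex
p < \alpha = 0.05
\quad\Longrightarrow\quad
\text{reject } H_0 \ \text{(the result is statistically significant)}
```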

How to Determine Significance in an A/B Test?

In A/B testing, the data considered are the number of visitors and the number of conversions for each variation. Statistical significance is what tells you whether the test produced a reliable result: if you only look at the difference in conversion rates, you cannot tell whether that difference is due to the change you made or to sampling error.

Ideally, an A/B test should reach 95% statistical significance, and 90% is generally the lowest acceptable level. A value above 90% gives you reasonable confidence that the change has a real impact, positive or negative, on your site's performance. It is advisable to test pages with a high amount of traffic or a high conversion rate, since they accumulate the necessary data faster.

At Pixl Labs, our calculator requires only four data points to determine your test's statistical significance: visitors and conversions for the control (A), and visitors and conversions for the variant (B). Enter those numbers into the appropriate fields and the calculator returns each variation's conversion rate along with the significance of the result.
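For a sense of what such a calculator does under the hood, here is a minimal sketch based on a two-proportion z-test over the same four inputs. The z-test is an assumption about the method; the exact statistics behind the Pixl Labs calculator are not spelled out here, and the function name and example numbers are purely illustrative.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test on conversion rates (illustrative sketch)."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b

    # Pooled conversion rate under the null hypothesis that A and B convert equally
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

    z = (rate_b - rate_a) / std_err
    # Two-sided p-value: chance of a gap at least this large if there is no real difference
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    confidence = 1 - p_value
    return rate_a, rate_b, confidence

# Hypothetical example: 2,000 visitors per variation
rate_a, rate_b, confidence = ab_test_significance(2000, 100, 2000, 128)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  confidence: {confidence:.0%}")
```

The higher the reported confidence, the less likely it is that the observed difference is just sampling noise; in line with the thresholds above, you would typically only act on results at 95% confidence or higher.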

How Many Visitors Do You Need For Your A/B Test?

The number of visitors you need for your A/B test will depend on several factors:

  • the size of the effect you are trying to detect
  • the level of precision you want to achieve
  • the level of statistical significance you are aiming for

In general, the smaller the effect size you are trying to detect, and the higher the level of precision and statistical significance you want to achieve, the more visitors you will need for your A/B test.

As a rough rule of thumb, you may need at least several hundred or even several thousand visitors per variation in order to detect small to moderate effect sizes with a reasonable level of precision and statistical significance.
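A common approximation formalizes this relationship. Assuming a two-sided test at significance level α with power 1 − β, a baseline conversion rate p₁, and a target rate p₂, the required number of visitors per variation is roughly:

```latex
n \;\approx\; \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}\,
               \bigl[\,p_1(1-p_1) + p_2(1-p_2)\,\bigr]}
              {(p_1 - p_2)^{2}}
```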

How to Calculate A/B Testing Sample Size?

Assessing how many visitors will be needed for a test you plan to run in the future is more complicated than evaluating a previous test's significance. To work out the A/B testing sample size you need, you can use a sample size calculator.

Here are some general guidelines that can help you estimate this number:

Some experts agree it is challenging to get an uplift of more than 10% on a single webpage.

Changing your offers, rebranding, lowering your prices, or restructuring your website are the kinds of changes that can drive uplift beyond 10%. Smaller changes, such as adjusting a button color, headline, or image, typically have less than a 7% impact; effects this small may be statistically insignificant unless the sample is large.
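Here is a minimal sketch of such a sample size calculation in code, using the approximation shown earlier. The baseline rate and uplift in the usage example are hypothetical, and real calculators may use slightly different formulas.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_uplift,
                              alpha=0.05, power=0.80):
    """Approximate visitors needed per variation for a two-sided test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)

    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # about 0.84 for 80% power

    variance = p1 * (1 - p1) + p2 * (1 - p2)
    effect = (p2 - p1) ** 2
    return ceil((z_alpha + z_power) ** 2 * variance / effect)

# Hypothetical example: detecting a 10% relative uplift on a 25% baseline rate
print(sample_size_per_variation(0.25, 0.10))  # several thousand visitors per variation
```

Note how quickly the requirement grows as the detectable uplift shrinks: halving the relative uplift roughly quadruples the visitors needed per variation.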

Some A/B Testing Settings 

Below are the vital A/B testing settings you should understand when using a significance calculator.

Two-Sided Hypothesis:

It is important to use a two-sided hypothesis test, which checks whether variant B is different from variant A in either direction, better or worse. This differs from a one-sided test, which would only ask whether B is better than A.
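In symbols, writing p_A and p_B for the true conversion rates of the two variants, the two-sided test is:

```latex
H_0:\; p_B = p_A
\qquad \text{vs.} \qquad
H_1:\; p_B \neq p_A
% a one-sided test would instead use H_1: p_B > p_A
```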

Statistical Power: 80-90%

Statistical power is the probability that your test will detect a difference in conversion rate if one actually exists. In other words, it is the complement of the probability of committing a Type 2 error (missing a real effect). Power depends on your sample size and on the size of the true effect; if you want higher statistical power, you will need a larger sample size.
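In symbols, with β the probability of a Type 2 error:

```latex
\text{Power} \;=\; 1 - \beta
\qquad\Longrightarrow\qquad
80\%\ \text{power} \;\Longleftrightarrow\; \beta = 0.20
```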

Statistical Confidence: 95%

Statistical confidence reflects how unlikely it is that a difference observed in your data is merely a Type 1 error (an effect observed when no real effect exists). It is common to use a confidence level of around 95%, which means the chance of declaring an effect when none exists is 0.05 (or 5%).
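Equivalently, with α the Type 1 error rate:

```latex
\text{Confidence} \;=\; 1 - \alpha
\qquad\Longrightarrow\qquad
95\%\ \text{confidence} \;\Longleftrightarrow\; \alpha = 0.05
```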