A/B Testing Calculator

Discover which variation of your A/B test is the statistically significant winner. Just enter your visitors and conversions below.

The calculator takes visitors and conversions for each variant and reports the conversion rates and the significance verdict. For example:

Variant A conversion rate: 25%
Variant B conversion rate: 28%
Significance: Questionable

Variant B's conversion rate is 12% higher than Variant A's conversion rate. You can be 93% confident that Variant B will perform better than Variant A.

What is A/B Testing?

A/B testing is an experiment that compares two variants, A and B, of a website. More precisely, it is the practice of showing two versions of an element to different visitors at the same time to see which one drives more conversions.

This process also removes the guesswork from website creation and optimization. Instead of judging by instinct or what feels right, you can make data-backed decisions based on the test results.

What is Statistical Significance?

Statistical significance describes the level of certainty that the results of a given test are not due to sampling error. It lets researchers and marketers state that their observations would be unlikely to occur under the null hypothesis of a statistical test.

It is usually expressed as a p-value (probability value), with a threshold commonly set at 0.05. A p-value below that threshold means data this extreme would occur less than 5% of the time under the null hypothesis.

When the p-value falls below the chosen alpha value, the test result is called statistically significant. Statistical significance indicates that a relationship between two or more variables is caused by something other than chance, and statistical hypothesis testing is the procedure used to determine whether a data set's result is statistically significant.

How to Determine Significance in an A/B Test?

In A/B testing, the data sets considered are the number of visitors and the number of conversions for each variation. Statistical significance helps establish whether an A/B test succeeded or failed: if you only look at the difference in conversion rates, you cannot tell whether the result is due to the change you made or to sampling error.

Ideally, an A/B test reaches 95% statistical significance, and 90% is the bare minimum. A result above that threshold gives reasonable confidence that the change has a real positive or negative impact on your site's performance. It is advisable to test pages with a high amount of traffic or a high conversion rate, since they accumulate the necessary data faster.

At Pixl Labs, our calculator only requires four data points to determine your test's statistical significance: visitors and conversions for the control (A), and visitors and conversions for the variant (B). Enter the data into the appropriate fields, and the calculator returns each variation's conversion rate along with the significance result.
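For those curious about the math, the significance behind a calculator like this can be sketched with a pooled two-proportion z-test. This is a minimal illustration, not Pixl Labs' exact implementation, and the visitor counts below are hypothetical numbers chosen to roughly reproduce the 25% vs. 28% example above.

```python
# Sketch of a pooled two-proportion z-test for A/B significance.
# Not the calculator's actual code; the counts below are hypothetical.
from math import sqrt, erf

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ab_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Return both conversion rates and the one-sided confidence that B beats A."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert equally
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    return rate_a, rate_b, normal_cdf(z)  # one-sided confidence

rate_a, rate_b, conf = ab_significance(1000, 250, 1000, 280)
print(f"A: {rate_a:.0%}, B: {rate_b:.0%}, confidence: {conf:.1%}")
# -> A: 25%, B: 28%, confidence: 93.6%
```

At roughly 93% confidence, this result lands below the 95% threshold, which is why the example above labels the significance as questionable.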

How Many Visitors Do You Need For Your A/B Test?

It is hard to determine the exact number of visitors you need to run a proper A/B test, and there is no universally accepted figure. However, expert marketers recommend reaching at least 5,000 unique visitors per variation and 100 conversions per objective per variation to achieve a reliability rate of 95%.

How to Calculate A/B Testing Sample Size?

Assessing how many visitors a test you plan to run will need is more complicated than evaluating a previous test's significance. To work out the A/B testing sample size you need, you can use a sample size calculator, like the sketch after this paragraph.
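As a rough illustration of what such a calculator computes, the snippet below implements the standard two-proportion sample-size formula. The function name and the defaults (5% significance, 80% power) are assumptions chosen for the example, not values from any particular tool.

```python
# Minimal sample-size sketch using the standard two-proportion formula.
# The defaults (alpha=0.05, power=0.80) are common conventions, assumed here.
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variation(base_rate, expected_rate, alpha=0.05, power=0.80):
    """Visitors needed per variation to detect base_rate -> expected_rate."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (base_rate + expected_rate) / 2  # average rate under the alternative
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(base_rate * (1 - base_rate)
                                 + expected_rate * (1 - expected_rate))) ** 2
    return ceil(numerator / (base_rate - expected_rate) ** 2)

# Detecting a lift from a 25% to a 28% conversion rate:
print(sample_size_per_variation(0.25, 0.28))  # -> 3397 visitors per variation
```

Note how quickly the requirement grows as the expected lift shrinks: detecting a move from 25% to 26% instead would take roughly 30,000 visitors per variation.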

Here are some general guidelines that can help you estimate this number:

Some experts agree it is challenging to get an uplift of more than 10% on a single webpage.

Changing your offers, rebranding, lowering your prices, or restructuring your website are the kinds of changes that can achieve an uplift beyond 10%. Smaller changes, such as adjusting a button color, headline, or image, typically have less than a 7% impact and can be too small to detect reliably.

Some A/B Testing Settings 

Below are the vital A/B testing settings you should understand when using a significance calculator.

Hypothesis: Two-Sided

It is important to use a two-sided hypothesis, which means you are testing whether variant B is different from variant A – either better or worse. This differs from a one-sided hypothesis, which only tests whether B is better than A.
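To make the difference concrete, here is a short sketch of how the two p-values relate for the same test statistic; the z score is taken from the hypothetical 25% vs. 28% example earlier.

```python
# One-sided vs. two-sided p-values for the same z score (illustrative).
from math import sqrt, erf

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

z = 1.52  # z score from the hypothetical 25% vs. 28% example
p_one_sided = 1 - normal_cdf(z)        # tests only "B is better than A"
p_two_sided = 2 * (1 - normal_cdf(z))  # tests "B differs from A either way"
print(f"one-sided: {p_one_sided:.3f}, two-sided: {p_two_sided:.3f}")
# -> one-sided: 0.064, two-sided: 0.129
```

The two-sided p-value is exactly twice the one-sided value, so a two-sided test demands stronger evidence before declaring significance.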

Statistical Power: 80-90%

Statistical power is the probability that your test will detect a difference in conversion rate if one exists. In other words, it is the complement of the probability of committing a Type II error (power = 1 − β). It depends on your sample size and the size of the effect you are trying to detect: if you want higher statistical power, you need a larger sample size.
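As an illustration of that relationship, the sketch below approximates the power of a two-sided two-proportion z-test at several sample sizes, reusing the hypothetical 25% vs. 28% rates from earlier. It uses the standard normal approximation; all numbers are illustrative.

```python
# Approximate power of a two-sided two-proportion z-test (normal approximation).
from math import sqrt
from statistics import NormalDist

def power(n_per_variation, p1=0.25, p2=0.28, alpha=0.05):
    """Probability of detecting the difference p1 -> p2 at the given n."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    se = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_variation)
    return 1 - NormalDist().cdf(z_alpha - abs(p2 - p1) / se)

for n in (1000, 2000, 3400, 5000):
    print(n, f"{power(n):.0%}")
# -> 1000 33%, 2000 58%, 3400 80%, 5000 93%
```

Power crosses 80% at roughly 3,400 visitors per variation, matching the sample-size sketch earlier.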

Statistical Confidence: 95%

Statistical confidence is the probability that a difference observed in your data reflects a real effect rather than a Type I error (an effect observed when no real effect exists). It is common to use a confidence level of around 95%, which means the chance of seeing a significant result when no real effect exists is 0.05 (or 5%).

Learn more about A/B Testing