We recently launched a new version of our A/B testing reports. With this release, we aim to make it easier for our customers to analyze the impact of the changes they are testing and make informed decisions. We overhauled the entire interface and gave users control to customize the report dashboard according to their preferences.
Here is a list of everything that has been updated. Please note that these changes apply to A/B test, Mobile app A/B test, and Split test reports only.
The first thing you’ll notice on the report interface is that filters have been moved to the top of the report. You can now filter by date range or visitor segment, and the filters apply across all goals, graphs, and goal comparisons.
The notification area will now alert you to errors in variation or goal tracking and also indicate whether the test has a winner or a smart decision.
The stats table now shows only the median value for Expected conversion rate, Expected revenue per visitor, Improvement, and Expected revenue per conversion. If you prefer ranges for these metrics, you can customize this in View Settings.
You can also add a few historical metrics to your reports from View Settings. These metrics are calculated from the data tracked in the test and are not computed by our Bayesian engine.
We have also removed Absolute Potential Loss from the stats table by default. You can add it back through View Settings.
There are two major updates to the graphs:
a. The conversion rate has been added to the date range graph, which is now the default graph. You can change this option in View Settings.
b. We launched a completely new graph, Probability density, which shows the probability distribution of the conversion rate for every variation in the test. This graph helps you visualize the 99-percentile range of the conversion rate and compare how close or far apart your variations are.
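For intuition, a probability-density view like this is typically built from the Bayesian posterior of each variation's conversion rate. Here is a minimal sketch, assuming a simple Beta posterior (VWO's actual engine may differ), of how the 99-percentile range can be approximated from a variation's raw counts; the function name and the example counts are illustrative:

```python
import random

def conversion_rate_range(conversions, visitors, samples=100_000, seed=42):
    """Approximate the central 99% range of a variation's conversion
    rate, assuming a Beta(1 + conversions, 1 + non-conversions)
    posterior. The probability-density graph visualizes this full
    distribution; here we report only its 99-percentile range."""
    rng = random.Random(seed)
    a, b = 1 + conversions, 1 + (visitors - conversions)
    draws = sorted(rng.betavariate(a, b) for _ in range(samples))
    return draws[int(0.005 * samples)], draws[int(0.995 * samples)]

control = conversion_rate_range(conversions=120, visitors=2400)
variation = conversion_rate_range(conversions=150, visitors=2400)
# Overlapping ranges mean the variations are still hard to tell apart.
print(control, variation)
```

The more the two ranges overlap, the closer the variations are; well-separated ranges are what the decision rules below look for.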
You can now compare test performance for multiple goals, up to 5, in one view.
We have deprecated the variation report and added various filters to the goal view and goal comparison view to ensure that you can create any version of the report per your preference.
Winners and Smart Decision
Note: To declare a variation as a winner or a smart decision, VWO requires a minimum of 25 conversions per variation, 1500 visitors for the test, and at least a week of test runtime.
Previously, VWO suggested a Smart Decision when the absolute potential loss associated with deploying the leading variation was less than the threshold of caring.
Now, along with the potential loss of a variation being below the threshold of caring, the variation’s Probability to beat baseline must be higher than 95% for it to be declared a Smart Decision.
Previously, if the lower limit of the winning variation’s range was at least 1% higher than the upper limit of the losing variation’s range, VWO declared the leading variation the winner.
Now, the potential loss of the variation must be below the threshold of caring, and its Probability to be best must be higher than 95% for it to be declared the winner.
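Put together, the updated rules can be sketched as follows. This is an illustrative approximation, not VWO's actual engine: it assumes Beta posteriors and Monte Carlo estimates of the probability to beat baseline, the probability to be best, and the expected potential loss. The `decision_status` helper and the 0.5% threshold of caring are hypothetical:

```python
import random

def decision_status(variations, threshold_of_caring=0.005,
                    test_age_days=7, samples=20_000, seed=7):
    """Illustrative sketch of the updated decision rules (not VWO's
    actual engine). `variations` maps a name to (conversions,
    visitors); the first entry is treated as the baseline."""
    names = list(variations)
    # Eligibility: 25+ conversions per variation, 1500+ visitors in
    # the test, and at least a week of runtime.
    total_visitors = sum(v for _, v in variations.values())
    if (any(c < 25 for c, _ in variations.values())
            or total_visitors < 1500 or test_age_days < 7):
        return "not eligible yet"

    # Assume a Beta(1 + conversions, 1 + non-conversions) posterior
    # for each variation's conversion rate.
    rng = random.Random(seed)
    draws = {n: [rng.betavariate(1 + c, 1 + v - c) for _ in range(samples)]
             for n, (c, v) in variations.items()}

    baseline = names[0]
    # Leading variation = highest observed conversion rate.
    leader = max(names, key=lambda n: variations[n][0] / variations[n][1])

    beats_baseline = best = 0
    loss = 0.0
    for i in range(samples):
        rates = {n: draws[n][i] for n in names}
        top = max(rates.values())
        beats_baseline += rates[leader] > rates[baseline]
        best += rates[leader] == top
        loss += top - rates[leader]  # shortfall vs. the best draw

    p_beat_baseline = beats_baseline / samples
    p_best = best / samples
    potential_loss = loss / samples

    if potential_loss < threshold_of_caring and p_best > 0.95:
        return f"winner: {leader}"
    if potential_loss < threshold_of_caring and p_beat_baseline > 0.95:
        return f"smart decision: {leader}"
    return "keep running"

status = decision_status({"control": (100, 2000), "variation": (170, 2000)})
print(status)
```

With a clearly separated variation like this one, both conditions hold and the leader is declared the winner; with closely matched variations, the sketch keeps the test running.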
Previously, during test setup, you were asked to set up the goals you wanted to track. Along with the primary goal, you were asked for your average monthly traffic, the current conversion rate of the primary goal, the expected uplift from your changes, and your preference between speed and certainty. We have removed this part of the goal setup process and now use the data collected in the test to calculate the estimated duration and the threshold of caring.
This estimated duration becomes available one week after the test starts; until then, you will see an option to calculate the duration manually.
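For intuition about what goes into such an estimate, here is a classical two-proportion sample-size sketch turned into a test duration. This is illustrative only, not VWO's Bayesian calculation, and it uses the same kinds of inputs that were previously requested during goal setup; the function and its defaults are hypothetical:

```python
import math

def estimated_duration_days(baseline_rate, expected_uplift, daily_visitors,
                            num_variations=2):
    """Classical two-proportion sample-size estimate (two-sided 95%
    confidence, 80% power) converted into days. Illustrative only:
    VWO now derives its estimate from data tracked in the test."""
    z_alpha, z_beta = 1.96, 0.84
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_uplift)
    p_bar = (p1 + p2) / 2
    n_per_variation = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                        + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                       / (p2 - p1) ** 2)
    return math.ceil(n_per_variation * num_variations / daily_visitors)

# e.g. 5% baseline conversion rate, 10% expected uplift, 1000 visitors/day
days = estimated_duration_days(0.05, 0.10, daily_visitors=1000)
print(days)
```

Note how sensitive the duration is to the expected uplift: doubling the uplift shrinks the required sample, and hence the duration, by roughly a factor of four.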