We recently launched a new version of our A/B testing reports. With this release, we aim to make it easier for our customers to analyze the impact of the changes they are testing and to make informed decisions about them. We overhauled the entire interface and gave users the control to customize the report dashboard according to their preferences.
Here is a list of everything that has been updated. Please note that these changes apply to the A/B test, Mobile app A/B test, and Split test reports only.
Report Interface
Filters
The first thing you’ll notice on the report interface is that the filters have moved to the top of the report. You can now filter by date range or by visitor segment, and the filters apply to all goals, graphs, and compared goals.
Notifications
The notification area alerts you to errors with a variation or with goal tracking, and shows whether the test has a winner or a smart decision.
Stats table
The stats table now shows only the median value for Expected conversion rate, Expected revenue per visitor, Improvement, and Expected revenue per conversion. If you prefer ranges for these metrics, you can customize this in View Settings.
You can also add a few historical metrics to your reports from View Settings. These metrics are calculated directly from the data tracked in the test and are not computed using our Bayesian engine.
We have also removed Absolute Potential Loss from the stats table by default. It can be added back through View Settings.
Graphs
There are two major updates to the graphs:
a. The conversion rate has been added to the date range graph, which is now our default graph. You can change this option in View Settings.
b. We have launched a completely new graph, Probability density, which shows the probability distribution of the conversion rate for every test variation. This graph helps you visualize the 99-percentile range of the conversion rate and compare how close or far apart your variations are.
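To make this concrete, here is a minimal sketch of how such a density and its 99-percentile range could be computed, assuming a Beta posterior over the conversion rate (a common model for binary conversion data; this is an illustrative assumption, not a description of our engine's internals, and the counts below are hypothetical):

    # Posterior of a conversion rate and its 99-percentile range,
    # assuming a Beta(1, 1) prior. Illustrative sketch only.
    from scipy import stats

    conversions, visitors = 120, 2400  # hypothetical tracked data
    posterior = stats.beta(1 + conversions, 1 + visitors - conversions)

    # Interpreting the 99-percentile range as the central interval
    # that holds 99% of the posterior mass.
    low, high = posterior.ppf(0.005), posterior.ppf(0.995)
    print(f"median conversion rate: {posterior.median():.4f}")
    print(f"99-percentile range: [{low:.4f}, {high:.4f}]")

Plotting posterior.pdf over a grid of rates gives the density curve itself; drawing one curve per variation shows how much the variations overlap.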
Compare goals
You can now compare test performance for multiple goals, with up to 5 goals in one view.
Variation report
We have deprecated the variation report and added various filters to the goal view and the goal compare view to ensure that you can create any version of the report per your preference.
Winners and Smart Decision
Smart Decision
Previously, VWO suggested a Smart Decision when the absolute potential loss associated with deploying the leading variation was less than the threshold of caring.
Now, along with the potential loss of a variation being below the threshold of caring, the variation’s Probability to beat baseline must be greater than or equal to 95% for it to be a Smart Decision.
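As a rough illustration of this combined check, here is a Monte Carlo sketch under the same Beta-posterior assumption as above; the conversion counts and the threshold-of-caring value are hypothetical, and the exact definitions our engine uses may differ:

    # Smart Decision sketch: potential loss below the threshold of
    # caring AND Probability to beat baseline >= 95%. All numbers
    # are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    draws = 100_000
    baseline = rng.beta(1 + 100, 1 + 2300, size=draws)   # 100 of 2400 converted
    variation = rng.beta(1 + 130, 1 + 2270, size=draws)  # 130 of 2400 converted

    p_beat_baseline = (variation > baseline).mean()
    # Expected loss if we deploy the variation and it is actually worse.
    potential_loss = np.maximum(baseline - variation, 0).mean()

    threshold_of_caring = 0.002  # hypothetical threshold
    smart_decision = (potential_loss < threshold_of_caring
                      and p_beat_baseline >= 0.95)
    print(p_beat_baseline, potential_loss, smart_decision)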
Winner
Previously, if the lower limit of the winning variation’s range was at least 1% higher than the upper limit of the losing variation’s range, VWO declared the leading variation the winner.
Now, the potential loss of the variation must be below the threshold of caring, and its Probability to be best must be greater than or equal to 95% for it to be declared the Winner.
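The Winner check is similar, but the Probability to be best considers every variation at once. A hedged sketch, reusing the Monte Carlo approach above with hypothetical data:

    # Winner sketch: potential loss below the threshold of caring AND
    # Probability to be best >= 95%, across all variations.
    # Illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    draws = 100_000
    samples = np.stack([
        rng.beta(1 + 100, 1 + 2300, size=draws),  # control (hypothetical)
        rng.beta(1 + 130, 1 + 2270, size=draws),  # variation 1
        rng.beta(1 + 112, 1 + 2288, size=draws),  # variation 2
    ])

    best = samples.argmax(axis=0)  # index of the best variation per draw
    leader = np.bincount(best, minlength=len(samples)).argmax()
    p_best = (best == leader).mean()

    # Expected loss of deploying the leader vs. the best of the rest.
    rest_max = np.delete(samples, leader, axis=0).max(axis=0)
    potential_loss = np.maximum(rest_max - samples[leader], 0).mean()

    threshold_of_caring = 0.002  # hypothetical
    is_winner = potential_loss < threshold_of_caring and p_best >= 0.95
    print(leader, p_best, potential_loss, is_winner)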
Test Setup
During the test setup process, you are asked to set up the goals you want to track in the test. Previously, along with the primary goal, you were also asked for your average monthly traffic, the current conversion rate of the primary goal, the expected uplift from your changes, and the speed versus certainty mode. We have removed this part of the goal setup process and now use the data collected in the test to calculate the estimated duration and the threshold of caring.
The estimated duration becomes available one week after the test starts; until then, you will see an option to calculate the duration manually.