Overview
VWO provides a rich set of insights in your campaign reports to help you make data-driven decisions. Along with performance metrics, probability values, and confidence thresholds, you may also see warnings, recommendations, and informational messages in the reports.
This article explains what each recommendation, error message, and warning means, and the actions you should take in response.
Understand Early Wins and Statistical Significance
Before reviewing specific indicators, it is important to understand how VWO calculates results. VWO tests are designed to identify a winner as early as possible, saving you time and traffic.
The following concepts explain how VWO accelerates result evaluation and why certain recommendations may appear earlier than expected:
- The Winner Threshold: As soon as the probability of improvement crosses your defined threshold (95% by default), the variation is considered a statistically significant winner.
- Beating the Estimate: Winning recommendations often arrive earlier than the estimated duration. This happens when the actual uplift delivered by a variation is larger than the Minimum Detectable Effect (MDE) you initially planned for.
- First Across the Line: In experiments with multiple variations, the variation that reaches statistical significance first is typically the one with the highest uplift at that point in time. VWO declares a conclusion as soon as a winner is identified. However, before the campaign duration ends, the winning variation can change if another variation later crosses the probability threshold with a higher expected improvement.
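VWO's statistics engine is proprietary, but the winner-threshold idea above can be illustrated with a common Bayesian approach: model each conversion rate with a Beta posterior and estimate the probability that the variation beats the baseline. The sketch below is illustrative only, not VWO's actual implementation.

```python
# Illustrative sketch only -- VWO's statistics engine is proprietary.
# A common Bayesian approach: model each conversion rate with a
# Beta(1 + conversions, 1 + non-conversions) posterior and estimate
# P(variation rate > baseline rate) by Monte Carlo sampling.
import random

def probability_of_improvement(base_conv, base_visitors,
                               var_conv, var_visitors,
                               samples=20000, seed=42):
    """Estimate the probability that the variation beats the baseline."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        b = rng.betavariate(1 + base_conv, 1 + base_visitors - base_conv)
        v = rng.betavariate(1 + var_conv, 1 + var_visitors - var_conv)
        wins += v > b
    return wins / samples

# A variation converting 120/1000 against a baseline at 90/1000:
p = probability_of_improvement(90, 1000, 120, 1000)
print(round(p, 3))  # comfortably above the default 95% winner threshold
```

Under this model, a report would flag the variation as a winner because the probability of improvement exceeds 95%; the symmetric calculation (swapping variation and baseline) falling below 5% corresponds to the disable threshold described later.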
With this context on how VWO accelerates results, the following sections explain the specific messages, errors, and recommendations you may encounter in your report.
Understand Warnings, Recommendations, and Error Messages in Campaign Reports
Campaign reports display a variety of indicators based on how your campaign is performing, how data is being tracked, and the statistical reliability of your results.
These indicators help you:
- Detect tracking or campaign setup issues early
- Understand statistical reliability
- Recognize unstable data
- Identify winning or underperforming variations
- Interpret filtered or dynamic traffic data
- Know when you can safely stop a test
Each section below provides a comprehensive list of all messages you may see in your reports, along with their meanings.
Performance-Based Recommendations
These messages appear when VWO's statistical engine has analyzed the performance of your variations relative to the baseline.
Winning Recommendation (Variation X is better than baseline)
- Cause: The Probability of Improvement for a specific variation is significantly higher than for the baseline. The result is declared with high confidence before the scheduled campaign duration ends.
- Meaning: One variation is clearly performing better than the original.
- Recommended Action: Pause & Deploy. You can safely pause the campaign to stop further data collection, and implement the winning variation on your site to start capitalizing on the improvement immediately.
Disable Recommendation (Variation X is statistically NOT better than baseline)
- Cause: Statistical analysis determines that the variation is very unlikely to outperform the baseline, even if more data is collected. This warning is issued when the probability of improvement falls below the disable threshold (5% by default).
- Meaning: Continuing to run this variation wastes traffic.
- Recommended Action: Disable Variation. To speed up your test, disable this specific variation. This focuses your traffic on the remaining, better-performing variations, allowing you to conclude the test faster.
Stick to Baseline (Stick to the baseline, as no variation shows the potential to outperform the baseline.)
- Cause: The campaign has reached its visitor limit, and the probability of improvement for all variations is below the disable threshold (5% by default).
- Meaning: The observed improvements are not statistically reliable.
- Recommended Action: Accept Baseline. If the baseline performs adequately and no variation reaches significance, you can safely conclude the test and stick with the existing experience. Start a new test only if you have a hypothesis for a stronger impact.
No Significant Winner (No significant winner)
- Cause: The campaign hit its planned data limit without a clear winner.
- Meaning: Results are inconclusive.
- Recommended Action: Decision Time. With inconclusive results, you can conclude the test and keep the existing experience, or start a new test with a revised hypothesis.
Experiment Vitals and Tracking Issues
These are critical alerts indicating that data might not be flowing correctly. If you see these, immediate investigation is usually required.
Data Tracking Warning (No visitors became part of the campaign in the first 6 hours / Campaign results might be delayed due to low traffic)
- Cause: No visitors have landed on the campaign URL, or there is an issue with the VWO SmartCode execution.
- Meaning: No data is being collected.
- Recommended Action: Check Setup. Verify that the VWO SmartCode is installed and executing correctly, and that the campaign URL matches the pages your visitors are actually reaching.
Conversion Tracking Warning (No conversions were tracked for this metric in the first 6 hours)
- Cause: Visitors are being tracked, but no successful goal completions are being recorded.
- Meaning: This likely indicates a metric or goal setup issue, or a script blocking the conversion event.
- Recommended Action: Verify Metrics/Goals. Confirm that the metric or goal is configured correctly and that no script is blocking the conversion event.
Invalid Metric Sum (Conversions are recorded, but the total value is 0. We recommend reviewing the metric or goal setup.)
- Cause: Conversions are occurring, but the revenue or value associated with them is either missing or zero.
- Meaning: Your revenue tracking code is likely misconfigured.
- Recommended Action: Review the metric or goal setup, and verify that your revenue tracking code sends a non-zero value with each conversion.
Baseline/Variation at 100% Conversion Rate (Conversion rate for one of the variations is 100%, which is unusual and may indicate a setup issue. We recommend reviewing the configuration.)
- Cause: Every single visitor to a variation (or the baseline) is being recorded as a conversion.
- Meaning: This is practically impossible in real-world scenarios and indicates a setup error, for example, a goal that is achieved immediately on the landing page.
- Recommended Action: Immediate Review. Check your goal definition, and ensure the conversion goal is not set to the same URL as the campaign URL.
Duration and Timeline Alerts
These messages help you understand the estimated time to significance and ensure you do not stop a test too early or let it unnecessarily run for too long.
Minimum Runtime Alert (VWO recommends waiting at least 7 days before a recommendation to account for potential weekly visitor behavior patterns)
- Cause: VWO recommends capturing a full business cycle (weekly visitor behavior patterns).
- Meaning: Stopping early might let a specific day, for example, a high-traffic Sunday, skew your results.
- Recommended Action: Wait. For the most reliable results, let the test run for at least 7 days. If you are confident your business has no weekly seasonality, you can ignore this alert.
- Longer-Term Variations: If you expect a variation's impact to extend beyond a week, wait until that period has passed before making a decision, and ignore interim recommendations.
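To make the duration trade-offs concrete, the sketch below uses a textbook two-proportion power calculation (not VWO's internal duration formula) to show why a smaller MDE or lower daily traffic stretches the estimated runtime well past the 7-day minimum.

```python
# Illustrative sketch only -- not VWO's internal duration formula.
# A textbook two-proportion power calculation shows why a smaller MDE
# or lower daily traffic stretches the estimated test duration.
import math

def estimated_days(baseline_rate, relative_mde, daily_visitors_per_variation,
                   z_alpha=1.96, z_beta=0.84):  # two-sided 5% alpha, 80% power
    """Days per variation needed to detect a relative lift of `relative_mde`."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled)) +
          z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    days = math.ceil(n / daily_visitors_per_variation)
    return max(days, 7)  # respect the 7-day minimum for weekly patterns

# A 10% relative MDE on a 5% baseline with 500 daily visitors per variation:
print(estimated_days(0.05, 0.10, 500))  # roughly two months, not one week
```

This also illustrates "Beating the Estimate" from earlier: if the real uplift is much larger than the planned MDE, the required sample shrinks sharply and a winner can be declared well before the estimated duration.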
Configuration and Conduct Warnings
These alerts appear when changes are made to a live test or when specific report modes are active.
Experimentation Conduct (Fault in experiment configuration, as a metric was changed in a running campaign.)
- Cause: Critical settings (metrics, traffic distribution, enabled variations, or variation content) were changed while the test was live.
- Meaning: Results may be biased or unreliable because the metric definition changed partway through.
- Recommended Action: Restart Suggested. For strict scientific accuracy, clone the campaign and start fresh (if you want to retain the existing data), or flush the data of the existing campaign and restart it. For more information on how to clone a campaign, see Cloning a Test in VWO. If you continue, annotate the date of the change in your analysis.
Guardrails Alert (Guardrail checks will begin after the minimum data is collected.)
- Cause: One or more guardrails that you set have been breached: a variation underperformed on a metric you deemed crucial, or is negatively impacting a Guardrail Metric, for example, Page Load Time or Crash Rate.
- Meaning: While a variation might increase conversions, it may compromise the user experience.
- Recommended Action: Pause Immediately. If a guardrail is triggered, pause the specific variation to prevent a negative impact on your user experience or business goals. If the variation is not disabled automatically, disable it yourself and direct more traffic to the remaining variations for a faster conclusion.
Post-Segmentation Filter (Duration, Recommendation, and Conclusion are not applicable on filtered data)
- Cause: If you apply a filter after a test has run, the system analyzes only a subset of the original data.
- Meaning: Metrics that rely on the full dataset, such as Estimated Duration, Recommendation, and Conclusion, cannot be accurately calculated for a subset.
- Recommended Action: Informational. This is expected behavior. Use filters for insights, but rely on the main report for conclusion timelines.
Dynamic Traffic Allocation (Note that Duration and Conclusion are not available in the Dynamic Traffic Allocation Mode. It is used for maximizing conversions in a running campaign.)
- Cause: You are using Multi-Armed Bandit (MAB) logic to route traffic to the best performer automatically.
- Meaning: Because traffic is not split evenly, the system does not calculate standard statistical conclusions (p-values) in the same way.
- Recommended Action: Focus on Conversions. Understand that this mode prioritizes yield (conversions) over knowledge (statistical significance).
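VWO does not document the internals of Dynamic Traffic Allocation here, but Multi-Armed Bandit systems commonly use Thompson sampling. The sketch below, written under that assumption, shows why traffic drifts toward the best performer instead of a fixed split.

```python
# Illustrative sketch only -- assumes Thompson sampling, a common
# Multi-Armed Bandit strategy: each visitor is routed to the variation
# whose sampled conversion rate is highest, so traffic drifts toward
# the current best performer instead of a fixed 50/50 split.
import random

def thompson_pick(stats, rng):
    """stats maps variation name -> (conversions, visitors); returns a pick."""
    draws = {
        name: rng.betavariate(1 + conv, 1 + visits - conv)
        for name, (conv, visits) in stats.items()
    }
    return max(draws, key=draws.get)

# In a real bandit, `stats` is updated after every visitor; it is held
# fixed here for brevity.
rng = random.Random(7)
stats = {"Control": (90, 1000), "Variation 1": (130, 1000)}
picks = [thompson_pick(stats, rng) for _ in range(1000)]
share = picks.count("Variation 1") / len(picks)
print(round(share, 2))  # the stronger variation receives most of the traffic
```

Because the split is adaptive rather than fixed, the standard duration and conclusion calculations do not apply, which is exactly what the report message states.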
Expected Value/Improvement Missing
- Cause: This can occur for one or more reasons, for example, no expected improvement value is configured for the metric, or insufficient historical data is available.
- Meaning: The system is unable to calculate the expected improvement value for your test results, and the report cannot show the specific lift value at this moment.
- Recommended Action:
  - Configure Metrics: Set an expected improvement value for your test metric.
  - Review Data: Ensure sufficient historical data is available for accurate calculations. For more information, see Interpreting Expected Improvement in Testing.
  - Contact Support: If you have sufficient visitor data and still see this message, contact VWO Support to verify whether it is a display bug.
Account and System Warnings
These alerts are related to your VWO account status or specific system limitations.
Quota Reached/Subscription Ended
- Cause: The account has exhausted its purchased visitor quota, or the subscription has expired.
- Meaning: The campaign is effectively paused and not collecting new data.
- Recommended Action: Upgrade/Renew. Contact your Account Manager to increase the visitor limit or renew the subscription to resume testing.
By understanding these recommendations and addressing any accompanying alerts, you can leverage VWO reports to make informed decisions that optimize your website or app.
Need more help?
For further assistance or more information, contact VWO Support.