So, you've set up a feature flag in VWO and put it in your code. The next step is deciding who sees that feature and how.
Do you want to show it randomly to a small percentage of users at first, or maybe test different versions? Or do you want to show it only to a certain segment of users?
That’s where rules come into play.
What are rules?
Think of rules as the gatekeepers for your feature flag. You set up conditions, and these rules check each user against those conditions to decide:
- Should this user see the feature at all?
- If they do see it, which version should they get? (This is key for A/B tests or personalized experiences).
Rules let you specify who sees what, so you can include or exclude anyone from your feature rollout, test, or personalization campaign.
Why handle rules inside feature flags?
Make changes super fast without technical help:
Without integrated rules, you'd probably wait for a developer to code, test, and deploy the user-targeting configurations you want. Even if you use a separate in-house or competitor tool for targeting, you often still need coordination and face potential delays in getting changes live.
With rules, you can hop into VWO, adjust a slider or a setting, save, and it's live—seriously fast. No code deployment is needed. Your product managers or marketers can directly control who sees what and when.
Integrated experimentation & personalization setup:
Rolling out features to a subset of users, running A/B tests, and personalizing experiences can all happen in one smooth workflow when rules are built into your feature flags. You no longer need to juggle between different tools for targeting, testing, and personalization—it just works together seamlessly.
This setup removes extra steps, reduces delays, and puts control in the hands of those who need it. And if something goes wrong, you always have a safety switch ready.
Set up Rules
Prerequisites
Before you can create a rule, ensure you have the following:
- An active VWO account.
- The VWO Feature Management SDK is correctly installed and initialized in your application(s).
- A Feature Flag created in VWO FME that represents the feature you want to release, test, or personalize.
- This feature flag must have Variables defined and multiple Variations configured (if you want to A/B test), where each variation uses different values for those variables. Learn how to create a feature flag.
- Metrics relevant to your experiment's hypothesis must be defined in VWO (associated with the feature flag). Learn about setting up metrics.
- The appropriate user permissions within VWO to create and manage Feature Flags and Rules.
Getting Started with Rules
- Inside your VWO account, go to the feature flag you want to work with.
- Navigate to the Rules tab.
- Now, choose the Environment in which you want these rules to work. You'll see options like Production (for live users), Staging, or Development.
Using Different Types of Rules
Rollouts
A Rollout Rule allows you to gradually release a feature to your users with full control.
Instead of launching a new feature to 100% of your audience at once, you can define specific audience segments and percentages of traffic to receive the feature over time.
Here is how it helps:
Progressive release and risk mitigation: First, gradually expose a feature to a small subset of users (e.g., 1%, 5%, 10%). If problems occur, you can quickly roll back the feature for that small group instead of affecting your entire user base.
Infrastructure stress testing: Safely verify if your backend systems (servers, databases) can handle the load introduced by a new feature by gradually increasing the traffic accessing it.
Targeted releases: Roll out features specifically to certain platforms (e.g., iOS only, Web only) or user segments (e.g., Beta testers, users in a specific country, users on a specific subscription plan).
Faster, safer shipping: By reducing the risk associated with each release, teams gain confidence to ship smaller updates more frequently, leading to faster iteration cycles.
Gathering early feedback: Collect valuable qualitative and quantitative feedback from a smaller, controlled group before a full public launch.
Step-by-Step Guide to Setting Up a Rollout Rule
- Select Rollout Rule: Make sure the Rollout tab is selected. Click Create new Rollout rule.
- Rule Name: Give your rule a clear and descriptive name that explains its purpose. For example, Rollout New Dashboard - Phase 1 (10% US Mobile Users).
- Description (Optional): Provide more details about the rule's function, which can be helpful for other team members. (For example, the redesigned user dashboard will be gradually rolled out, starting with 10% of Mobile users in the United States.)
- Audience targeting: Use the segmentation engine to define the characteristics of the users you want to target.
You can segment based on:
- User attributes and properties like specific User IDs, OS, Browser, Location (Country, Region, City), and Device Type (Mobile, Desktop, Tablet).
- Custom Attributes you send to VWO via the SDK (e.g., planType, userRole, betaTesterStatus).
- Linking Feature Flags: You can also target users based on whether another feature flag is enabled for them. This is useful for dependent features. For example, you have Flag A for a new backend service and Flag B for a new UI component that relies on that service. In the Rollout Rule for Flag B, you can add a condition to target only users for whom Flag A is currently ON.
Learn more about segmentation.
NOTE: For segments based on Browser, OS, or Device Type to work correctly when using VWO's FME SDKs, you must capture the necessary information (like the User-Agent string or device details - Learn more) in your application and explicitly pass it to the VWO SDK as custom attributes within the user context. VWO FME doesn't automatically capture browser/device details like the VWO Web Testing product.
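As a sketch of what passing such details might look like, the snippet below builds a user context dictionary with custom attributes parsed from a User-Agent string. The attribute names (`browser`, `deviceType`) and the context shape are illustrative assumptions, not VWO's documented schema; consult the SDK reference for the exact format.

```python
# Illustrative helper: derive device attributes from a User-Agent string and
# attach them to the user context you would pass to the VWO FME SDK.
# The key names below are assumptions for illustration only.

def build_user_context(user_id: str, user_agent: str) -> dict:
    """Builds a user context dict with custom attributes parsed from the UA."""
    ua = user_agent.lower()
    browser = "chrome" if "chrome" in ua else "firefox" if "firefox" in ua else "other"
    device_type = "mobile" if "mobile" in ua else "desktop"
    return {
        "id": user_id,
        "customVariables": {
            "browser": browser,
            "deviceType": device_type,
        },
    }

ctx = build_user_context(
    "user-123",
    "Mozilla/5.0 (Linux; Android 13) Chrome/120.0 Mobile Safari/537.36",
)
print(ctx["customVariables"])  # {'browser': 'chrome', 'deviceType': 'mobile'}
```

The point is simply that the SDK only sees what you put into the context: if the browser or device type is never captured and passed, segments based on them cannot match.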
- Allocate traffic percent: Once you've defined the target audience, use the slider or input box to set the percentage of that specific audience who should have the feature enabled by this rule.
- Configure automation: Automate how the rollout percentage changes over time or based on key metrics. This provides hands-free progressive delivery and safety checks.
- Time-Based Automation: Schedule percentage increases (or decreases) on specific dates. Examples:
- Start rollout at 10% on May 1st, 2025.
- Add step: Increase to 30% on May 8th, 2025.
- Add step: Increase to 60% on May 15th, 2025.
- Add step: Increase to 100% on May 22nd, 2025.
- Metric-Based Automation: Automatically adjust the rollout percentage based on the performance of key metrics you are tracking with VWO. This acts as a safety net or performance gate. Example scenarios:
- Safety Rollback: Start at 10%. If the Error Rate (metric) goes above 0.5% at any point, immediately roll back to 0%.
- Performance-Gated Increase: Start at 5%. If the Average Order Value (metric) remains stable or increases compared to baseline after 48 hours, increase rollout to 25%.
- Combined: Start at 10%. If the conversion rate improves by >2% after 3 days, increase to 40%. However, if the Crash Rate exceeds 1% at any time, roll back to 0%.
- You can combine multiple time-based and metric-based conditions to create sophisticated automation workflows.
- Adjust salt value (Optional): Enter a Salt Value. A salt is a text string that is combined with the User ID during the hashing process to determine whether a user falls into the allocated rollout percentage. Without a defined salt (or if it changes), slight variations in the overall user list could cause a user who previously received the feature at 10% to suddenly stop receiving it, even though the percentage is still 10%.
- Analyze feature impact: To analyze the impact of your feature rollout, enable impact reports. After creating the rollout rule, select the Analyze impact of the feature rollout checkbox.
Once the rollout is active, you can check the report under Flag Rollouts.
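The salt-based bucketing described above can be sketched as a deterministic hash: the same user ID and salt always map to the same bucket, so the set of users inside a 10% rollout stays stable, and raising the percentage only adds users rather than reshuffling them. This is a simplified illustration, not VWO's actual hashing algorithm:

```python
import hashlib

def bucket(user_id: str, salt: str) -> float:
    """Deterministically maps a user to a value in [0, 100]."""
    digest = hashlib.md5(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF * 100

def is_in_rollout(user_id: str, salt: str, percent: float) -> bool:
    """A user receives the feature when their bucket falls below the rollout percent."""
    return bucket(user_id, salt) < percent

# Same user + same salt -> same decision on every call, across deployments.
# Increasing percent from 10 to 30 keeps every user who was already included.
print(is_in_rollout("user-123", "new_dashboard", 10))
```

Changing the salt effectively reshuffles everyone into new buckets, which is why a stable salt matters for a consistent rollout experience.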
Experimentation
Experimentation rules enable you to run sophisticated A/B tests (and Multivariate tests) directly on features controlled by flags.
This powerful capability allows you to run complex experiments against key business metrics, such as search algorithms, AI model variations, checkout flows, user onboarding experiences, pricing strategies, API performance, and more.
Here is how it helps:
- Make data-driven decisions backed by statistical evidence.
- Validate hypotheses about feature improvements before committing to a full rollout.
- Optimize user experiences across multiple touchpoints consistently.
- Minimize the risk associated with launching new or modified features.
- Decouple the experimentation process from your regular code release schedule.
Step-by-Step Guide to Setting Up an Experimentation Rule
- Select Experimentation Rule: Make sure the Testing & Personalize tab is selected. Click Create new AB Testing rule.
- Define the test details:
- Test Name: Enter a clear and descriptive name for your experiment. For example, "Experiment - Homepage Headline Optimization Q2".
- Description (Optional): Add more context about the test's purpose or objective.
- Hypothesis (Optional): Using VWO's Plan module for collaboration and planning, you can link an existing hypothesis here. This helps keep your experimentation efforts organized and tied to strategic goals.
- Define the target audience: Using VWO's segmentation engine, specify which users should be included in this experiment. Learn more about segmentation options.
NOTE: For segments based on Browser, OS, or Device Type to work correctly when using VWO's FME SDKs, you must capture the necessary information (like the User-Agent string or device details - Learn more) in your application and explicitly pass it to the VWO SDK as custom attributes within the user context. VWO FME doesn't automatically capture browser/device details like the VWO Web Testing product.
- Configure traffic distribution and variations: Decide how traffic should be split among your feature flag's variations.
- Select Variations: Choose which of the variations defined in your feature flag you want to include in this specific A/B test.
- Distribute Traffic: Assign the percentage of the targeted audience traffic to each selected variation. You have several options:
- Equal Distribution (Default): VWO automatically splits traffic evenly among all selected variations.
- Custom Distribution: Manually enter the traffic percentage for each variation. The total must sum to 100%.
- Auto-distribute (Multi-Armed Bandit): VWO dynamically allocates more traffic to variations that perform better over time, maximizing the performance of the selected goal during the experiment. (This is often used when you want to optimize conversions during the test itself.)
- Force Users into Variations (Optional): This option lets you force specific users into specific variations instead of random assignment. Specify the user IDs and the variations they should see.
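The custom-distribution option above can be illustrated with the same deterministic-hashing idea used for rollouts: a user's hash point falls into one of the weighted ranges. This is a simplified sketch, not VWO's internal algorithm, and the variation names and weights are made up:

```python
import hashlib

def assign_variation(user_id: str, salt: str, weights: dict) -> str:
    """Assigns a user to a variation according to percentage weights summing to 100."""
    assert sum(weights.values()) == 100, "weights must sum to 100"
    digest = hashlib.md5(f"{salt}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF * 100  # deterministic point in [0, 100]
    cumulative = 0.0
    for name, pct in weights.items():
        cumulative += pct
        if point < cumulative:
            return name
    return name  # guard for the floating-point edge at exactly 100

weights = {"control": 50, "variation_1": 25, "variation_2": 25}
print(assign_variation("user-123", "homepage_headline", weights))
```

Because the assignment is a pure function of user ID, salt, and weights, a returning user lands in the same variation every time, which is the "sticky bucketing" behavior the salt setting below enables.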
- Configure advanced settings (optional):
- Configure the Testing Approach: Choose between:
- Fixed Horizon: The traditional approach where the experiment runs until a pre-calculated sample size or duration is met, after which statistical significance is determined.
- Sequential Testing: Allows you to analyze results and potentially reach a conclusion faster, as significance is evaluated continuously as data comes in. It's often suitable when speed is crucial, but requires careful interpretation.
- Apply Bonferroni Correction: This is recommended when testing multiple variations against a control. When making multiple comparisons, it adjusts the significance threshold to reduce the chance of incorrectly declaring a winner due to random fluctuations (false positives).
- Pause rule after conclusion: Check this box if you want VWO to automatically pause the experiment rule once a statistically significant winner is declared according to your settings.
- Enter Salt Value: It's highly recommended to enter a Salt Value (often the flag key itself or another consistent string). This ensures "sticky bucketing," meaning a specific user is consistently assigned to the same variation across multiple sessions, providing a better user experience during the test.
- Save and activate: Once you've configured all the settings, save the rule. Ensure both the main Feature Flag and the Experimentation Rule are activated (set to "Running") for the test to start collecting data.
- Monitor results: Allow the experiment to run long enough to collect meaningful data. You can monitor the performance of your variations and check for statistical significance in the VWO Reports section, under the Flag Experiments subsection.
Personalization
You can use personalization rules to tailor experiences based on user interests. Think of them as a way to show specific versions (variations) of your feature to particular groups of users.
Instead of showing everyone the same thing or splitting traffic randomly, like in an A/B test, you can decide exactly who sees what based on their characteristics or behavior.
Here is how it helps:
- Make things more relevant: You can show content, offers, or features that make sense for specific users (like showing a local promotion to users in a certain city).
- Improve user experience: Tailor the experience based on how users interact with your app or site (e.g., show simpler options for new users).
- Boost engagement and conversions: Users are often more likely to engage or convert when things feel more relevant.
Step-by-Step Guide to Setting Up a Personalization Rule
- Select Personalization Rule: Go to the Testing & Personalize tab. Click Create new Personalize rule.
- Name your rule: Give it a clear name so you (and your team) know what it does. For example, show a welcome offer to New Users or a UK Geo-Targeted Banner. You can also add a description to provide more details.
- Choose your target audience: This is key. Use VWO's segmentation options to define exactly which group of users this rule is for. Learn more about segmentation options.
NOTE: For segments based on Browser, OS, or Device Type to work correctly when using VWO's FME SDKs, you must capture the necessary information (like the User-Agent string or device details - Learn more) in your application and explicitly pass it to the VWO SDK as custom attributes within the user context. VWO FME doesn't automatically capture browser/device details like the VWO Web Testing product.
- Select the variation and traffic:
- Choose Variation: From the dropdown, pick the one specific variation (that you set up in your flag) that you want this target audience to see.
- Traffic Allocation: Set the percentage of your total VWO traffic that should be evaluated against this rule's targeting conditions. Often, you might set this to 100% for personalization, meaning all users will be checked to see if they match your target audience definition. If they match, they get the variation you selected.
- Configure advanced settings (optional):
- Assess visitor eligibility for variations on each visit: This option changes how "sticky" the personalization is:
- If checked (Enabled): VWO re-checks whether the user meets your targeting conditions every time they start a new session or their context updates. If their situation changes (for example, they change location or their plan type changes, and you update their context), they might see a different variation on their next visit if they no longer match the audience.
- If unchecked (Disabled - Default): Once a visitor qualifies for a variation based on this rule, they will keep seeing that same variation on subsequent visits, even if their details change later. VWO essentially "remembers" the first match for that user within this rule. For more information, see Configure multi-target mode for your Visitors.
- Enter Salt Value: Like with other rules, adding a Salt Value helps ensure VWO consistently processes users, which can be important for stability, especially if complex setups exist. You can often just use the flag key.
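The difference between the two eligibility modes can be sketched as follows. The `qualifies` check stands in for VWO's audience evaluation, and `user_store` is a hypothetical storage dict used purely to illustrate the behavior:

```python
# Illustrative sketch of "sticky" vs. re-evaluated personalization.

def qualifies(context: dict) -> bool:
    """Stand-in for the audience check, e.g. 'country is UK'."""
    return context.get("country") == "UK"

user_store = {}  # hypothetical store that remembers each user's first decision

def evaluate(user_id: str, context: dict, reassess_each_visit: bool) -> bool:
    if reassess_each_visit:
        return qualifies(context)                  # checked fresh on every visit
    if user_id not in user_store:
        user_store[user_id] = qualifies(context)   # first match is remembered
    return user_store[user_id]

# User qualifies on visit 1, then their context changes:
print(evaluate("u1", {"country": "UK"}, reassess_each_visit=False))  # True
print(evaluate("u1", {"country": "US"}, reassess_each_visit=False))  # True (sticky)
print(evaluate("u1", {"country": "US"}, reassess_each_visit=True))   # False (re-checked)
```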
- Save and activate: Click Save to finish creating the rule. Ensure the main Feature Flag and your new Personalization Rule are activated ("Running") so it starts working for your users.
Rules Debugger
Before you try your rules on real users, you must double-check that they work as expected. The debugger is helpful here!
Key Things to Remember
Just two key things to keep in mind as you set up your rules:
Define Priority by Structuring Rules in Order
VWO checks the rules on the list from top to bottom. You can drag and drop rules to change their order.
The first rule a user matches is the one that counts. As soon as VWO finds a rule that fits the user, it stops checking any further rules below it in that list.
Why does order matter? Suppose you show the same offer on both your website and mobile app. If you want people who already saw the offer on the website not to see it again on mobile, you'd put the website rule first and add a condition to the mobile rule that excludes those users.
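The top-to-bottom, first-match-wins evaluation described above can be sketched like this (the rule shapes are illustrative, not VWO's internal representation):

```python
def evaluate_rules(rules, context):
    """Returns the outcome of the first rule whose condition matches, else None."""
    for rule in rules:                     # rules are checked top to bottom
        if rule["condition"](context):
            return rule["outcome"]         # first match wins; stop checking
    return None

rules = [
    {"condition": lambda c: c.get("platform") == "web", "outcome": "web_offer"},
    {"condition": lambda c: c.get("platform") == "mobile", "outcome": "mobile_offer"},
    {"condition": lambda c: True, "outcome": "default"},  # catch-all last
]
print(evaluate_rules(rules, {"platform": "mobile"}))  # mobile_offer
```

Note that the catch-all rule only fires because it sits last; dragging it to the top would shadow every rule below it.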
Activate Rollout Rule before Testing and Personalization
Before your A/B testing or personalization rules are considered, a user must first pass through an enabled rollout rule. Think of rollout rules as the main gate.
If you don't have any rollout rules set up, but the main feature flag is switched on, then your testing and personalization rules will work for everyone who encounters the flag.
Troubleshooting
- Data not appearing in reports:
- Verify the VWO SDK is correctly implemented and initialized before feature flag checks occur.
- Ensure the correct userContext (with User ID and any necessary custom attributes) is passed.
- Confirm both the Feature Flag and the Rule are active ("Running").
- Check that your application code correctly uses the variable values obtained from getVariable to render the different experiences.
- Inconclusive results (low significance):
- The experiment may need more traffic or a longer duration.
- The change being tested might not significantly impact the chosen primary metric.
- Consider if the Minimum Detectable Effect (MDE) you aim for is realistic given your traffic levels.
- Users seeing inconsistent variations:
- This usually points to issues with bucketing. Ensure you have set a consistent salt value for the rule in the advanced settings.
- Also, ensure the userID being passed in the userContext is stable and consistent for the same user across sessions.
- User not seeing the correct variation:
- Double-check the SDK implementation. Is it initialized correctly? Is the userContext being passed with the right User ID and any custom attributes needed for your segmentation?
- Verify targeting logic. Are the segmentation rules defined correctly in VWO? Do they exactly match the characteristics of the user you're testing with?
- Check the rule conflicts/order. If a user could match multiple rules, VWO applies rules based on their order or internal logic. Ensure your rules aren't conflicting unintentionally.
FAQs
- What happens if a user qualifies for multiple rules on the same flag?
VWO processes rules in order from top to bottom. The first rule a user qualifies for determines the variation they see; rules below it are not evaluated.
- What's the difference between a Rollout rule and an Experimentation rule?
A Rollout rule is primarily for gradually and safely releasing a feature to increasing percentages of users, often based on audience segments or schedules. Its main goal is controlled exposure.
An Experimentation rule compares two or more feature variations against each other based on performance metrics. Its main goal is to determine which version performs best using statistical analysis. While it involves showing features to users, the core purpose is data collection for comparison.
- How does VWO determine a winner in an experiment?
VWO uses a Bayesian statistical approach. It calculates the probability of each variation being better than the control (or other variations) for the selected primary metric.
A winner is declared when this probability crosses a predefined confidence threshold (often 95% or higher), indicating a high degree of certainty that the observed difference is not due to random chance. You often have flexibility in adjusting some statistical parameters. You can learn more about VWO's stats engine here.
- Can I change an experiment after it starts?
While VWO might allow some modifications, it's generally not recommended to change core aspects of a running experiment (like traffic distribution, variations included, or the primary metric).
Doing so can invalidate the collected data, reset learning periods (for MAB), potentially re-bucket users inconsistently, and make the final results unreliable or difficult to interpret. If significant changes are needed, it's better to stop the current experiment and start a new one.
- Where do I see the results of my experiment?
Experiment results are available within the VWO platform's Reports section. Look for subsections related to Flag Testing, where you can select your specific experiment to view performance data, statistical significance, and metric conversion details for each variation.
- Why is my experiment taking so long to reach significance?
Several factors influence the time needed:
- Traffic Volume: Lower traffic to the experiment means slower data collection.
- Conversion Rate: Lower baseline conversion rates for your primary metric require larger sample sizes.
- Magnitude of Difference (Lift): Smaller differences between variations take longer to detect reliably than larger differences.
- Statistical Thresholds: Higher confidence levels (e.g., 99% vs. 95%) require more data.
- Check VWO's duration calculator or report projections for estimates based on your current data.
- How is personalization different from an A/B test?
A/B testing compares variations to see which performs better, usually by randomly splitting traffic. Personalization is about delivering a specific, pre-defined variation to a targeted group because you believe it's the most relevant experience for them. You're not typically measuring which variation is "better" in a personalization rule; you're aiming for relevance.
- Can I target multiple variations in one personalization rule?
No, a single personalization rule targets one specific audience segment and assigns them one specific variation. If you want to show different variations to different segments, you will create separate personalization rules for each segment-variation pair.
Need more help?
For further assistance or more information, contact VWO Support.