Feature Flag Experiments
GrowthBook allows you to run experiments using feature flags, so any feature can be released as an A/B test. This approach is ideal for more complex experiments requiring multiple code changes, or for companies that want to build an experimentation culture and measure the impact of any feature or code change on their metrics.
Running Experiments with Feature Flags
Feature flag experiments are created with an experiment override rule. This rule randomly assigns variations to your users based on the configuration you select. When a user is placed in an experiment via an experiment override rule, the assignment is tracked in your data warehouse using the trackingCallback defined in your SDK implementation.
Here's what an Experiment rule looks like in the GrowthBook UI:
This modal window allows for a great deal of flexibility and customization in how you run your experiments. Let's go through each of the options:
Experiment Targeting Conditions
Experiment rules can be targeted at specific user or client attributes. Only users who match the targeting conditions will be included in the experiment. You can add multiple targeting conditions per rule and multiple rules per feature, which gives you great flexibility in tailoring your experiment to specific audiences. You can read more about targeting. By default, all users are included.
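To make the targeting behavior concrete, here is a simplified sketch of attribute matching, assuming flat equality and `$in` conditions only; GrowthBook's actual targeting supports a richer MongoDB-style operator syntax, and the attribute names below are hypothetical:

```javascript
// Return true if the user's attributes satisfy every clause of the condition.
// Supports plain equality and a MongoDB-style { $in: [...] } membership check.
function matchesCondition(attributes, condition) {
  return Object.entries(condition).every(([key, expected]) => {
    const actual = attributes[key];
    if (expected && typeof expected === 'object' && Array.isArray(expected.$in)) {
      return expected.$in.includes(actual);
    }
    return actual === expected;
  });
}

// Only users matching the condition are considered for the experiment rule;
// an empty condition ({}) matches everyone, mirroring the default behavior.
const condition = { country: { $in: ['US', 'CA'] }, loggedIn: true };
console.log(matchesCondition({ country: 'US', loggedIn: true }, condition)); // true
console.log(matchesCondition({ country: 'FR', loggedIn: true }, condition)); // false
```

Note that targeting is evaluated before assignment, so users who fail the condition are never bucketed and never trigger a tracking event.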