Rules (Force, Rollout, Experimentation)

Every feature has a default value that is served to all users. The real power comes when you define override rules that let you run experiments and/or change the value for specific users.

Feature override rules UI

Override rules are defined separately for each environment (e.g. dev and production). This way you can, for example, test an experiment rule in dev first before deploying it to production.

The first matching rule for a user will be applied, so order matters. If there are no matching rules, the default value will be used.

Targeting Conditions

Any rule can specify conditions to limit which users the rule applies to. These conditions are evaluated against the attributes passed into the SDK.
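
For example, with the JavaScript SDK you pass attributes when constructing the GrowthBook instance and can update them later. This is a minimal sketch; the clientKey and attribute names below are placeholders:

```ts
import { GrowthBook } from "@growthbook/growthbook";

// Placeholder clientKey and attribute values, for illustration only.
const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123",
  // Rule conditions are evaluated against these attributes.
  attributes: {
    id: "user-123",
    country: "US",
    company: "acme",
    loggedIn: false,
  },
});

await gb.init();

// Attributes can be updated later, e.g. after the user logs in.
gb.setAttributes({
  id: "user-123",
  country: "US",
  company: "acme",
  loggedIn: true,
});
```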

Learn more about targeting here.

Rule conditions UI

Forced Value

The simplest type of override rule is a "Forced Value" rule. This forces everyone who matches the targeting condition to get a specific value. For example, you could have a feature default to OFF and use force rules to turn it ON for a specific list of countries.
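
With the JavaScript SDK, the country example could look roughly like this. The feature key, country list, and inline features payload are made up for illustration; in practice the rules are defined in the GrowthBook UI and fetched from the API:

```ts
import { GrowthBook } from "@growthbook/growthbook";

const gb = new GrowthBook({
  attributes: { id: "user-123", country: "CA" },
  // Inline features shown only to illustrate the rule; normally these
  // come from the GrowthBook API based on what you configure in the UI.
  features: {
    "new-checkout": {
      defaultValue: false, // OFF for everyone by default
      rules: [
        {
          // Targeting condition: only users in these countries match.
          condition: { country: { $in: ["US", "CA"] } },
          // Forced value served to everyone who matches.
          force: true,
        },
      ],
    },
  },
});

gb.isOn("new-checkout"); // true, because country is "CA"
```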

Force rule UI

Percentage Rollout

Percentage Rollout rules let you gradually release a feature value to a random sample of your users.

Rollout rule UI

Rollouts are most useful when you want to make sure a new feature doesn't break your app or site. You start by releasing to a small percentage of users (say 10%), and if your metrics look healthy after a while, you increase to 30%, and so on.

For rollout rules, you choose a user attribute to use for the random sample. Users with the same attribute value will be treated the same (either included or not included in the rollout). For example, if you choose a "company" attribute, then multiple employees in the same company will get the same experience.
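
Conceptually, the SDK hashes the chosen attribute value to a number between 0 and 1 and includes the user if it falls below the rollout percentage, which is why the same attribute value always gets the same decision. The sketch below only illustrates that idea and is not the actual hashing algorithm the SDKs use:

```ts
import { createHash } from "crypto";

// Illustration only: deterministically map an attribute value to [0, 1)
// and include it in the rollout if it falls under the coverage.
// GrowthBook's SDKs use their own hashing scheme; this just shows the concept.
function inRollout(attributeValue: string, coverage: number): boolean {
  const hex = createHash("sha256").update(attributeValue).digest("hex").slice(0, 8);
  const bucket = parseInt(hex, 16) / 0xffffffff;
  return bucket < coverage;
}

// Same attribute value => same bucket, so everyone at "acme"
// gets the same decision for a 10% rollout keyed on "company".
inRollout("acme", 0.1);
```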

note

Percentage Rollouts do not fire any tracking calls so there's no way to precisely correlate the rollout to changes in your application's metrics. If this is a concern, we recommend Experiment rules (below) instead.

Experiments

The last type of rule is an Experiment. This randomly splits users into buckets, assigns them different values, and tracks that assignment in your data warehouse or analytics tool.

Experiments are most useful when you aren't sure which value to assign yet.

Here's what an Experiment rule looks like in the GrowthBook UI:

Experiment rule UI

In the vast majority of cases, you want to split traffic based on either a logged-in user id or some sort of anonymous identifier like a device id or session cookie. As long as the user has the same value for this attribute, they will always get assigned the same variation. In rare cases, you may want to use an attribute such as company or account instead, which ensures all users in a company will see the same thing.

You can control both the percent of users included and the traffic split between the variations. For example, if you include 50% of users and do a 40/60 split, then 20% will see the first variation, 30% will see the second variation, and the remaining 50% will skip the rule entirely and move on to the next rule (or the default value if there are no more matching rules).
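
To make that arithmetic concrete, a rule like the one described above could be sketched as follows (the feature values, key, and exact payload shape are illustrative):

```ts
// Illustrative experiment rule: 50% of traffic enters the experiment,
// split 40/60 between two variations by hashing the "id" attribute.
const experimentRule = {
  key: "checkout-button-color", // hypothetical experiment key
  variations: ["blue", "green"], // values served to each bucket
  coverage: 0.5, // 50% of users are included at all
  weights: [0.4, 0.6], // 40/60 split among included users
  hashAttribute: "id", // attribute used for randomization
};
// Net effect: 20% of all users see "blue", 30% see "green", and the
// remaining 50% skip this rule and fall through to the next one.
```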

When a user is placed in an experiment by an Experiment override rule, the SDK records the assignment in your data warehouse or analytics tool through its trackingCallback method.
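
With the JavaScript SDK, for example, you provide this callback when constructing the GrowthBook instance. The console.log below is a stand-in for whatever analytics or warehouse call you use, and the clientKey is a placeholder:

```ts
import { GrowthBook } from "@growthbook/growthbook";

const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123", // placeholder
  attributes: { id: "user-123" },
  // Called whenever a user is assigned a variation by an Experiment rule.
  trackingCallback: (experiment, result) => {
    // Replace with your analytics or data warehouse event call.
    console.log("Experiment Viewed", {
      experimentId: experiment.key,
      variationId: result.key,
    });
  },
});
```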

You can analyze the result of a Feature Experiment rule the same way you would any experiment in GrowthBook.

Namespaces

If you have multiple experiments that may conflict with each other (e.g. background color and text color), you can use namespaces to make the conflicting experiments mutually exclusive.

Users are randomly assigned a value from 0 to 1 for each namespace. Each experiment in a namespace has a range of values that it includes. Users are only part of an experiment if their value falls within the experiment's range. So as long as two experiment ranges do not overlap, users will only ever be in at most one of them.
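
In the SDK, each experiment rule in a namespace carries its namespace id and range; two rules with non-overlapping ranges can never both include the same user. A rough sketch with made-up keys and values:

```ts
// Two hypothetical experiments sharing the "colors" namespace.
// The ranges [0, 0.5) and [0.5, 1) don't overlap, so a user whose
// namespace value is, say, 0.37 can only be in the first experiment.
const backgroundColorRule = {
  key: "background-color-test",
  variations: ["white", "black"],
  weights: [0.5, 0.5],
  hashAttribute: "id",
  namespace: ["colors", 0, 0.5],
};

const textColorRule = {
  key: "text-color-test",
  variations: ["dark", "light"],
  weights: [0.5, 0.5],
  hashAttribute: "id",
  namespace: ["colors", 0.5, 1],
};
```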

Namespaces

To use namespaces, create a new namespace or modify an existing one under SDK Configuration → Namespaces in the GrowthBook UI's left navigation bar.

Testing Rules

You can test your rules in the GrowthBook UI by clicking "Test Feature Rules" below the rules. This opens a form where you can adjust user attributes and see in real time which rules apply and what value the user will get. User attributes can also be entered as a JSON object by clicking the JSON tab. You can also expand the results to see debug information about why each rule was or wasn't applied.

Test Feature Rules

Archetype

Archetypes let you save preset user attributes so you can see how the rules will apply to them. This is useful if you have specific sets of users you frequently want to target features to. Archetypes are automatically shown along with the resulting feature values at the top of the test rules form; mouse over any value to see debug information about why that value was returned. Archetypes are part of the GrowthBook Enterprise plan.

Archetypes

From the testing form, you can click the Save Archetype button to open the Create Archetype modal and create a new archetype.