Node.js
We officially support Node 18 and above.
Installation
Install with a package manager
npm install --save @growthbook/growthbook
yarn add @growthbook/growthbook
pnpm add @growthbook/growthbook
Quick Usage
First, create and instantiate a singleton GrowthBook client. You can do this as part of your server startup script.
const { GrowthBookClient } = require("@growthbook/growthbook");

const gbClient = new GrowthBookClient({
apiHost: "https://cdn.growthbook.io",
clientKey: "sdk-abc123"
});
await gbClient.init({timeout: 3000});
Then, you can evaluate a feature anywhere in your app by providing a feature key and a userContext.
// User context with attributes that feature flags can use for targeting
const userContext = {
attributes: {
id: "123",
}
}
// Boolean on/off flags
if (gbClient.isOn("my-feature", userContext)) {
console.log("My feature is on!");
}
// String, Number, or JSON flags
const value = gbClient.getFeatureValue("my-string-feature", "fallback", userContext);
console.log(value);
Express Middleware
In Express, it can be helpful to add a user-scoped GrowthBook instance. This will let you define a userContext object once instead of passing it into every feature evaluation call throughout your app.
This can be accomplished easily with a middleware.
app.use((req, res, next) => {
// Define your user context once
const userContext = {
attributes: {
url: req.url,
id: req.user.id
}
}
// Create a scoped GrowthBook instance and store in req
req.growthbook = gbClient.getScopedInstance(userContext);
next();
});
Now, you can use this scoped instance from any route without needing to pass in userContext again.
app.get("/", (req, res) => {
// No need to pass in userContext to a scoped instance
req.growthbook.isOn("my-feature");
req.growthbook.getFeatureValue("my-string-feature", "fallback");
});
Scoped instances were first introduced in SDK version 1.3.1.
GrowthBook class (legacy)
The GrowthBookClient class was introduced in SDK version 1.3.0 to improve performance in back-end environments by up to 3x. It does this by re-using the core instance across many different user requests.
For backwards compatibility, we still support the old way of doing things with the GrowthBook class, where you would create a new instance for every incoming request.
const { GrowthBook } = require("@growthbook/growthbook");

// Example using Express
app.use(function(req, res, next) {
// Create a GrowthBook instance and store in the request
req.growthbook = new GrowthBook({
apiHost: "https://cdn.growthbook.io",
clientKey: "sdk-abc123",
// Also include user/route specific settings
attributes: {
id: req.user.id
}
});
// Clean up at the end of the request
res.on('close', () => req.growthbook.destroy());
// Wait for features to load (will be cached in-memory for future requests)
req.growthbook.init({timeout: 1000}).then(() => next())
});
The GrowthBook class is identical to the one used in client-side environments. Check out the Client-side JavaScript SDK docs for more info on all of the settings and methods available.
Loading Features and Experiments
In order for the GrowthBook SDK to work, it needs to have feature and experiment definitions from the GrowthBook API. There are a few ways to get this data into the SDK.
Built-in Fetching and Caching
If you pass an apiHost and clientKey into the GrowthBookClient constructor, it will handle the network requests, caching, retry logic, etc. for you automatically.
const gbClient = new GrowthBookClient({
apiHost: "https://cdn.growthbook.io",
clientKey: "sdk-abc123",
});
// Wait for features to be downloaded with a timeout (in ms)
await gbClient.init({ timeout: 2000 });
Error Handling
In the case of network issues that prevent the features from downloading in time, the init call will not throw an error. Instead, it will stay in the default state where every feature evaluates to null.
You can still get access to the error if needed:
const res = await gbClient.init({
timeout: 1000
});
if (res.error) {
throw res.error;
}
The return value of init has 3 properties:
- success - true if the GrowthBook instance was populated with features/experiments. Otherwise false
- source - Where this result came from. One of the following values: network, cache, init, error, or timeout
- error - If success is false, this will contain an Error object with more details about the error
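For example, a minimal sketch of inspecting these properties during server startup:
const res = await gbClient.init({ timeout: 2000 });
console.log("GrowthBook init:", res.success, res.source);
if (res.error) {
  console.error("GrowthBook failed to load features:", res.error);
}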
Custom Integration
If you prefer to handle the network and caching logic yourself, you can pass in a full JSON "payload" directly into the SDK. For example, you might store features in Postgres or Redis.
await gbClient.init({
payload: {
features: {
"feature-1": {...},
"feature-2": {...},
"another-feature": {...},
}
}
})
The data structure for "payload" is exactly the same as what is returned by the GrowthBook SDK endpoints and webhooks.
Note: you don't need to specify clientKey or apiHost on your GrowthBook instance since no network requests are being made in this case.
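For example, a minimal sketch of loading a previously stored payload from Redis before initializing (the gb_features key and redisClient are hypothetical names used for illustration):
// Hypothetical example: the SDK payload was previously stored in Redis
const raw = await redisClient.get("gb_features");

await gbClient.init({
  payload: JSON.parse(raw)
});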
Synchronous Init
There is an alternate synchronous version of init named initSync, which can be useful in some environments. There are some restrictions/differences:
- You MUST pass in payload
- The payload MUST NOT have encrypted features or experiments
- The return value is the GrowthBook instance to enable easy method chaining
const gbClient = new GrowthBookClient().initSync({
payload: {
features: {...}
}
});
Refreshing Features
By default, the GrowthBookClient will only fetch features once during initialization. This works great for short-running processes (e.g. serverless functions).
For long-running Node.js processes, there are 2 main approaches to keeping feature definitions up-to-date.
- Streaming
- Polling
Streaming Updates
The GrowthBook SDK supports streaming with Server-Sent Events (SSE). When enabled, changes to features within GrowthBook will be streamed to the SDK in realtime as they are published. This is only supported on GrowthBook Cloud or if running a GrowthBook Proxy Server.
Node.js does not natively support SSE, but there is a small library you can install:
npm install --save eventsource
yarn add eventsource
pnpm add eventsource
Then, configure the polyfill and enable streaming during init:
const { setPolyfills } = require("@growthbook/growthbook");
// Configure GrowthBook to use the eventsource library
setPolyfills({
EventSource: require("eventsource"),
});
const gbClient = new GrowthBookClient({
clientKey: "sdk-abc123"
});
// Enable streaming in the init call
await gbClient.init({ streaming: true });
This will make an initial network request to download the features payload from the GrowthBook API. Then, it will open a streaming connection to listen to updates.
Every call to evaluate feature flags will use the latest available payload.
Polling Updates
If your environment doesn't support streaming updates or you want a simpler option, you can use a polling approach:
// Refresh once every 5 minutes
setInterval(() => gbClient.refreshFeatures(), 5*60*1000);
We don't recommend polling more often than once per minute. If you need faster updates, use Streaming (above).
Caching
The JavaScript SDK has 2 caching layers:
- In-memory cache (enabled by default)
- Persistent localStorage cache (disabled by default, requires configuration)
Configuring Local Storage
Here is an example of using Redis as your persistent localStorage cache:
const { setPolyfills } = require("@growthbook/growthbook");
setPolyfills({
localStorage: {
// Example using Redis
getItem: (key) => redisClient.get(key),
setItem: (key, value) => redisClient.set(key, value),
}
});
Cache Settings
There are a number of cache settings you can configure within GrowthBook.
Below are all of the default values. You can call configureCache with a subset of these fields and the rest will keep their default values.
import { configureCache } from "@growthbook/growthbook";
configureCache({
// The localStorage key the cache will be stored under
cacheKey: "gbFeaturesCache",
// Consider features stale after this much time (60 seconds default)
staleTTL: 1000 * 60,
// Cached features older than this will be ignored (24 hours default)
maxAge: 1000 * 60 * 60 * 24,
// Set to `true` to completely disable both in-memory and persistent caching
disableCache: false,
})
Skip Cache
The cache layers apply to both the init and refreshFeatures methods. Both of these accept a skipCache: true option to bypass the cache layers if desired.
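For example, to force a fresh download on a manual refresh:
// Bypass both cache layers and fetch the latest payload from the API
await gbClient.refreshFeatures({ skipCache: true });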
Experimentation (A/B Testing)
In order to run A/B tests, you need to set up a tracking callback function. This is called every time a user is put into an experiment and can be used to track the exposure event in your analytics system (Segment, Mixpanel, GA, etc.).
You can specify this globally on your GrowthBookClient instance and/or per-request in your user context.
We recommend using a global callback if you plan to track events directly from Node and using a user context callback if you plan to send events to the front-end to fire.
When specified globally, the callbacks will receive an additional argument with the user context that triggered the event.
// Callback configured globally
const gbClient = new GrowthBookClient({
trackingCallback: (experiment, result, userContext) => {
const userId = userContext.attributes.id;
console.log("Viewed Experiment", userId, {
experimentId: experiment.key,
variationId: result.key
});
}
});
// Callback configured in the user context
const userContext = {
attributes: {
id: req.user.id
},
trackingCallback: (experiment, result) => {
console.log("Viewed Experiment", req.user.id, {
experimentId: experiment.key,
variationId: result.key
});
}
}
Feature Flag Experiments
There is nothing special you have to do for feature flag experiments. Just evaluate the feature flag like you would normally do. If the user is put into an experiment as part of the feature flag, it will call the trackingCallback automatically in the background.
// If this has an active experiment and the user is included,
// it will call trackingCallback automatically
const newLogin = gbClient.isOn("new-signup-form", userContext);
If the experiment came from a feature rule, result.featureId in the trackingCallback will contain the feature id, which may be useful for tracking/logging purposes.
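For example, a minimal sketch of including it in the tracking event:
const gbClient = new GrowthBookClient({
  trackingCallback: (experiment, result, userContext) => {
    console.log("Viewed Experiment", userContext.attributes.id, {
      experimentId: experiment.key,
      variationId: result.key,
      // Only set when the experiment came from a feature rule
      featureId: result.featureId,
    });
  }
});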
Deferred Tracking
Sometimes, you aren't able to track analytics events from Node.js and you need to do it from the front-end instead.
In this case, define a trackingCallback in the user context, queue up events, and serialize them in your response. Here's an example:
// Middleware
app.use((req, res, next) => {
// Queue up tracking calls and store in the request
req.trackingData = [];
const userContext = {
attributes: {id: "123"},
trackingCallback: (experiment, result) => {
req.trackingData.push({experiment, result});
}
}
req.growthbook = gbClient.getScopedInstance(userContext);
next();
});
// Serialize and include in your response
app.get("/", (req, res) => {
res.send(`<html>
<head>
<script>
(function() {
const data = ${JSON.stringify(req.trackingData)};
data.forEach(({experiment, result}) => {
// Example using Segment.io
analytics.track("Experiment Viewed", {
experimentId: experiment.key,
variationId: result.key,
});
})
})();
</script>
...
`);
})
If you have also integrated the GrowthBook SDK on your front-end, there's a helper method you can use instead of manually firing tracking calls.
gb.setDeferredTrackingCalls(data);
gb.fireDeferredTrackingCalls();
This will use the trackingCallback configured on your front-end GrowthBook instance.
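For example, the serialized data can be handed straight to the front-end instance instead of looping over events manually (a sketch assuming gb is your client-side GrowthBook instance and this script is rendered inside the same res.send template literal as above):
<script>
  const data = ${JSON.stringify(req.trackingData)};
  gb.setDeferredTrackingCalls(data);
  gb.fireDeferredTrackingCalls();
</script>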
Sticky Bucketing
Sticky bucketing ensures that users see the same experiment variant, even when user session, user login status, or experiment parameters change. See the Sticky Bucketing docs for more information. If your organization and experiment support sticky bucketing, you must implement an instance of the StickyBucketService to use Sticky Bucketing. The JS SDK exports several implementations of this service for common use cases, or you may build your own:
- ExpressCookieStickyBucketService — For NodeJS/Express controller-level bucket persistence using browser cookies; intended to be interoperable with BrowserCookieStickyBucketService. Assumes cookie-parser is implemented (can be polyfilled). Cookie attributes can also be configured. The default cookie expiry is 180 days; override by passing maxAge: {ms} into the constructor's cookieAttributes.
- RedisStickyBucketService — For NodeJS Redis-based bucket persistence. Requires an ioredis Redis client instance to be passed in.
- Build your own — Implement the abstract StickyBucketService class and connect to your own data store, or custom wrap multiple service implementations (ex: read/write to both cookies and Redis).
Most StickyBucketService implementations are straightforward to set up and work with minimal configuration. For instance, to use the ExpressCookieStickyBucketService:
const { ExpressCookieStickyBucketService } = require("@growthbook/growthbook");
app.use(async (req, res, next) => {
const stickyBucketService = new ExpressCookieStickyBucketService({
req,
res
});
const userContext = await gbClient.applyStickyBuckets({
attributes: {
id: req.user.id
}
}, stickyBucketService);
req.growthbook = gbClient.getScopedInstance(userContext);
next();
});
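Similarly, here is a minimal sketch of the Redis-based service (assuming an ioredis client; check your SDK version for the exact constructor options):
const Redis = require("ioredis");
const { RedisStickyBucketService } = require("@growthbook/growthbook");

// Assumes a reachable Redis server; connection options omitted for brevity
const redis = new Redis();
const stickyBucketService = new RedisStickyBucketService({ redis });

app.use(async (req, res, next) => {
  const userContext = await gbClient.applyStickyBuckets({
    attributes: {
      id: req.user.id
    }
  }, stickyBucketService);
  req.growthbook = gbClient.getScopedInstance(userContext);
  next();
});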
TypeScript
When used in a TypeScript project, GrowthBook includes basic type inference out of the box:
// Type will be `string` based on the fallback provided ("blue")
const color = gbClient.getFeatureValue("button-color", "blue", userContext);
// You can manually specify types as well
// feature.value will be type `number`
const feature = gbClient.evalFeature<number>("font-size", userContext);
console.log(feature.value);
Strict Typing
If you want to enforce stricter types in your application, you can do that when creating the GrowthBook instance:
// Define all your feature flags and types here
interface AppFeatures {
"button-color": string;
"font-size": number;
"newForm": boolean;
}
// Pass into the GrowthBook instance
const gbClient = new GrowthBookClient<AppFeatures>({
...
});
Now, all feature flag methods will be strictly typed.
// feature.value will be type `number`
const feature = gbClient.evalFeature("font-size", userContext);
console.log(feature.value);
// Typos will cause compile-time errors
gbClient.isOn("buton-color", userContext); // "buton" instead of "button"
Instead of defining the AppFeatures interface manually like above, you can auto-generate it from your GrowthBook account using the GrowthBook CLI.
Updating
As a general philosophy, we aim to keep the SDK 100% backwards compatible at all times. View the Changelog for a complete list of all SDK changes.
GrowthBook Instance (reference)
Attributes
You can specify attributes about the current user and request. These are used for two things:
- Feature targeting (e.g. paid users get one value, free users get another)
- Assigning persistent variations in A/B tests (e.g. user id "123" always gets variation B)
The following are some commonly used attributes, but use whatever makes sense for your application.
const userContext = {
attributes: {
id: "123",
loggedIn: true,
deviceId: "abc123def456",
company: "acme",
paid: false,
url: "/pricing",
browser: "chrome",
mobile: false,
country: "US",
},
};
Global Attributes
Sometimes there are global attributes that apply to all users. For example, the IP address of your server. These can be specified on the GrowthBookClient instance and will be merged with any attributes in the user context.
const gbClient = new GrowthBookClient({
globalAttributes: {
serverIp: "10.1.1.1"
}
})
Secure Attributes
When secure attribute hashing is enabled, all targeting conditions in the SDK payload referencing attributes with datatype secureString or secureString[] will be anonymized via SHA-256 hashing. This allows you to safely target users based on sensitive attributes. You must enable this feature in your SDK Connection for it to take effect.
If your SDK Connection has secure attribute hashing enabled, you will need to manually hash any secureString or secureString[] attributes that you pass into the GrowthBook SDK.
To hash an attribute, use a cryptographic library with SHA-256 support, and compute the SHA-256 hashed value of your attribute plus your organization's secure attribute salt.
First, define a sha256 function using Node's built-in crypto support.
const crypto = require("crypto");

function sha256(str) {
return crypto.createHash("sha256").update(str).digest("hex");
}
Then, you can use this when setting attributes.
const salt = "f09jq3fij"; // Your organization's secure attribute salt (see Organization Settings)
// hashing a secureString attribute
const userEmail = sha256(salt + user.email);
// hashing a secureString[] attribute
const userTags = user.tags.map(tag => sha256(salt + tag));
const userContext = {
attributes: {
id: user.id,
loggedIn: true,
email: userEmail,
tags: userTags,
}
}
Feature Usage Callback
GrowthBook can fire a callback whenever a feature is evaluated for a user. This can be useful to update 3rd party tools like NewRelic or DataDog.
Like with the trackingCallback, this can be defined on either the GrowthBookClient instance or as part of the user context. When defined on the GrowthBookClient instance, a 3rd argument with the user context will be sent so you can identify which user evaluated a feature.
// Callback configured globally
const gbClient = new GrowthBookClient({
onFeatureUsage: (featureKey, result, userContext) => {
console.log(`${featureKey}=${result.value} for user ${userContext.attributes.id}`);
}
});
// Callback configured on the userContext
const userContext = {
onFeatureUsage: (featureKey, result) => {
console.log(`${featureKey}=${result.value}`);
},
};
The result argument is the same thing returned from evalFeature.
Note: If you evaluate the same feature multiple times (and the value doesn't change), the callback will only be fired the first time.
evalFeature
In addition to the isOn and getFeatureValue helper methods, there is the evalFeature method that gives you more detailed information about why the value was assigned to the user.
// Get detailed information about the feature evaluation
const result = gbClient.evalFeature("my-feature", userContext);
// The value of the feature (or `null` if not defined)
console.log(result.value);
// Why the value was assigned to the user
// One of: `override`, `unknownFeature`, `defaultValue`, `force`, or `experiment`
console.log(result.source);
// The string id of the rule (if any) which was used
console.log(result.ruleId);
// Information about the experiment (if any) which was used
console.log(result.experiment);
// The result of the experiment (or `undefined`)
console.log(result.experimentResult);
Inline Experiments
Instead of declaring all features up-front in the payload and referencing them by ids in your code, you can also just run an experiment directly. This is done with the runInlineExperiment method:
// These are the only required options
const { value } = gbClient.runInlineExperiment({
key: "my-experiment",
variations: ["red", "blue", "green"],
}, userContext);
Customizing the Traffic Split
By default, this will include all traffic and do an even split between all variations. There are 2 ways to customize this behavior:
// Option 1: Using weights and coverage
gbClient.runInlineExperiment({
key: "my-experiment",
variations: ["red", "blue", "green"],
// Only include 10% of traffic
coverage: 0.1,
// Split the included traffic 50/25/25 instead of the default 33/33/33
weights: [0.5, 0.25, 0.25],
}, userContext);
// Option 2: Specifying ranges
gbClient.runInlineExperiment({
key: "my-experiment",
variations: ["red", "blue", "green"],
// Identical to the above
// 5% of traffic in A, 2.5% each in B and C
ranges: [
[0, 0.05],
[0.5, 0.525],
[0.75, 0.775],
],
}, userContext);
Hashing
We use deterministic hashing to assign a variation to a user. We hash together the user's id and experiment key, which produces a number between 0 and 1. Each variation is assigned a range of numbers, and whichever one the user's hash value falls into will be assigned.
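Conceptually, the assignment works roughly like the sketch below. This is for illustration only; the SDK uses its own internal hashing algorithm, not SHA-256.
// Illustrative sketch only - not the SDK's actual hash implementation
const crypto = require("crypto");

function assignVariation(userId, experimentKey, numVariations) {
  // Hash the user id and experiment key together into a number between 0 and 1
  const hex = crypto.createHash("sha256").update(userId + experimentKey).digest("hex");
  const n = parseInt(hex.slice(0, 8), 16) / 0xffffffff;
  // Each variation owns an equal slice of the [0, 1) range
  return Math.min(Math.floor(n * numVariations), numVariations - 1);
}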
You can customize this hashing behavior:
gbClient.runInlineExperiment({
key: "my-experiment",
variations: ["A", "B"],
// Which hashing algorithm to use
// Version 2 is the latest and the one we recommend
hashVersion: 2,
// Use a different seed instead of the experiment key
seed: "abcdef123456",
// Use a different user attribute (default is `id`)
hashAttribute: "device_id",
}, userContext);
Note: For backwards compatibility, if no hashVersion is specified, it will fall back to using version 1, which is deprecated. In the future, version 2 will become the default. We recommend specifying version 2 now for all new experiments to avoid migration issues down the line.
Meta Info
You can also define meta info for the experiment and/or variations. These do not affect the behavior, but they are passed through to the trackingCallback, so they can be used to annotate events.
gbClient.runInlineExperiment({
key: "results-per-page",
variations: [10, 20],
// Experiment meta info
name: "Results per Page",
phase: "full-traffic"
// Variation meta info
meta: [
{
key: "control",
name: "10 Results per Page",
},
{
key: "variation",
name: "20 Results per Page",
},
]
}, userContext)
Mutual Exclusion
Sometimes you want to run multiple conflicting experiments at the same time. You can use the filters setting to run mutually exclusive experiments.
We do this using deterministic hashing to assign users a value between 0 and 1 for each filter.
// Will include 60% of users - ones with a hash between 0 and 0.6
gbClient.runInlineExperiment({
key: "experiment-1",
variations: [0, 1],
filters: [
{
seed: "pricing",
attribute: "id",
ranges: [[0, 0.6]]
}
]
}, userContext);
// Will include the other 40% of users - ones with a hash between 0.6 and 1
gbClient.runInlineExperiment({
key: "experiment-2",
variations: [0, 1],
filters: [
{
seed: "pricing",
attribute: "id",
ranges: [[0.6, 1.0]]
}
]
}, userContext);
Holdout Groups
To use global holdout groups, use a nested experiment design:
// The value will be `true` if in the holdout group, otherwise `false`
const holdout = gbClient.runInlineExperiment({
key: "holdout",
variations: [true, false],
// 10% of users in the holdout group
weights: [0.1, 0.9]
}, userContext);
// Only run your main experiment if the user is NOT in the holdout
if (!holdout.value) {
const res = gbClient.runInlineExperiment({
key: "my-experiment",
variations: ["A", "B"]
}, userContext)
}
Targeting Conditions
You can also define targeting conditions that limit which users are included in the experiment. These conditions are evaluated against the attributes passed into the user context. The syntax for conditions is based on the MongoDB query syntax and is straightforward to read and write.
For example, if the attributes are:
{
"id": "123",
"browser": {
"vendor": "firefox",
"version": 94
},
"country": "CA"
}
The following condition would evaluate to true and the user would be included in the experiment:
gbClient.runInlineExperiment({
key: "my-experiment",
variations: [0, 1],
condition: {
"browser.vendor": "firefox",
"country": {
"$in": ["US", "CA", "IN"]
}
}
}, userContext)
Inline Experiment Return Value
A call to runInlineExperiment returns an object with a few useful properties:
const {
value,
key,
name,
variationId,
inExperiment,
hashUsed,
hashAttribute,
hashValue,
} = gbClient.runInlineExperiment({
key: "my-experiment",
variations: ["A", "B"],
}, userContext);
// If user is included in the experiment
console.log(inExperiment); // true or false
// The index of the assigned variation
console.log(variationId); // 0 or 1
// The value of the assigned variation
console.log(value); // "A" or "B"
// The key and name of the assigned variation (if specified in `meta`)
console.log(key); // "0" or "1"
console.log(name); // ""
// If the variation was randomly assigned by hashing
console.log(hashUsed); // true or false
// The user attribute that was hashed
console.log(hashAttribute); // "id"
// The value of that attribute
console.log(hashValue); // e.g. "123"
The inExperiment flag will be false if the user was excluded from being part of the experiment for any reason (e.g. failed targeting conditions).
The hashUsed flag will only be true if the user was randomly assigned a variation. If the user was forced into a specific variation instead, this flag will be false.
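For example, you might only log exposures that were actually randomized (a minimal sketch):
const result = gbClient.runInlineExperiment({
  key: "my-experiment",
  variations: ["A", "B"],
}, userContext);

// Skip users who were excluded or who were force-assigned a variation
if (result.inExperiment && result.hashUsed) {
  console.log("Randomized exposure", result.variationId, result.value);
}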