Measure audience splits
Audience: Marketers and analysts who want to compare the effectiveness of different campaign strategies using real audience data.
Understand the impact of your marketing efforts by measuring performance across randomized audience groups.
What you can measure
- Incrementality: Did your campaign actually cause the desired outcome?
- A/B/n testing: Compare different creatives, messages, or channels.
- Personalization impact: See if personalized experiences outperform generic ones.
- Channel effectiveness: Identify which channels drive more downstream conversions.
When to use splits
Use audience splits when you want to move beyond assumptions like:
- Correlation: “Purchases increased after my campaign, but did the campaign cause it?”
- Attribution: “A user converted after seeing my email, but would they have converted anyway?”
Running a split test helps you isolate the effect of your campaign by comparing outcomes between randomized groups. This gives you a clearer view of what’s actually driving results.
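To make that logic concrete, here is a minimal Python sketch (invented data, not Hightouch code) of a randomized comparison: because users are assigned to groups at random, the groups are statistically identical before the campaign, so any outcome gap between them can be credited to the campaign itself.

```python
import random

random.seed(42)

# Hypothetical audience of 10,000 users. Random assignment makes the
# two groups statistically identical before the campaign runs.
audience = [f"user_{i}" for i in range(10_000)]
random.shuffle(audience)
treatment, holdout = audience[:5_000], audience[5_000:]

# Invented outcome rates: treatment converts at 6%, holdout at 5%.
def simulate_conversion(rate: float) -> bool:
    return random.random() < rate

treatment_rate = sum(simulate_conversion(0.06) for _ in treatment) / len(treatment)
holdout_rate = sum(simulate_conversion(0.05) for _ in holdout) / len(holdout)

# Because assignment was random, the gap is attributable to the
# campaign rather than to pre-existing differences between groups.
print(f"Treatment: {treatment_rate:.2%}  Holdout: {holdout_rate:.2%}")
print(f"Observed lift: {treatment_rate - holdout_rate:+.2%}")
```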
How audience experiments work
| Stage | What it means | Hightouch feature |
| --- | --- | --- |
| 1. Define your audience | Choose your target group | Build audiences |
| 2. Split into groups | Randomly assign audience members | Splits |
| 3. Apply a treatment | Run a campaign for one group | Syncs |
| 4. Measure results | Compare outcomes between groups | Charts (Splits tab) |
| 5. Interpret the impact | Evaluate statistical significance and lift | Splits Measurement |
Requirements
Before you can measure a split test:
- Audience snapshots must be enabled. Snapshots record audience membership at the time of sync.
- Split groups must be created.
- The audience must be synced at least once after the split is created.
- Each group must generate post-snapshot event data (e.g., purchases, clicks), as illustrated in the sketch after this list.
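The last requirement is worth unpacking: only events that occur after the snapshot timestamp count toward measurement, since earlier events cannot have been caused by the campaign. A rough pandas illustration of that filter, with invented data and column names (`user_id`, `occurred_at`), not Hightouch’s actual schema:

```python
import pandas as pd

# Invented example events; column names are illustrative only.
events = pd.DataFrame({
    "user_id": ["u1", "u1", "u2"],
    "event": ["purchase", "purchase", "click"],
    "occurred_at": pd.to_datetime(["2024-05-01", "2024-06-10", "2024-06-12"]),
})
snapshot_at = pd.Timestamp("2024-06-01")  # when the sync recorded membership

# Only events after the snapshot count toward split measurement.
post_snapshot = events[events["occurred_at"] > snapshot_at]
print(post_snapshot)
```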
Setup steps
1. Split your audience
- Open your audience in Customer Studio.
- Go to the Splits tab.
- Toggle Enable split groups.
- Under Split audience into groups, define your split:
  - Set the number of groups (e.g., 2 for A/B, 3+ for A/B/n).
  - Adjust the percentage distribution between groups (e.g., 50/50); the sketch after this list shows how weighted assignment works conceptually.
  - Rename groups as needed (e.g., "Holdout", "Treatment A").
  - Optionally add more groups for A/B/n testing.
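Conceptually, a percentage split buckets each member according to the configured weights. The sketch below shows one common way to do this deterministically (hash-based bucketing, so the same user always lands in the same group across re-syncs); it is illustrative only, not Hightouch’s internal implementation.

```python
import hashlib

def assign_group(user_id: str, groups: list[tuple[str, float]], salt: str = "split-v1") -> str:
    """Map a user to a group according to percentage weights.

    Hashing (rather than a fresh random draw) keeps assignment stable:
    the same user_id always falls into the same group.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    cumulative = 0.0
    for name, weight in groups:
        cumulative += weight
        if bucket <= cumulative:
            return name
    return groups[-1][0]  # guard against floating-point rounding

# Example: a 50/50 holdout test.
print(assign_group("user_123", [("Treatment A", 0.5), ("Holdout", 0.5)]))
```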
2. Sync your split groups
- For each group, configure syncs to destinations.
  - For example: send the treatment group to Meta Ads and the holdout group to a suppression list.
- Click View Results to begin analysis.
3. Measure results
The Splits chart helps you compare outcomes between audience groups to evaluate the impact of your campaign.
1. Configure chart
In the Charts > Splits view:
- Under Measuring, select the audience you want to evaluate. A tag shows how many split groups exist.
- Use the Measurement window dropdown to define when Hightouch should look for events.
  - Example: “Entry → 30 days after entry”
- Choose a Metric (e.g., Conversions, Revenue). Learn how to create a new metric.
- Optional: Use the Filter by dropdown to refine by user properties or events.
The chart will display performance over time, along with summary statistics.
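For intuition, here is a rough pandas sketch (invented tables and column names) of what a measurement window like “Entry → 30 days after entry” does: it restricts each member’s events to a window anchored on their audience entry, then aggregates the metric per split group.

```python
import pandas as pd

# Invented data: when each member entered the audience, and their events.
members = pd.DataFrame({
    "user_id": ["u1", "u2"],
    "group": ["Treatment A", "Holdout"],
    "entered_at": pd.to_datetime(["2024-06-01", "2024-06-03"]),
})
events = pd.DataFrame({
    "user_id": ["u1", "u1", "u2"],
    "revenue": [40.0, 25.0, 10.0],
    "occurred_at": pd.to_datetime(["2024-06-05", "2024-08-01", "2024-06-20"]),
})

# Keep only events inside each member's window: entry -> entry + 30 days.
joined = events.merge(members, on="user_id")
in_window = joined[
    (joined["occurred_at"] >= joined["entered_at"])
    & (joined["occurred_at"] <= joined["entered_at"] + pd.Timedelta(days=30))
]

# Average revenue per member, by split group.
per_member = in_window.groupby("group")["revenue"].sum() / members.groupby("group").size()
print(per_member)
```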
2. Interpret results
- Solid line: Observed average performance over time.
- Shaded region: 95% confidence interval for each group.
- Lift summary bar (top-right):
  - Green = Significant positive impact
  - Red = Significant negative impact
  - Gray = Not statistically significant
- If the confidence bar overlaps 0%, the result is inconclusive (the sketch after this list shows the underlying math).
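For intuition about significance, the sketch below computes lift and a 95% confidence interval for two conversion rates using a standard normal approximation (not necessarily the exact method Hightouch uses). If the interval excludes 0, the result is statistically significant; if it straddles 0, it is inconclusive.

```python
import math

def lift_with_ci(conv_t: int, n_t: int, conv_h: int, n_h: int, z: float = 1.96):
    """Difference in conversion rates with a 95% CI (normal approximation)."""
    p_t, p_h = conv_t / n_t, conv_h / n_h
    lift = p_t - p_h
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_h * (1 - p_h) / n_h)
    return lift, (lift - z * se, lift + z * se)

# Invented numbers: 300/5,000 treatment conversions vs. 250/5,000 holdout.
lift, (lo, hi) = lift_with_ci(conv_t=300, n_t=5000, conv_h=250, n_h=5000)
significant = lo > 0 or hi < 0  # interval excludes 0 -> significant
print(f"Lift: {lift:+.2%}, 95% CI: [{lo:+.2%}, {hi:+.2%}], significant: {significant}")
```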
3. Normalize results
Toggle between:
- Per member (default): Shows average performance per user.
- To holdout group size: Scales results for an even comparison.
You can toggle normalization in the top-right corner. Hovering over the lift card shows raw totals.
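The arithmetic behind the two modes, with invented numbers and an uneven 80/20 split. The “to holdout group size” scaling shown here is an assumption for illustration: it rescales treatment totals as if that group were the same size as the holdout.

```python
# Invented totals for an uneven 80/20 split.
treatment_revenue, treatment_size = 48_000.0, 8_000
holdout_revenue, holdout_size = 11_000.0, 2_000

# "Per member": average performance per user, comparable at any group size.
per_member_t = treatment_revenue / treatment_size   # 6.00 per user
per_member_h = holdout_revenue / holdout_size       # 5.50 per user

# "To holdout group size" (assumed interpretation): scale treatment totals
# down as if the treatment group matched the holdout's size.
scaled_t = treatment_revenue * (holdout_size / treatment_size)  # 12,000 vs 11,000

print(per_member_t, per_member_h, scaled_t, holdout_revenue)
```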
Example use cases
Email campaign holdout
Goal: Test whether sending an email increases purchases.
- Split audience 50/50
- Send promo to Group A, hold out Group B
- Measure conversions or revenue per user
Ad creative A/B test
Goal: See which ad performs better.
- Split audience into two groups
- Sync to separate ad sets in Meta or Google
- Compare CTR or conversion rates
Personalization test
Goal: Measure the impact of tailored content.
- Group A: Personalized product recs
- Group B: Generic message
- Measure engagement, clicks, or sales
Lifecycle messaging experiment
Goal: Improve onboarding outcomes.
- Split new users at signup
- Sync to two onboarding flows
- Measure feature activation or retention after 14 days
Channel comparison
Goal: See which marketing channel performs best.
- Sync Group A to email, Group B to paid ads
- Measure downstream conversion or LTV