
New Name: Marketing Cloud Personalization

Interaction Studio (formerly Evergage) is now known as Marketing Cloud Personalization. The new name reflects our mission and vision for innovation in Salesforce Marketing Cloud. We wish we could snap our fingers to update the name everywhere, but you can expect to see the previous name in various places until we replace it.

Interaction Studio’s Campaign Statistics System makes it easy for you to scientifically measure the impact of your campaigns. 

This Article Explains

This article details how to use Interaction Studio's Campaign Statistics System to measure the impact of your campaigns.

How is impact measured?

The objective for any A/B test or personalization campaign is to generate lift over control for a business goal.

  • Lift is a statistically significant improvement in the measured business goal
  • Control is the default experience without the change being tested or the personalization being applied
  • The business goal is whatever you choose to measure (and are able to measure). For example, it could be a clickthrough, a signup, the amount of time on a page or site, a purchase, average order value, revenue per user, or another measurable outcome.

How is lift measured?

Example 1

The goal of this personalization campaign is to increase revenue per user. The control for this campaign is an experience without any recommendations. When the control is run, revenue per user is $77.13. When the personalization campaign is run, revenue per user is $84.69.

Lift is calculated as the percentage increase of the goal value after running the campaign:

[(Goal value for campaign) – (Goal value for control)] / (Goal value for control)

Using the numbers from Example 1, this would be calculated as:

[84.69 – 77.13] / [77.13] = 9.8%
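
As a concrete illustration of the formula, here is a minimal Python sketch. The function name and rounding are our own for illustration; this is not an Interaction Studio API.

    def lift(campaign_goal_value: float, control_goal_value: float) -> float:
        """Percentage lift of the campaign's goal value over the control's."""
        return (campaign_goal_value - control_goal_value) / control_goal_value * 100

    # Example 1: revenue per user of $84.69 with personalization vs. $77.13 for control
    print(f"{lift(84.69, 77.13):.1f}% lift")  # prints "9.8% lift"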

Example 2

Example 2 shows an A/B test campaign. The goal of this campaign could be anything (a clickthrough, joining a segment of interest, signing up, spending a certain amount of time on the site). When the control is run, visitors achieve the goal at a rate of 1.01%. When the new Experience 1 is shown, visitors achieve the goal at a rate of 1.36%.

Lift is calculated as the percentage increase of the goal value after running the campaign:

[(Goal value for campaign) – (Goal value for control)] / (Goal value for control)

Using the numbers from Example 2, this would be calculated as:

[1.36 – 1.01] / [1.01] = 34.9%

Experience 1 appears to be generating 34.9% lift, but with 0% confidence and inconclusive results, that may not be the case. Why is the confidence 0%, and what does that mean? Learn more about statistical confidence.


How does Interaction Studio calculate attribution for goal completion?

What counts as a goal completion? What counts in the revenue-per-user, clickthrough, signup, segment membership, and average order value calculations? Interaction Studio only includes a visitor's results in the analysis if the visitor meets all of the following criteria (illustrated in the sketch after this list):

  1. Qualification–the visitor must have qualified to see the campaign, regardless of whether they are in the test group and see the campaign or are in the control group and do not
  2. Completion–the visitor achieved the campaign goal after having seen (or qualified to have seen) the campaign. Similarly, attributed revenue per user is counted for visitors who have seen the campaign and made a purchase 
  3. Time Range–Interaction Studio attribution only considers activity inside the selected time frame. For a purchase to be attributed, both the Impressed Visit (IV) and the goal event (such as a purchase, click, or goal achievement) need to happen within the selected time frame. For more information, please refer to the article on Campaign Statistics – Attribution
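
To picture how these criteria combine, here is a minimal Python sketch of the filtering logic. It is illustrative only: the event fields and function are hypothetical, not Interaction Studio's actual attribution code.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class GoalEvent:
        user_id: str
        qualified_for_campaign: bool    # 1. Qualification: visitor qualified to see the campaign
        impressed_visit_time: datetime  # when the qualifying (impressed) visit happened
        goal_time: datetime             # when the goal (purchase, click, etc.) was achieved
        revenue: float = 0.0

    def attributed_events(events, range_start, range_end):
        """Keep only events that satisfy qualification, completion, and time range."""
        return [
            e for e in events
            if e.qualified_for_campaign                             # 1. Qualification
            and e.goal_time >= e.impressed_visit_time               # 2. Completion after the (qualified) impression
            and range_start <= e.impressed_visit_time <= range_end  # 3. Impressed Visit inside the selected time frame
            and range_start <= e.goal_time <= range_end             #    ...and the goal event inside it too
        ]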


Why can campaign statistics differ between the Campaign List Screen and the Campaign Stats Screen?

If you see differences between the campaign list screen and the campaign stats screen for the same period, it's because the statistics are stored and counted differently. The campaign list screen is designed to load quickly, so it does not aggregate total impressions, clicks, goal completions, and other data, and it uses a different counting system to fetch campaign statistics. The campaign stats screen loads more slowly but presents up-to-the-minute campaign data. The data from the campaign statistics screen is your source of truth, despite any differences.


Why do I see a difference between campaign statistics from Interaction Studio and the reporting data from my other analytics provider?

Don't expect Interaction Studio campaign statistics to exactly match the data from your external analytics provider, such as Google Analytics. Many factors contribute to how Interaction Studio tracks and records campaign statistics, such as configuration, timing, the definition of a visit, and the definition of a unique user. Reporting in Interaction Studio can differ from the way analytics providers track the same information, so it's difficult to pinpoint a single reason for a mismatch. For more information about the Interaction Studio approach to data tracking and reporting, see Reports and Analytics and the developer doc Campaign Stats Tracking.


Why is "confidence" needed?

Lift tells us how much better the campaign is doing than control for a goal of interest. Confidence tells us how sure we are of that lift.  Why do we need confidence?

  1. Don't make big conclusions from small amounts of data. In Example 2 above, Experience 1 is doing 35% better than Control. But if you look at the Goal Completions column in the campaign statistics, only 33 people's actions are contributing to that Goal Completion Rate, which is not enough to be statistically significant
  2. The patterns matter. As explained in Campaign Statistics – Confidence, the patterns in the data can make you more or less sure of the lift result






How does Interaction Studio calculate "confidence"?

Interaction Studio uses Bayesian analysis to continuously calculate and update the confidence we have in every lift percentage. By default, if the Bayesian confidence calculation is:

  • Greater than 95%–Interaction Studio displays the confidence percentage and whether the campaign is winning or losing versus the control
  • Less than 95%–Interaction Studio still shows the lift, but displays that results are inconclusive

However, Interaction Studio reporting can be configured with a different confidence threshold. Please contact your Customer Success representative for guidance.
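
To make the calculation concrete, here is a simplified Python sketch that estimates this kind of confidence for a conversion-rate test by sampling from Beta posteriors. It illustrates the general approach, not Interaction Studio's actual model; only the 1.01% and 1.36% rates come from Example 2, and the visitor counts are hypothetical.

    import random

    def win_probability(control_conversions, control_visitors,
                        test_conversions, test_visitors, samples=100_000):
        """Monte Carlo estimate of P(test rate > control rate) using Beta(1 + successes, 1 + failures) posteriors."""
        wins = 0
        for _ in range(samples):
            control_rate = random.betavariate(1 + control_conversions,
                                              1 + control_visitors - control_conversions)
            test_rate = random.betavariate(1 + test_conversions,
                                           1 + test_visitors - test_conversions)
            if test_rate > control_rate:
                wins += 1
        return wins / samples

    # Hypothetical counts that roughly reproduce Example 2's rates (1.01% vs 1.36%)
    # with only 33 goal completions in total.
    confidence = win_probability(control_conversions=14, control_visitors=1386,
                                 test_conversions=19, test_visitors=1397)
    print(f"{confidence:.0%} probability that Experience 1 beats control")

With so few goal completions, the simulated probability lands well below the default 95% threshold, which is why the results in Example 2 display as inconclusive.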

For more information about interpreting campaign statistics data, please refer to the Campaign Statistics Overview article.






Recommended Optimizations Based on Key Performance Indicators

Every campaign and use case has its own nuances around how to optimize and improve campaign performance. However, there are some general guidelines for how to think about optimizing campaigns.

The guidelines below are organized by key performance indicator (KPI): what you see in Campaign Statistics, followed by recommended optimizations.

Clickthrough Rate (CTR)

What you see in Campaign Statistics: Control is beating the test clickthrough rate.

Consider changing one of the following campaign components.

  1. Messaging – make the messaging more engaging
  2. Look and feel – create a more prominent call-to-action
  3. Recommendations – review the recipe for potential optimizations
Conversion Rate

What you see in Campaign Statistics: Control is beating the test conversion rate.

Note: Conversion rate could also be goal-completion rate in many cases.

Consider changing one of the following campaign components, or pausing the campaign and revisiting your hypothesis.

  1. Placement – is the campaign distracting the visitor from converting? Consider a different placement in a less distracting location.
  2. Customer journey – is the campaign disrupting the customer journey? Consider re-evaluating your hypothesis and changing when the campaign triggers.
  3. Recommendations – is the recipe recommending items that are distracting the visitor from their path to conversion? 
Average Order Value (AOV)

What you see in Campaign Statistics: Control is beating the test average order value.

Consider changing one of the following campaign components:

  1. Offers or promotions – is the campaign displaying an offer that would lower the total purchase price? Are you seeing a correlated increase in conversion rate?
  2. Recommendations – is the campaign recommending lower-priced products? Consider adjusting the recipe, possibly by adding a price inclusion rule
Revenue Per User (RPU)

What you see in Campaign Statistics: Control is beating the test revenue per user.

Revenue per user is a combination metric derived from average order value and conversion rate. If the test experience is losing in revenue per user, consider other optimizations listed in this chart. 

In addition, evaluate whether you have selected the optimal audience for your campaign. Consider the following examples:

  1. An upsell-based campaign may be less effective for first-time purchasers – they may be turned off by your upsell attempts, while returning customers might be more receptive. Consider limiting the upsell campaign to returning purchasers only.
  2. An offer-based campaign for returning customers may be unnecessary – in fact, you may be training your customers to wait for offers before making a purchase. Consider limiting this campaign to new visitors.
Impression Volume

What you see in Campaign Statistics: Impression volume is below expectations.

Review the campaign and experience targeting rules.

  1. Is the audience for this campaign too narrow? Consider relaxing the targeting rules.
  2. Are the actions targeted in the campaign tracking properly? Test the targeted actions yourself across browsers and devices, as well as in an incognito window, to confirm they are working properly. For web campaigns, use the Campaign Debugger feature in the Evergage Launcher Chrome Extension to validate you are seeing the action tracked.
  3. Does the campaign target a page element that loads slowly? This may cause the campaign not to render for users with poor Internet connections. Work with your developer to confirm the web template is properly built and performant.
Bounce Rate

What you see in Campaign Statistics: The test is causing a higher bounce rate or shorter visits (without conversion) than control.

Your campaign may be distracting or annoying to your visitors.

  1. Review the campaign configuration and rules to confirm there are no unintended errors in campaign setup. For example, a popup that displays on every page load of a visit (without any frequency cap) would be annoying. A "best sellers" recommendations campaign accidentally targeting the checkout page could also be annoying or distracting.
  2. Review your hypothesis carefully, and determine if the campaign could be annoying to your visitors. The messaging may be frustrating, or the campaign could be off-putting somehow.
  3. Review the campaign loading performance to ensure it loads quickly and does not flicker if it is replacing existing content. Slow-loading campaigns can frustrate visitors. If your campaign is not loading with the rest of the page, work with your developer to improve the template.