This article details how to use A/B Testing to split traffic randomly into different groups and show each group variations of a message. This is done by creating multiple experiences, then assigning each a percentage of traffic.
A/B testing, in its strict definition, is a form of statistical hypothesis testing with two variants: a test and a control. Within Evergage, we use A/B testing more broadly to describe any campaign that uses a randomized traffic split to test a hypothesis. There are several things you should understand before creating an A/B test:
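Evergage handles the traffic split for you when you assign each experience a percentage, but it can help to see how a randomized split works conceptually. The sketch below is purely illustrative (the function name, user IDs, and experience names are invented for this example, not part of the Evergage API): it hashes a visitor ID to a stable bucket between 0 and 100, so the same visitor always lands in the same experience at the configured percentages.

```python
import hashlib

def assign_experience(user_id: str, splits: dict) -> str:
    """Deterministically assign a visitor to an experience.

    `splits` maps experience name -> percentage of traffic;
    the percentages are assumed to sum to 100.
    """
    # Hash the visitor ID into a stable bucket in [0, 100).
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = (int(digest, 16) % 10000) / 100.0

    # Walk the cumulative percentages until the bucket falls inside one.
    cumulative = 0.0
    for name, pct in splits.items():
        cumulative += pct
        if bucket < cumulative:
            return name
    return name  # fallback guards against floating-point rounding

# Example: a 50/50 test-vs-control split
assign_experience("visitor-123", {"control": 50, "test": 50})
```

Hashing the ID (rather than calling a random-number generator on every visit) is what keeps a returning visitor in the same group for the life of the test, which is essential for the results to be meaningful.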
For guidance on designing a campaign so that the results are statistically sound and free of bias, contact your Customer Success representative.
How long you run a test should be driven by your data rate and the size of the effect you're looking for. Consider using a sample size calculator when planning your test; many are available online, including http://www.evanmiller.org/ab-testing/sample-size.html. A calculator will tell you how much data you need for a given test, which in turn gives you a sense of how long to run the experiment. In classical statistics, you determine this in advance and check your result once (and only once) when you have the required data; at that point you declare whether any difference you see is real or due to chance.
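As a rough illustration of what such a calculator computes, the sketch below uses the standard two-proportion z-test sample-size formula (this is a common textbook approach, offered here as an assumption about the calculation, not a description of any specific calculator's internals). Given a baseline conversion rate, the minimum rate change you want to detect, a significance level, and statistical power, it returns the approximate sample size needed per variant.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per variant for a two-sided
    two-proportion z-test detecting a shift from rate p1 to p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for power=0.8
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Example: baseline 5% conversion, hoping to detect a lift to 6%
# at 95% significance and 80% power (roughly 8,000 visitors per variant)
sample_size_per_variant(0.05, 0.06)
```

Notice how quickly the required sample grows as the effect you want to detect shrinks: halving the detectable lift roughly quadruples the sample size, which is why small expected effects mean long-running tests.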