Introduction to experiments in Sitecore Personalize
Experiments enable you to create and A/B/n test personalized content and offers for guests based on real-time behavior, propensity scores, product recommenders, and more.
We recommend you run an experiment anytime you want to statistically test whether a customer experience produces the best outcome. You can test your variant against a control or compare multiple variants in an A/B/n test. For example, you might want to test several new website designs to find out which one generates the most engagement and conversions. After a variant winner is declared, you can create an experience to apply the winner as the default customer experience.
Types of experiments
You can create the following types of experiments:
- Web experiment - enables marketers to create dynamic offers and content using web templates that display a form for the marketer to complete. Developers can also create web experiments with dynamic data using HTML, CSS, and JavaScript. Web experiments can run on the web or be deployed into a web-based application.
- Interactive experiment - enables developers to create dynamic offers and content from back-end systems for maximum server-side personalization. Interactive experiments can run on the web or be deployed into a web-based application.
- Triggered experiment - enables marketers who are familiar with FreeMarker to create dynamic offers and content to send to an Email Service Provider (ESP) or SMS/Push service provider for distribution.
Statistical engine
Sitecore Personalize offers two statistical engines for running experiments: Classic and Optimized. Use the default Classic engine when you have a clear hypothesis and aim to A/B/n test a limited number of variants until statistical significance is reached. For faster results in web or interactive experiments, use the Optimized engine, which employs a multi-armed bandit algorithm. This option automatically allocates 100% of the traffic to your experiment and dynamically assigns guests to the highest-performing variants.
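Sitecore does not publish the internals of the Optimized engine, but the general multi-armed bandit idea can be illustrated with a minimal epsilon-greedy sketch. The `choose_variant` helper and its inputs are hypothetical for illustration only, not a Sitecore API:

```python
import random

def choose_variant(stats, epsilon=0.1):
    """Pick a variant index from stats, a list of (conversions, impressions).

    With probability epsilon, explore a random variant; otherwise exploit
    the variant with the highest observed conversion rate. Over time this
    shifts most traffic toward the best-performing variant.
    """
    if random.random() < epsilon:
        return random.randrange(len(stats))
    rates = [c / i if i else 0.0 for c, i in stats]
    return max(range(len(stats)), key=rates.__getitem__)

# Example: variant 1 has the best observed conversion rate (20%),
# so with epsilon=0 (pure exploitation) it is always selected.
print(choose_variant([(5, 100), (20, 100), (1, 100)], epsilon=0.0))  # 1
```

In practice, production bandit engines typically use more sophisticated allocation schemes (such as Thompson sampling), but the exploration/exploitation trade-off shown here is the core mechanism.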
Testing considerations
When designing an experiment, it might be tempting to test multiple variants or elements simultaneously, such as a button's color, size, and background image. However, testing too many combinations at once increases complexity and extends the time required to achieve statistical significance.
Key considerations:
- Keep experiments manageable - best practice is to test one variant against the control at a time. While testing multiple variants in a single experiment is possible, it requires a more advanced design and significantly prolongs the duration of the experiment.
- Test one variable at a time - changing only one element per experiment ensures that improvements or declines in performance can be clearly attributed to that specific change. Testing multiple variables simultaneously can make it harder to determine which caused the impact. By isolating variables and testing them individually, you'll obtain more reliable results.
- A/B/n testing vs. multivariate testing - if your organization is new to A/B/n testing, we recommend starting with simple A/B/n tests before moving to more complex multivariate tests.
- False positive rate increases with more variants - sample size calculations typically assume a 5% false positive rate (95% confidence level) for two variants. As more variants are introduced, the likelihood of false positives increases.
- Example: If your experiment has four variants plus one control, the actual false positive rate rises to approximately 18% instead of the expected 5%. This means there is nearly a 1 in 5 chance that the results will falsely indicate a winning variant, even when no real difference exists. To mitigate this, techniques like the Bonferroni Correction can be used.
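The figures in the example above follow from the standard family-wise error rate formula. This short calculation is a generic statistics sketch, not part of any Sitecore API:

```python
# Family-wise error rate (FWER): the probability of at least one false
# positive across k independent comparisons, each tested at level alpha.
alpha = 0.05   # per-comparison false positive rate (95% confidence)
k = 4          # four variants, each compared against the control

fwer = 1 - (1 - alpha) ** k
print(f"FWER with {k} comparisons: {fwer:.1%}")   # ~18.5%, not 5%

# Bonferroni Correction: test each comparison at alpha / k so the
# overall false positive rate stays near the original alpha.
bonferroni_alpha = alpha / k
print(f"Bonferroni-corrected alpha per comparison: {bonferroni_alpha}")
```

With four comparisons the corrected per-comparison threshold is 0.0125, which keeps the overall false positive rate at roughly 5% at the cost of requiring a larger sample size to reach significance.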