
View performance analytics in Sitecore Personalize

Abstract

Provides details on the performance analytics dashboard that you use when running experiments (Sitecore Personalize).

When your experiment has been live for 24 hours, you can start viewing how the variants are performing against each other on the Performance screen.

To view performance analytics:

  1. On the navigation pane, click Experiments, Web to see performance analytics for a web experiment. Alternatively, click Experiments, Full Stack to see performance analytics for an interactive experiment. A list of experiments displays.

  2. Click the name of the experiment. The Build screen displays.

  3. Click the Performance menu item. The Performance screen displays.

The following is a partial image of the upper part of the Performance screen. See the table below for descriptions of the numbered screen items.

Upper part of Performance screen

The following table contains a description of each numbered screen element on the upper part of the Performance screen.

Screen item

Description

1. Name

The proper name of the experiment you are viewing.

2. Tag

Click to apply a tag to the experiment. This is optional.

3. Status

The current status of the experiment. The current status affects the Status change button, as detailed in the row below.

4. Status change button

Enables you to change the current status of the experiment.

5. Preview or Test button

Click Preview to preview and test a web experiment. The Test button only displays for interactive experiments; click Test to test and configure the API response.

6. Vertical Ellipsis

Click to duplicate or delete the experiment, or to view its history.

7. Build

Click to view the configuration of the experiment.

8. Performance

The Performance screen is selected by default when you open an experiment with a Live status.

9. Operational

Click to view operational metrics for the experiment.

10. Details

Click to view details of the experiment, including when it was created and who created it.

11. Schedule

The date and time the experiment started running (or is scheduled to run, for a triggered experiment), and the end date, if applicable.

12. Winner or Inconclusive status banner

Displays whether the experiment has a winning variant or is inconclusive. If the test has a winner, the banner displays the name of the winning variant. The experiment must have reached statistical significance for a winner to be declared, and the winning variant is measured against the primary goal. If the test is inconclusive, the banner indicates this and the test continues to run.

13. Started

The date the experiment started.

14. Winning Variant bar chart

You can see which variant has won or is leading, based on performance against the primary goal. The primary goal is the metric that the variants are measured against. You can also compare the performance metrics of all the variants in the bar chart.

15. Current Unique Sessions

The number of unique sessions recorded so far. Comparing this value with the number of required sessions helps you gauge how much longer it will take to reach statistical significance.

16. Required Sessions

The number of sessions required to reach statistical significance. Compare this value with the Current Unique Sessions value to gauge how much longer it will take to reach statistical significance. For a worked example of this comparison, see the sketch after this table.

17. All Goals table

View the performance of all your variants against all goals. A winning ribbon icon displays adjacent to the row of the winning variant.

18. Variant column

Contains a row for each variant in the experiment.

19. Sessions column

The number of current unique sessions.

20. Primary Goal column

The performance metric of each variant against the primary goal. The primary goal is the metric that the variants are measured against.

21. 2nd Goal column

The performance metric of each variant against a secondary goal. A secondary goal is not a metric that the variants are measured against when determining the winning variant.

22. 3rd Goal column

The performance metric of each variant against a tertiary goal. A tertiary goal is not a metric that the variants are measured against when determining the winning variant.
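To make the Current Unique Sessions and Required Sessions values easier to act on, you can track how far the experiment has progressed toward the required sample size. The following is a minimal sketch of that arithmetic in TypeScript; the variable names, the input values, and the run-rate estimate are illustrative assumptions read manually from the Performance screen, not values exposed by a Sitecore Personalize API.

    // Minimal sketch: gauge progress toward the required sample size.
    // The inputs are values you read from the Performance screen;
    // the field names below are illustrative, not Personalize API fields.
    interface SampleProgressInput {
      currentUniqueSessions: number; // value shown as Current Unique Sessions
      requiredSessions: number;      // value shown as Required Sessions
      daysRunning: number;           // days since the Started date
    }

    function estimateProgress(input: SampleProgressInput) {
      const { currentUniqueSessions, requiredSessions, daysRunning } = input;
      const progress = Math.min(currentUniqueSessions / requiredSessions, 1);
      const remaining = Math.max(requiredSessions - currentUniqueSessions, 0);

      // Assumes traffic continues at the average rate observed so far.
      const sessionsPerDay = daysRunning > 0 ? currentUniqueSessions / daysRunning : 0;
      const estimatedDaysLeft =
        sessionsPerDay > 0 ? Math.ceil(remaining / sessionsPerDay) : Infinity;

      return {
        progressPercent: Math.round(progress * 100),
        remainingSessions: remaining,
        estimatedDaysLeft,
      };
    }

    // Example: 12,400 of 20,000 required sessions after 10 days.
    console.log(estimateProgress({
      currentUniqueSessions: 12400,
      requiredSessions: 20000,
      daysRunning: 10,
    }));
    // => { progressPercent: 62, remainingSessions: 7600, estimatedDaysLeft: 7 }

The estimated days remaining assume that traffic stays at its average rate to date; seasonal or campaign-driven traffic can shorten or lengthen the actual time to significance.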

The following is a partial image of the lower part of the Performance screen. See the table below for descriptions of the numbered screen items.

Lower part of Performance screen

The following table contains a description of each numbered screen element of the lower part of the Performance screen.

Screen item

Description

1. Breakdown by Goal

This line graph lets you compare the performance of all variants against a goal. From the drop-down list, click the goal whose metrics you want to see on the line graph.

2. Device

Click the Device you want to filter by. The line graph refreshes to show metrics only for the selected device. This is set to ALL by default.

3. Time

Select the time span by which you want to filter results. You can select Last 7 Days or Last 30 Days. This is set to ALL by default.

4. Value

The y axis displays the metric that the variants achieved, as measured against the selected goal.

5. Date

The x axis displays the date range that you want to view results for, based on what you selected from the Time filter.

6. Details

Contains metrics on all variants as measured against the goal you clicked from the Breakdown by Goal drop-down list.

7. Variant Name column

The name of each variant in the experiment.

8. Sessions column

The number of current unique sessions.

9. Selected Goal column

Contains the metrics of the goal you clicked from the Breakdown by Goal drop-down list.

10. Goal Metric column

Shows a metric related to the goal you clicked from the Breakdown by Goal drop-down list. In this example, the conversion goal shows the Uplift % column. The uplift percentage is the percentage difference in conversion rate between the control and the variant. For a worked example of the uplift calculation, see the sketch after this table.

11. Confidence Index (%)

An estimate calculated from the statistics of the selected goal. The estimate is only reliable after the minimum sample size is reached. The higher the confidence index, the higher the probability that the results of the experiment are accurate.
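To interpret the Uplift % and Confidence Index (%) values, it can help to see how an uplift figure is derived from the conversion rates of the control and a variant. The following is a minimal sketch of that calculation in TypeScript; the function names, the example numbers, and the 95% significance threshold are illustrative assumptions and are not taken from Sitecore Personalize.

    // Minimal sketch: derive the uplift percentage for a conversion goal.
    // Conversion rate = conversions / unique sessions for that variant.
    // Uplift % = percentage difference between the variant's and the
    // control's conversion rates, relative to the control.
    function conversionRate(conversions: number, sessions: number): number {
      return sessions > 0 ? conversions / sessions : 0;
    }

    function upliftPercent(
      control: { conversions: number; sessions: number },
      variant: { conversions: number; sessions: number },
    ): number {
      const controlRate = conversionRate(control.conversions, control.sessions);
      const variantRate = conversionRate(variant.conversions, variant.sessions);
      if (controlRate === 0) return 0; // avoid dividing by zero
      return ((variantRate - controlRate) / controlRate) * 100;
    }

    // Example: the control converts at 4.0%, the variant at 5.0%.
    const uplift = upliftPercent(
      { conversions: 200, sessions: 5000 },  // 4.0% conversion rate
      { conversions: 250, sessions: 5000 },  // 5.0% conversion rate
    );
    console.log(`Uplift: ${uplift.toFixed(1)}%`); // Uplift: 25.0%

    // Illustrative check only: many A/B testing tools treat a confidence
    // of 95% or more as statistically significant. Confirm the threshold
    // your organization uses before acting on a result.
    const confidenceIndex = 96; // value read from the Confidence Index (%) column
    const significant = confidenceIndex >= 95;
    console.log(`Statistically significant: ${significant}`);

A positive uplift means the variant converted at a higher rate than the control for the selected goal; whether that difference is trustworthy depends on the confidence index and on reaching the required sample size shown in the upper part of the screen.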