Troubleshooting a decision model variant

Sitecore Personalize offers a robust testing environment for decision model variants in the Test Canvas. It helps you identify and troubleshoot errors, validate decision model logic, choose a specific guest, and edit the guest context to test how a decision model responds.

We recommend familiarizing yourself with this topic before you test a decision model variant. The following strategies can help you optimize and troubleshoot your decision model variants using the Test Canvas.

Editing the guest context

The Request tab in the Test Canvas displays the context that the decision model uses when it runs. This gives you insight into the data available for decisioning, which is especially useful when you create decision tables or programmables that access that data.

You can also manually edit the context before initiating a test on the canvas. This lets you test the decision model logic without having to find or create a guest profile with the relevant attributes. For example, to evaluate how the decision model responds to a guest's last visit date, you can edit the lastSeen attribute in the guest context, replace it with an earlier date, and check whether the decision model performs as expected.
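For illustration, an edited guest context fragment might look like the following sketch. The exact shape and attributes of the context depend on your data, and all values shown here are placeholders:

```json
{
  "guest": {
    "ref": "f7aabbcc-0000-1111-2222-333344445555",
    "email": "visitor@example.com",
    "lastSeen": "2023-01-02T09:00:00.000Z"
  }
}
```

Replacing lastSeen with an earlier date, as here, lets you exercise a "lapsed guest" branch of the model without hunting for a real guest profile that matches.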

The Request tab in the Test Canvas shows the guest context that the decision model uses.

Using the Executions and Logs tabs

When an error occurs, the Executions tab on the Test Canvas can help you pinpoint the specific node responsible for the error, providing you with a descriptive error message.

The Executions tab in the Test Canvas indicates which decision model node causes the error.

If you added a print() statement in a programmable in your decision model, you can also debug errors using the output values in the Logs tab. This tab captures data up until the point the code encounters an exception, simplifying the process of identifying the problematic code segment.
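As a sketch of this technique, the following hypothetical programmable computes the days since a guest's last visit and uses print() to trace its inputs. The guest attribute names are illustrative, and the console.log fallback exists only so the sketch also runs outside the Personalize sandbox:

```javascript
// Hypothetical programmable sketch: assumes the decision model exposes
// the guest context as an object with a lastSeen attribute.
// In the Test Canvas, print() output appears in the Logs tab; outside
// that sandbox we fall back to console.log so the sketch stays runnable.
const print =
  typeof globalThis.print === "function" ? globalThis.print : console.log;

function daysSinceLastSeen(guest) {
  print("lastSeen raw value: " + guest.lastSeen); // visible in the Logs tab

  const lastSeen = new Date(guest.lastSeen);
  if (isNaN(lastSeen.getTime())) {
    // Logged before any later code could throw, so the Logs tab
    // narrows down where the problem is.
    print("lastSeen is missing or malformed");
    return null;
  }

  const elapsedMs = Date.now() - lastSeen.getTime();
  return Math.floor(elapsedMs / 86400000); // ms per day
}

// Example run with a mocked guest:
daysSinceLastSeen({ lastSeen: "2024-01-15T10:30:00Z" });
```

Because each print() call is captured up to the point of failure, sprinkling them before and after risky statements quickly isolates the problematic segment.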

Recreating errors in live decision models

To help troubleshoot and resolve issues in a decision model that is already in use in a live experience, you can replicate the error on the Test Canvas.

To do this, begin by locating the guest associated with the failed execution in the execution report. Then duplicate the production variant of the decision model and, using this draft copy, load the identified guest into the Test Canvas. This setup lets you recreate the scenario and identify the root cause in a controlled testing environment.

Testing triggered experiences with entity data

To test a decision model designed for a triggered experience that uses entity data, you need to mock the entity data. You can do this on the Request tab of the Test Canvas by adding either the object's reference attribute (ref) or the whole object to the entity object.

For example, in the following image, the ref of a recent guest session has been added to the entity object. As a result, the test simulates a situation where this specific session triggers the decision model, providing insight into how it would respond in real-time.

Testing a triggered experience by adding the reference of a guest session to the entity object.
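A mocked entity that contains only a reference might look like the following sketch (the ref value is a placeholder):

```json
{
  "entity": {
    "ref": "0c5f0bd2-1111-2222-3333-444455556666"
  }
}
```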

Alternatively, to mimic the decision model's response to a custom event, you can directly insert the entire event data into the entity object, as shown in the next image. This method allows you to evaluate the decision model’s performance using more attributes.

Testing a triggered experience by adding the entire event data of a custom event to the entity object.
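Alternatively, the entity object can hold the full event payload. The fields below sketch a hypothetical custom checkout event; the actual attributes depend on the event your channel sends:

```json
{
  "entity": {
    "type": "ORDER_CHECKOUT",
    "channel": "WEB",
    "currency": "EUR",
    "pointOfSale": "example.com",
    "arbitraryData": {
      "basketTotal": 120.5
    }
  }
}
```

With the whole payload in place, decision tables and programmables can read any of these attributes during the test, not just the reference.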

Testing interactive experiences with custom values

A decision model built for an interactive experience might rely on custom values provided in the request. You can test this kind of decision model by mocking the request in the Test Canvas. To do this, add a request object at the top level of the context object, and place the custom values in a params object inside the request.

Here's what the entire mocked context would look like:

Testing an interactive experience by adding the request object into the top level of the context object.
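As a sketch, a mocked context for an interactive experience could resemble the following; the parameter names pageType and maxOffers are hypothetical examples of custom values your integration might send, and the guest ref is a placeholder:

```json
{
  "guest": {
    "ref": "f7aabbcc-0000-1111-2222-333344445555"
  },
  "request": {
    "params": {
      "pageType": "productDetail",
      "maxOffers": 3
    }
  }
}
```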
