Considerations for Sandbox in Data 360
Refer to these considerations when creating a sandbox and deploying changes from a sandbox in Data 360.
Licensing and Setup Requirements
To use Data 360 in a sandbox, provision Data Cloud licenses in production and create the Data 360 tenant. To create the tenant, from Setup, find and select Data Cloud Setup, and then click Setup. Data 360 isn't available in the sandbox until the tenant setup is complete. If the production org isn't operating Data Cloud under a Data Cloud license, you can't turn on Data 360 in the sandbox.
Provisioning Data 360 in a Sandbox
To set up Data 360 in a sandbox that was created before Data 360 was provisioned in your production org, you can use the license match feature. This feature allows you to inherit Data Cloud licenses from the production org without refreshing the sandbox. However, there are a few important considerations to keep in mind:
- The production org must have Data 360 provisioned for the license match to work.
- The license match process copies only Data Cloud licenses. The license match doesn't copy organization preferences, values, metadata, or permissions. You must set up custom configurations, integrations, and other metadata manually in the sandbox.
- Complete Data 360 provisioning in your sandbox first. Confirm that provisioning is active before you activate Agentforce or Einstein features.
To enable Data 360 in a sandbox org, the source production org that the sandbox is created from must be operating Data Cloud under a Data Cloud license. If the source production org is operating Data Cloud under a Salesforce Data Cloud license, you can't turn on Data 360 in the sandbox org.
Data 360 Sandbox Storage and Limits
- Data 360 sandboxes follow production limits and guidelines. They don't have the storage restrictions of Salesforce sandboxes, such as the 200-MB limit for Developer sandboxes. For limits related to Data 360, see Data 360 Limits and Guidelines.
Considerations When Creating a Sandbox
- If you have a Data 360 component in your production org that was added from a developer-controlled managed package, the component is copied over to the sandbox but you can't edit it or change the status. In this case, re-create the component in the sandbox.
- When some Data 360 components that depend on data ingestion are copied to the sandbox, record count and date fields reflect the values from production. You see these production values until data is ingested in the sandbox. You could see this behavior in data streams, data lake objects, segments, and identity resolution.
- You can't use a sandbox template when you create a Data 360 sandbox.
- After a data lake object (DLO) is replicated in a sandbox, you can’t delete it there. You can delete the DLO only from production.
- Unified Messaging is supported in a sandbox that's created after May 9, 2025.
- The Data 360 connectors for Einstein Conversation Insights features are not supported in sandbox orgs. Einstein Conversation Insights features that rely on Data 360, such as Sales Signals, are not currently testable in sandbox.
- You can't create a Data 360 sandbox from a cloned sandbox.
- Digital Wallet billing streams are replicated to the sandbox org. The streams are read only in the production org and remain read only in the sandbox org.
Considerations When Deploying Changes from a Sandbox
Ready to move changes from a sandbox org to another sandbox or a production org? Here are the key things to keep in mind.
- Use change sets, the Salesforce CLI, or the Metadata API to deploy changes from one sandbox to another sandbox.
- If you create a data space in a sandbox that doesn’t exist in the production org, and you want to deploy components from that data space, you must first manually create the data space in production. The data space isn’t created when you deploy the data kit.
- You can deploy a calculated insight from the default data space or a custom data space.
- When you deploy standard and custom Data Model Objects (DMOs) from a sandbox, the associated relationships and tags are automatically added during the deployment process. However, tags that are explicitly assigned to other DMO types, such as derived DMOs or curated DMOs, aren't included in the deployment.
- If you create or modify a Marketing Cloud Engagement (MCE) connection, a business unit for ingestion, or a business unit mapping to a data space, you can’t include it in a data kit and deploy it back to the source org. In this case, create or modify these features in the source org.
- In a Marketing Cloud Engagement Enterprise Attributes bundle data stream in a sandbox, you can perform these actions, but the resulting changes aren’t deployed back to production.
- Disable a field
- Modify a formula field
- Add a formula field
- You can deploy only a data action target with a home org connection back to production. Cross-org connections aren’t supported.
- If you build an Einstein Studio model in a sandbox, you can see the training metrics there. After you deploy the model, it’s usable in production, but the metrics aren’t visible.
- When you add a supported data stream, Data 360 includes the connector metadata. When you deploy the data stream for the first time, Data 360 deploys the connector in an inactive state. To begin data ingestion, configure credentials and activate the connection in the target org. For subsequent deployments, if an active connector exists in the target org, the deployment ignores changes to attributes. This behavior applies to all data stream connectors.
- If you remove the data stream, you must remove the connector metadata from the DevOps data kit.
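As a sketch of the Salesforce CLI route mentioned in the first item of this list, a sandbox-to-sandbox deployment can look like the following. The org aliases, the `force-app` directory, and the use of the `DataPackageKitDefinition` metadata type are illustrative assumptions, and the script only prints the commands instead of running them:

```shell
#!/bin/sh
# Hypothetical org aliases -- replace with orgs you've authenticated
# via `sf org login web`.
SOURCE_ORG="dev-sandbox"
TARGET_ORG="qa-sandbox"

# Retrieve Data Cloud metadata (for example, a data kit definition) from
# the source sandbox, then deploy the project to the target sandbox.
# Echoed here as a dry run; remove `echo` to execute for real.
echo "sf project retrieve start --metadata DataPackageKitDefinition --target-org $SOURCE_ORG"
echo "sf project deploy start --source-dir force-app --target-org $TARGET_ORG"
```

After the deployment, remember that connectors arrive inactive and must be configured and activated in the target org before ingestion starts.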
Considerations When Deploying Changes Across Orgs
- To move metadata across production orgs, first deploy it from the first production org to its sandbox. Then deploy those changes from that sandbox to a sandbox linked to the second production org before the final push into the second production org.
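The two-hop promotion path above can be sketched with the Salesforce CLI. All four org aliases are placeholders, and the script only prints each deploy command rather than running it:

```shell
#!/bin/sh
# Hypothetical aliases for the four orgs in the promotion path.
PROD_ONE="prod-one"          # first production org (source)
SANDBOX_ONE="prod-one-sbx"   # sandbox created from prod-one
SANDBOX_TWO="prod-two-sbx"   # sandbox linked to prod-two
PROD_TWO="prod-two"          # second production org (final target)

# Each hop is an ordinary metadata deployment into the next org in the
# chain. Echoed as a dry run; authenticate to every org first.
for TARGET in "$SANDBOX_ONE" "$SANDBOX_TWO" "$PROD_TWO"; do
  echo "sf project deploy start --source-dir force-app --target-org $TARGET"
done
```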
Considerations When Using Segments in a Sandbox
- If you create an audience segment for Provisional Audience Limiting or Static Attribute in a sandbox, you can’t deploy the segment to production. You must manually re-create the segment in production.
- You can deploy a segment membership DMO that has no relationships from a sandbox to production. However, deploying a segment membership DMO with relationships isn't supported.
- When you deploy a new segment, its publish schedule defaults to Don’t Publish. If the segment already exists in production, its schedule isn’t affected.
Considerations When Using Data Graphs in a Sandbox
- Data graphs are replicated from production to sandbox but have a status of Processing. After the associated transform runs successfully, the data graph status changes to Active. To manually activate a data graph, refresh it on the data graph page.
- If you deploy a data transform or a data graph from a sandbox, and it doesn’t automatically run after it’s deployed in production, update its status from the list view.
Considerations When Using Activations and Activation Targets in a Sandbox
- You can create an activation in a sandbox, and you can deploy Amazon S3, Marketing Cloud Engagement, SFTP, GCS, or Microsoft Azure activations from sandbox to production.
- Before deploying an activation target from a sandbox to production, you must enable the corresponding connection for the activation target in the production environment.
- Activations and activation targets are copied from production to a sandbox, but you can’t activate them in a sandbox until the underlying connector is enabled.
Note: Activations copied to a sandbox retain their production segment and activation IDs. As a result, for a Marketing Cloud Engagement (MCE) activation target, they point to the same shared data extension key. To prevent clearing or overwriting production data, keep sandbox activation targets inactive or ensure that they use a completely isolated Enterprise ID (EID) or destination during the authentication process.
- Data action targets, streaming data transforms, and batch data transforms are replicated from production to sandbox, but you must explicitly activate them.
- For Ecosystem activation targets, to enable the publish schedule, disable the platform and then re-enable it.
- Loyalty activation targets copied from production to sandbox aren’t functional, and you can’t manually update the target status on the Data Cloud Activation Targets page.
- If you edit the title of a Marketing Cloud Engagement activation target, a new activation target is created instead of the existing target being updated.
- When building a data kit, add and save your activations in small batches. Attempting to save a large number of activations simultaneously can cause the operation to time out and fail.
Considerations When Using Data Streams and Connections in a Sandbox
- If you create a connection in the sandbox and deploy that connection to production, you must reauthenticate it in the production org. After you configure the connection, the data flows into the sandbox.
- If you have a data stream that’s based on an S3 connection and that connection contains the org ID in its path, the org ID is changed from the production org ID to the sandbox org ID when you create a sandbox. If the connection path name changes, the data isn’t refreshed for those sandbox data streams. For example, if you have an S3 stream path of parentpath/productionOrgID/fileName.csv, the sandbox S3 stream path is parentpath/sandboxOrgID/fileName.csv. You must either edit the sandbox stream path and change the org ID to the production org ID or add the files to the new path.
- If you configure a connection in the sandbox and a data stream based on that connection continues to have a status of NEEDS_ACTIVATION, contact Salesforce Customer Support.
- After you create a sandbox, you must re-create connections based on the Amazon Kinesis connector.
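The S3 path change described above can be illustrated as a simple string substitution. The org IDs and file name below are placeholders, not values from this article:

```shell
#!/bin/sh
# Illustration of the org-ID segment swap that happens when a sandbox is
# created. Both IDs are made-up placeholders.
PROD_ORG_ID="00Dxx0000000001"
SANDBOX_ORG_ID="00Dxx0000000002"
PROD_PATH="parentpath/$PROD_ORG_ID/fileName.csv"

# Sandbox creation replaces the production org ID in the stream path with
# the sandbox org ID:
SANDBOX_PATH=$(echo "$PROD_PATH" | sed "s/$PROD_ORG_ID/$SANDBOX_ORG_ID/")
echo "$SANDBOX_PATH"   # parentpath/00Dxx0000000002/fileName.csv
```

To keep data flowing, either edit the sandbox stream path back to the production org ID or upload the files under the new path.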
Considerations When Using Zero Copy Data Stream in a Sandbox
- For Zero Copy data streams using Google Big Query, Snowflake, Databricks, or Redshift connections, make sure the database, schema, and table names are consistent between your production and sandbox environments.
- When you activate a Google Big Query connection in a sandbox org, you can modify the Google BQ project ID, service account email, and private key to match the sandbox org. However, you must maintain identical database, schema, and table names within the project ID.
- During Snowflake connection activation in a sandbox org, you can update the account URL, username, and private key for the sandbox. Make sure that the database, schema, and table names within the data warehouse remain the same.
- For Databricks connections in a sandbox, you can change the authentication details, connection URL, and HTTP path. Keep the database, schema, and table names identical to the production configuration.
- In a sandbox setup for a Redshift connection, you can change the authentication details, connection URL, and database. Keep the database, schema, and table names inside that database consistent with the production environment.
Considerations When Using Unstructured Data and Search Indexes in a Sandbox
- When the data source used for a search index is a CRM object (DMO):
- Data lake objects (DLOs), data model objects (DMOs), and the search index are copied from the production org to the sandbox.
- In the sandbox org, you need to activate the CRM home org connection and activate the CRM data streams. The search index runs automatically after data is ingested into the DLO.
- When the search index is created through the Agentforce Data Library (ADL):
- UDLOs, UDMOs, DLOs, DMOs, CRM data streams, and the search index are automatically created in the production org.
- UDLOs, UDMOs, DLOs, DMOs, CRM data streams, and the search index are copied from the production org to the sandbox.
- In the sandbox org, you need to activate the CRM home org connection and activate CRM data streams. You do not need to do anything for the UDLOs and UDMOs. The search index runs automatically after Knowledge data is ingested into the DLO. Any files that are uploaded to the ADL in the sandbox org are indexed automatically.
- When the data source for a search index is an external blob store (for example, S3, GCS, or Azure):
- UDLOs, UDMOs, and the search index are copied from the production org to the sandbox.
- In the sandbox org, you must create an external client app or connected app, set up file notifications on your blob store, and point those file notifications to the sandbox org.
- In the sandbox org, you need to re-create the UDLO and remap it to the UDMO. The search index runs automatically after data is ingested into the UDLO.

