Customer Data Platform Limits and Guidelines
Guidelines are best-practice recommendations that help Customer Data Platform customers get the best performance from Data 360. Limits are boundaries beyond which features are unavailable, performance is throttled, or usage billing charges are applied. This article is about the legacy Customer Data Platform license, which is no longer available for purchase.
Some limits and guidelines vary by edition or org type, such as when using Data 360 in a Developer org.
For billing information, refer to this additional document if your Data 360 org is operating under a Customer Data Platform license.
For a list of documents related to other licenses, see this document instead.
Contact your account executive if you aren’t sure which documents apply to you.
General Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Location of data storage | | | |
| Number of anonymous profiles | | | For example, if you have 100 known profiles in your account, you can have 500 anonymous profiles. |
| Number of Data 360 permission set licenses | | | Learn about Data 360 Standard Permission Sets, Permission Set Considerations, and Salesforce Features and Edition Allocations. |
| Number of data spaces | | | Learn more about licenses in Data 360 Standard Editions and Licenses. In Developer orgs, the default data space is the only data space. |
| Data Spaces permission sets | | | Data spaces permission sets can’t be included in permission set groups. Data spaces permission sets aren’t counted against the Number of Data 360 Permission Set Licenses limit. |
| Unsupported features | | | |
| Unsupported orgs | | | |
| Data Cloud One connections | Not available | | |
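The known-to-anonymous example above implies a fixed ratio of anonymous to known profiles (500 anonymous per 100 known). A minimal sketch assuming that 5:1 ratio holds for your contract; confirm your actual entitlement with your account executive:

```python
# Sketch: estimate the anonymous-profile allowance implied by the example
# above (500 anonymous per 100 known). The 5:1 ratio is an assumption
# inferred from that example, not a documented constant.
ANON_PER_KNOWN = 5

def anonymous_profile_allowance(known_profiles: int) -> int:
    """Anonymous profiles allowed for a given known-profile count,
    assuming the inferred 5:1 ratio."""
    return known_profiles * ANON_PER_KNOWN

print(anonymous_profile_allowance(100))  # 500, matching the example
```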
Activation Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Scheduled activation publish frequency | | | Learn more in Review Publish History of Your Activation. |
| Total number of attributes | | | Learn more about attributes in Create Activation for a Segment. |
| Number of segments to an activation target | | | Each segment can be sent to an activation target one time. Learn more in Create Activation for a Segment. |
| Number of activation targets total | | | An activation target is information stored for a given activation platform, which is the location that a segment’s data is sent to during the activation process. |
| Total number of activations | | | For example, if you purchased 100 segments, you can create up to 400 activations. |
| Total number of activations with related attributes | | | Use up to 100 activations with related attributes. After reaching that limit, delete related attributes from existing activations or create activations with only direct attributes. |
| Total number of related attributes per activation | | | Learn more in Considerations for Selecting Related Attributes in Data 360 Activations. |
| Maximum records in a segment with related attributes | | | If the segment has more than 10 million records and contains related attributes when you publish, the segment doesn’t activate. Learn more in Considerations for Selecting Related Attributes in Data 360 Activations. |
| Maximum levels away from the Activation Membership DMO with related attributes | | | Choose attributes from DMOs up to 5 levels away from your selected Activation Membership DMO. Learn more in Considerations for Selecting Related Attributes in Data 360 Activations. |
| Maximum number of related attribute records selected | | | Select the number of records sent for a given attribute in your activation payload. |
| Number of ad audience activations to ad audience partners | | | Additional ad audience activations are available for purchase. The ad audience partners are Google Ads, Meta (Facebook), and Amazon Ads. One ad audience allows a customer to activate one segment to an ad audience partner. For example, sending one segment to a Meta account and one segment to a Google account equals two ad audiences. If a customer purchases 10 additional ad audience activations, the total is 13: 3 included plus 10 purchased. |
| Number of activation records processed per org per year for each ad audience | | | For example, customers with three ad audiences can consume up to 255 billion activation records processed in a 12-month period. |
| Maximum number of characters for file storage activation | | | |
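The ad-audience arithmetic above can be sketched directly. The 3 included audiences and the 255-billion figure for three audiences come from the table text; the per-audience yearly rate is inferred by dividing 255 billion by three and isn’t stated explicitly, so treat it as an assumption:

```python
# Sketch of the ad-audience math described above. INCLUDED_AD_AUDIENCES and
# the 255-billion/3-audience example come from the table; the per-audience
# rate below is inferred from that example, not documented directly.
INCLUDED_AD_AUDIENCES = 3
RECORDS_PER_AD_AUDIENCE_PER_YEAR = 255_000_000_000 // 3  # inferred: 85 billion

def total_ad_audiences(purchased: int) -> int:
    """Total ad audiences = 3 included + any purchased."""
    return INCLUDED_AD_AUDIENCES + purchased

def yearly_record_allowance(audiences: int) -> int:
    """Activation records processable in a 12-month period."""
    return audiences * RECORDS_PER_AD_AUDIENCE_PER_YEAR

print(total_ad_audiences(10))      # 13, matching the example
print(yearly_record_allowance(3))  # 255000000000, matching the example
```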
Calculated Insights Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| How often calculated insights are processed | | | Insights are processed from once a day to multiple times a day, depending on the volume of data and the complexity of queries. Calculated insights don’t run if there are no changes to source data, object mappings, or configurations, or if you’ve already reached the per-insight or per-org process limit. |
| Maximum number of times a calculated insight can be manually processed in any 24-hour period | | | |
| Maximum calculated insights execution time | | | Insights that run for longer than 2 hours can be terminated by the system. If a calculated insight runs for longer than 2 hours, review your data shape and optimize it. |
| Maximum number of nested calculated insights | | | Refers to how many existing insights you can use within a new insight. |
| Maximum number of dimensions per calculated insight | | | A dimension contains a qualitative value, such as a product name, date, or profile ID. |
| Maximum number of measures per calculated insight | | | A measure is a quantitative aggregated value, such as an average or total amount. |
| Total number of calculated insights per tenant | | | A tenant is a specific org or instance in Salesforce. |
| Maximum number of real-time insights | | | |
| Maximum number of real-time insight fields (dimensions and measures) | | | |
Data Actions Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Total number of data actions per tenant | | | |
| Total number of data action targets per tenant | | | This limit is enforced by the UI. |
| Total number of rules per data action | | | Refers to the number of filter rules in a data action definition. |
Data Explorer Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Total number of columns queried at a time | | | |
| Total number of rows displayed per page | | | |
Data Federation Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Number of external data lake objects allowed per org | | | Profiles in external tables count toward the billable profile limits, and any extra profiles beyond the contracted entitlements incur overages. |
Data Graph Guidelines and Limits
Data Ingestion Guidelines and Limits
General
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Maximum number of fields per data stream | | | |
| Maximum number of single field types in a data stream | | | |
| Total data size limit | | | |
| Total number of CRM custom objects | | | This limit applies to the number of custom objects ingested from each connected CRM org. |
| Total number of CRM orgs that can be connected to Data 360 | | | |
| Total number of data models | | | |
| Total number of data streams | | | |
| Total number of data lake objects (DLOs) | | | This limit applies to data lake objects created as part of data streams or in a standalone manner. |
| Total number of Marketing Cloud Personalization (Interaction Studio) datasets per instance | | | |
| Maximum number of rows for a Marketing Cloud Engagement Data Extension full extract | | | |
GCS, Azure, and S3 Bulk Ingestion
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Optimal number of file rows when creating S3 data streams | | | When creating an S3 data stream, you may need to use a file smaller than the data you’d like to ingest. The smaller file should contain the same headers and about 1,000 rows. After the data stream is created, larger files can be placed within the file path, provided they fall within the set limits. |
| Optimal size per CSV file | | | Sometimes, particularly for Day 0 loads, it’s necessary to move large data volumes. Recommended limits are 200 GB maximum per file and 1,000 files maximum per scheduled run. These two parameters aren’t intended to be stretched to their maximums concurrently. Instead, consider a tradeoff between the two extremes, and refer to the maximum data size guidelines for more context. |
| Optimal size per parquet file | | | |
| Maximum number of files per scheduled run | | | |
| Maximum size per uncompressed CSV file | | | |
| Maximum size per compressed CSV file | | | Compressed CSV files aren’t splittable, so they don’t allow for parallel processing, and their use is therefore discouraged. If you send a compressed file, it must be a single file per zip. |
| Maximum size per uncompressed parquet file | | | We recommend using parquet compression capabilities to reduce file size. |
| Maximum size per compressed parquet file | | | Compressed parquet files are splittable and support parallel processing during ingestion. There are several native compression options. Learn more about supported compression formats in Supported File Formats in Data 360. |
| Maximum data size for upsert | | | When a data stream is set to upsert, it’s best not to exceed a 50-GB aggregate across all files in a single run. This number is strongly influenced by the number of distinct data dates present in the table, the overall size of the table, and several other factors, such as the number of columns. As these inputs increase, you can experience performance degradation. For larger target tables and larger upsert ingestions, we strongly recommend using engagement tables and selecting an engagement date field that contains many distinct dates. Not using engagement tables or not selecting a high-cardinality date field can cause performance degradation. |
| Maximum data size for full refresh | | | Data size is the sum of all file sizes for a single data stream job. When a data stream is set to full refresh, it’s best not to exceed 1,000 GB across all files in a single run. This number is strongly influenced by the number of distinct data dates present in the table, the overall size of the table, and several other factors, such as the number of columns. As these inputs increase, you can experience performance degradation. |
| Maximum number of characters for a single cell | 70,000 | | |
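The file-size guidance above can be checked before a scheduled run. A minimal sketch under the stated guidelines (200 GB per CSV, 1,000 files per run, roughly 50 GB aggregate for upsert and 1,000 GB for full refresh); the `preflight` helper and its return shape are illustrative, not part of any Salesforce API:

```python
# Sketch: pre-flight check of a batch of files against the bulk ingestion
# guidance above. Takes (name, size_in_bytes) pairs so it stays independent
# of the filesystem; returns a list of guideline violations (empty = OK).
GB = 1024 ** 3
MAX_FILE_BYTES = 200 * GB          # per-file CSV guideline
MAX_FILES_PER_RUN = 1000           # files per scheduled run
MAX_AGGREGATE = {"upsert": 50 * GB, "full_refresh": 1000 * GB}

def preflight(files, mode="full_refresh"):
    files = list(files)
    problems = []
    if len(files) > MAX_FILES_PER_RUN:
        problems.append(f"{len(files)} files exceeds {MAX_FILES_PER_RUN} per run")
    for name, size in files:
        if size > MAX_FILE_BYTES:
            problems.append(f"{name} exceeds the 200-GB per-file guideline")
    if sum(size for _, size in files) > MAX_AGGREGATE[mode]:
        problems.append(f"aggregate size exceeds the {mode} guideline")
    return problems
```

For example, `preflight([("day0.csv", 60 * GB)], mode="upsert")` flags the run because the aggregate exceeds the 50-GB upsert guideline.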
Local File Upload
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Maximum number of characters for a single cell in a local file upload | | | |
| Supported delimiters | | | |
| Header row requirements | | | |
| Unsupported refresh of data streams | | | |
| Maximum number of data streams created from a local file upload | | | |
| Unsupported file or column names | | | |
| Primary key uniqueness restrictions | | | We recommend ensuring that your primary key column is unique before uploading a local file. |
SFTP Ingestion
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Maximum size per file | | | The SFTP Connector can load up to 1,000 files, each less than 2 GB, totaling no more than 30 GB per data stream run. If you have more data, use upsert mode across consecutive runs. |
| Maximum number of files included in a scheduled run | | | |
| Maximum data per data stream run | | | |
| Maximum size limit for a file allowed during field analysis | | | |
| Maximum file size for PGP-encrypted schema file analysis | | | |
| Maximum object size for fields | | | |
| Maximum total size allowed for manual extract | | | |
| Maximum total number of files allowed for manual extract | | | |
Data Model Object Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Fields per data model object | | | Both custom and system fields are counted toward this limit. |
| Maximum number of data model objects | | | |
| Maximum number of custom relationships per data model object | | | |
Data Shares Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Total number of data shares per org | | | |
| Total number of data share targets per org | | | This limit isn’t applicable for Salesforce Data 360 data share targets. |
| Maximum number of objects included in a data share | | | |
Data Transforms Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Total number of streaming data transforms per org | | | |
| Maximum number of characters in a streaming data transform SQL statement | | | |
| Maximum number of batch transforms that can be created in an org | | | Contact your account executive to adjust this limit. |
| Maximum runtime (duration) of a batch transform job before it’s canceled by the system | | | |
| Maximum concurrent batch transforms that can be run at a time | | | |
AI Model Builder Guidelines and Limits
Identity Resolution Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Total number of identity resolution rulesets | | | This limit is enforced in the UI. Consolidating profiles using identity resolution reduces the number of Individual profiles counted against contracted entitlements. All unified profiles created from all rulesets are summed, so deleting unneeded rulesets reduces the total number of unified profiles. Consolidating profiles using identity resolution impacts the number of known and anonymous profiles in your org. To understand how many anonymous profiles are allowed, review the Number of Anonymous Profiles limit. |
| Maximum scheduled job frequency per ruleset | | | Because the time of day can vary, ruleset jobs can occasionally run more than once in a 24-hour period. Scheduled ruleset jobs are skipped if there are no changes to source data, object mappings, or ruleset configurations. This limit is enforced automatically and isn’t configurable. When using Data 360 in a Developer org, ruleset runs aren’t scheduled. Use Run Ruleset to kick off a ruleset job manually. |
| Maximum number of ruleset jobs in any 24-hour period | | | Ruleset jobs don’t run if there are no changes to source data, object mappings, or ruleset configurations, or if you’ve already run 4 ruleset jobs in the last 24 hours. |
| Maximum number of match rules per ruleset | | | |
| Maximum number of match criteria per match rule | | | |
| Maximum number of source profiles that can be unified into a single unified profile | | | Large profiles usually result from poor data quality or match rules that are too broad. To reduce the number of matching records, clean your data or refine your match rules. |
| Maximum size of source records processed in a ruleset | | | Records larger than 15 KB are skipped when rulesets run. |
| Maximum combined character count of values reviewed by a match rule | | | If the sum of characters of the values matched by a single match rule is greater than 500, some values are truncated during matching, which can result in incorrect matches. |
| Match methods used during real-time matching | | | |
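The 500-character truncation behavior above can be checked on your side before configuring a match rule. A minimal sketch; the `exceeds_match_limit` helper name is illustrative, and Data 360 performs this handling internally:

```python
# Sketch: flag match-rule inputs whose combined character count exceeds the
# 500-character threshold noted above, since longer values are truncated
# during matching and can produce incorrect matches.
MAX_MATCH_CHARS = 500

def exceeds_match_limit(values) -> bool:
    """True if the summed length of the matched values exceeds 500 chars."""
    return sum(len(v) for v in values) > MAX_MATCH_CHARS

print(exceeds_match_limit(["Jane", "Doe", "jane@example.com"]))  # False
```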
Segmentation Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Total number of active segments | | | An active segment is a created segment with all functionality available. These limits are enforced by the UI. Learn more in Create a Segment in Data 360 and Segment Types and Statuses. |
| Total number of attributes with value suggestions | | | Value suggestions can be enabled for up to 500 attributes for your entire org. Learn more in Use Value Suggestions in Segmentation. |
| Total number of standard concurrent publishes | | | |
| Total number of filters | | | You can have up to 50 filters each in the Include and Exclude tabs. Learn more in Segment Your Data in Attributes. |
| Total number of months that events are queried for standard segments | | | The segmentation event limit is 24 months. This limit is enforced by the UI. For scheduled and manual publishes, segments error if a date range goes beyond the 24-month range. To run the segment again, adjust your event dates to fit that range. |
| Maximum number of scheduled publishes per standard segment per day | | | This limit is enforced by the UI. Learn more in Publish a Segment in Data 360. When using Data 360 in a Developer org, segment publishes aren’t scheduled. Use Publish Now to kick off publication manually. |
| Maximum number of rapid segments | | | |
| Total number of rapid concurrent publishes | | | |
| Total number of days that events are queried for rapid segments | | | The segmentation event limit is 7 days. This limit is enforced by the UI. For scheduled and manual publishes, segments error if a date range goes beyond the 7-day range. To run the segment again, adjust your event dates to fit that range. |
| Maximum number of scheduled publishes per rapid segment per day | | | |
| Real-time segments | | | |
| Total number of processed records for rapid segments | | | This limit is for records processed for a rapid segment, which is the sum of records present in the data streams associated with the segment. This limit isn’t associated with the segment population. |
| Maximum number of Einstein segments created per month | | | |
Streaming Insights Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Aggregation time window | | | Learn more about the differences between Streaming Insights and Calculated Insights. |
| Total number of dimensions | | | |
| Total number of measures | | | |
| Total number of streaming insights | | | You can create more than 5 streaming insights. However, only the first 5 streaming insights run. |
| Unsupported features | | | |
| Supported primary DMO objects | | | Primary objects are filtered DMOs created from any of these streaming connectors: |
| Allowed joins | | | During the identity resolution process, unified objects from the Individual DMO include these objects: Joins on objects unrelated to Individual or objects related to Account aren’t allowed. |
Unstructured Data and Search Index Guidelines and Limits
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Unstructured data and search indexes | | | |
API Guidelines and Limits
The concurrent request and query limits of Data 360 APIs are independent from the concurrency limits governing the Salesforce platform APIs.
APIs: Query, Insights, and Profile
There are three distinct classes of APIs used to extract data: Profile, Query, and Calculated Insights. The limits for these APIs in CDP orgs are based on the org’s Data 360 edition, not its Salesforce edition, except when running Data 360 in a Developer org.
Ingest Bulk API
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| API usage limits | | | After each request, your app must check the response code. The HTTP 429 Too Many Requests status code indicates that the app must reduce its request frequency. |
| Bulk job retention time | | | Bulk jobs with a status of Open or Upload Complete that are older than 7 days are deleted from the ingestion queue. |
| Maximum number of files per job | | | |
| Maximum payload size | | | Refers to the CSV files uploaded via the Bulk API. |
| Number of requests or jobs allowed per hour | | | |
| Number of concurrent requests or jobs allowed at one time | | | |
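The check-the-response-code guidance above is commonly implemented as a retry with exponential backoff. A minimal sketch, with `send` standing in for whatever function performs the actual Bulk API request and returns an HTTP status code; it is not a Salesforce-provided helper:

```python
import time

# Sketch: retry with exponential backoff when the service answers
# HTTP 429 Too Many Requests, per the guidance above. `send` is any
# zero-argument callable that performs one request and returns the
# HTTP status code.
def send_with_backoff(send, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        status = send()
        if status != 429:
            return status          # success or a non-throttling error
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return 429                     # still throttled after all retries
```

A caller would wrap its real request function, for example `send_with_backoff(lambda: upload_chunk(job_id, chunk))`, where `upload_chunk` is the caller’s own request code.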
Ingest Streaming API
These limits and guidelines apply to streaming ingestion across the Mobile and Web SDK and the Ingestion API.
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| API usage limits | | | After each request, your app must check the response code. The HTTP 429 Too Many Requests status code indicates that the app must reduce its request frequency. |
| Expected latency for Ingestion API | | | Data is processed asynchronously approximately every 3 minutes. Depending on the data volume and how busy the scheduler is, it can take additional time until the data is committed to storage and available for consumption. |
| Expected latency for Mobile and Web SDK and server-to-server applications | | | To reduce processing power and bandwidth requirements, mobile events are queued on the device. |
| Maximum number of records that can be deleted via Ingestion API deletion | | | |
| Maximum payload size per request | | | JSON data uploaded via the Streaming API has a maximum body size of 200 KB per request. The HTTP 403 Forbidden status code indicates that the API request exceeded the 200 KB limit. |
| Total number of requests per second across all Ingestion API object endpoints | | | The HTTP 429 Too Many Requests status code indicates that the app has exceeded the 250-request limit and must reduce its request frequency. |
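The 200-KB streaming payload limit above can be enforced client-side by measuring the serialized body before sending. A minimal sketch; the helper name is illustrative:

```python
import json

# Sketch: guard against the 200-KB streaming payload limit noted above by
# measuring the serialized JSON body before sending. The 200-KB figure
# comes from the table; the helper itself is illustrative.
MAX_BODY_BYTES = 200 * 1024

def within_payload_limit(payload: dict) -> bool:
    """True if the UTF-8 encoded JSON body fits the 200-KB limit."""
    body = json.dumps(payload).encode("utf-8")
    return len(body) <= MAX_BODY_BYTES

print(within_payload_limit({"data": [{"id": 1}]}))  # True
```

A request that fails this check should be split into smaller batches before being sent, rather than relying on the server’s HTTP 403 response.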
Profile API
| Feature or Function | Guidelines and Limits | Hard Limit? | Additional Information and Resources |
|---|---|---|---|
| Maximum number of records returned per call | | | |
| Total number of fields per record | | | |