
Salesforce Org Migrations overview

Publication Date: May 16, 2025
Description

Use this article to understand what Salesforce organization migrations are, and how they work within Salesforce's infrastructure.

 

Resolution


What is an Org?

An org is the virtual space provided to an individual customer of Salesforce, and includes all customer data and applications. It’s composed of Systems of Record (SOR) that store customer data and metadata such as:

  • Relational Database (DB)
  • NoSQL Database (HBase)
  • FileForce (Keystone)

In addition to these datastores, there are other services that may store state for the org (search indexes in Solr, Domain Name System (DNS) for MyDomains, etc.). This is not an exhaustive list of datastores, and the list grows as we improve our infrastructure. For more information, please refer to Salesforce architecture documentation.

What is an Org Migration?

An org migration is a set of processes and technologies that move a production org from a source Salesforce instance to a target Salesforce instance. The org move is orchestrated by copying and/or regenerating customer data and metadata.

What Happens During an Org Migration Window?

During an org migration window, the org goes through a period where org access is in read-only mode. This is referred to as the org migration event window, and it is required so that the migration processes can make a consistent copy of the org’s data. The org migration tool orchestrates copying data from all source pod SORs to the corresponding target pod SORs. Once all SOR data has been copied successfully, the org is ready to be activated on the target instance.

This document explains the mechanisms used to copy customer data from the three SORs.

Relational Database (DB)

Since Salesforce is a multi-tenant system in which multiple tenants share database resources, standard database migration tools do not fit our use case. Instead, we built a custom data copy tool that runs on Salesforce application servers. It has two stages: Copy and Validation.

Copy

Data is stored in database tables. Table copy takes place in chunks. Chunks may be copied in parallel for performance. The following steps are involved in copying a chunk:

1. Read the data from the source pod’s database using standard Structured Query Language (SQL) through Java Database Connectivity (JDBC)

2. Transport the data to the target pod over HTTPS using end-to-end Transport Layer Security (TLS) encryption

3. Insert the data into the target pod's database using standard SQL through JDBC. In the event of a chunk copy failure (failures could happen anywhere in the pipeline - source, network, target), the entire chunk copy is retried. There is no limit on the number of retries.
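The chunked copy with unbounded retries can be sketched roughly as follows. This is a minimal illustration only: it uses SQLite in place of the production relational databases and a plain function call in place of the cross-pod HTTPS/TLS transport, and the table and column names are hypothetical.

```python
import sqlite3

CHUNK_SIZE = 2  # rows per chunk; real chunk sizes are far larger


def copy_chunk(src, dst, table, offset, limit):
    # 1. Read the chunk from the source database (SQL over JDBC in production).
    rows = src.execute(
        f"SELECT id, val FROM {table} ORDER BY id LIMIT ? OFFSET ?",
        (limit, offset)).fetchall()
    # 2. Transport: in production this hop crosses pods over HTTPS with
    #    end-to-end TLS; here both databases are local.
    # 3. Insert into the target; OR REPLACE makes a retried chunk idempotent.
    dst.executemany(f"INSERT OR REPLACE INTO {table} VALUES (?, ?)", rows)
    dst.commit()
    return len(rows)


def copy_table(src, dst, table):
    offset = 0
    while True:
        while True:  # a failed chunk is retried with no retry limit
            try:
                copied = copy_chunk(src, dst, table, offset, CHUNK_SIZE)
                break
            except sqlite3.Error:
                pass
        if copied < CHUNK_SIZE:  # short chunk means end of table
            return
        offset += CHUNK_SIZE


# Demo: copy a 5-row table between two in-memory databases.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, val TEXT)")
src.executemany("INSERT INTO account VALUES (?, ?)",
                [(i, f"row{i}") for i in range(5)])
src.commit()
copy_table(src, dst, "account")
print(dst.execute("SELECT COUNT(*) FROM account").fetchone()[0])
```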

Validation

After all table data has been copied, we run a validation process. This process performs the following steps on every table:

1. Run SQL queries using JDBC to collect the row count and checksum (of a subset of fields) on the source pod’s database and target pod’s database

2. Compare row counts and checksums from source and target pods.

If all the validations complete successfully, the Relational Database copy has completed successfully.
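The validation pass can be sketched like this. It is a hedged illustration with SQLite: the real tool runs its count and checksum queries over JDBC, and the production checksum function is internal, so a CRC32-based stand-in is used here.

```python
import sqlite3
import zlib


def table_fingerprint(conn, table, fields):
    """Row count plus a checksum computed over a subset of fields."""
    count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    checksum = 0
    for row in conn.execute(f"SELECT {', '.join(fields)} FROM {table}"):
        checksum ^= zlib.crc32(repr(row).encode())  # XOR: row order irrelevant
    return count, checksum


def tables_match(src, dst, table, fields):
    """Compare row counts and checksums from source and target."""
    return (table_fingerprint(src, table, fields)
            == table_fingerprint(dst, table, fields))


# Demo: identical tables match; a diverging row is detected.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE account (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO account VALUES (?, ?)",
                     [(1, "Acme"), (2, "Globex")])
ok = tables_match(src, dst, "account", ["id", "name"])
dst.execute("UPDATE account SET name = 'Oops' WHERE id = 2")
bad = tables_match(src, dst, "account", ["id", "name"])
print(ok, bad)
```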

FileForce (Keystone)

Keystone is composed of a metadata catalog and a store for extents. The metadata catalog is stored in the relational database as tables. The extent store is API-driven and globally accessible: target instances can access extents on source instances.

During the org migration window, we copy just the metadata catalog from source pod to target pod using the mechanism described in the previous section, "Relational Database (DB)". After the migration the metadata catalog on the target instance will point to extents on the source instance, and requests to access files will fetch extents from the source instance initially. 

Migration of data on the extent stores from source to target instance is done asynchronously outside of the org migration window. This is done to reduce org read-only downtime.
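The read path immediately after activation can be sketched as follows. The catalog, store, and fetch interfaces here are hypothetical stand-ins for Keystone internals: the catalog maps each extent to the instance that currently holds it, and reads fall back to the globally accessible source store for extents that have not yet been copied.

```python
def read_extent(extent_id, catalog, target_store, fetch_from_source):
    """Resolve an extent through the metadata catalog; fall back to the
    source instance's extent store when no local copy exists yet."""
    if catalog.get(extent_id) == "target":
        return target_store[extent_id]
    # Globally accessible extent store: a cross-instance API call in practice.
    return fetch_from_source(extent_id)


# Demo: one extent already copied locally, one still served from the source.
catalog = {"ext-1": "target", "ext-2": "source"}
target_store = {"ext-1": b"local bytes"}
source_store = {"ext-1": b"local bytes", "ext-2": b"remote bytes"}
a = read_extent("ext-1", catalog, target_store, source_store.get)
b = read_extent("ext-2", catalog, target_store, source_store.get)
print(a, b)
```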

NoSQL Database (HBase)

Migrating data between two Bigdata (HBase) clusters is a two-pronged process. First, a replication process is enabled between source and target, so that any data written on the source side starts appearing on the target cluster. Second, the migration copies the data that already exists from source to target. This is a longer process, and its duration depends on how much data has to be copied over.

Specifically, when an org is scheduled for migration, the steps performed are:

1. Set up cross-cluster trust so that processes on the source cluster can communicate with those on the target cluster. Our clusters are Kerberos-secured and require cross-cluster trust to be established.

2. Initiate the replication start job such that data starts to flow to target cluster (only for the org).

3. Initiate copy job to copy the existing data for the org.

4. Copy metadata for the given org to the target cluster. This is needed for copying over Phoenix table views that won’t get replicated (in the current release).

5. Ensure data copy jobs complete.

6. Initiate teardown of trust between the two clusters.
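The six steps above can be sketched as an ordered pipeline. `ClusterOps` is a hypothetical stand-in for the real cluster-operations tooling, and the org ID is an example value; the sketch only shows the ordering of the steps.

```python
class ClusterOps:
    """Hypothetical stand-in that records each migration step it performs."""
    def __init__(self):
        self.log = []

    def setup_cross_cluster_trust(self):  self.log.append("setup_trust")
    def start_replication(self, org_id):  self.log.append("start_replication")
    def copy_existing_data(self, org_id): self.log.append("copy_existing")
    def copy_metadata(self, org_id):      self.log.append("copy_metadata")
    def wait_for_copy_jobs(self, org_id): self.log.append("await_copy")
    def teardown_trust(self):             self.log.append("teardown_trust")


def migrate_org_hbase(org_id, ops):
    ops.setup_cross_cluster_trust()   # 1. Kerberos cross-cluster trust
    ops.start_replication(org_id)     # 2. new writes begin flowing to target
    ops.copy_existing_data(org_id)    # 3. bulk copy of pre-existing data
    ops.copy_metadata(org_id)         # 4. Phoenix views are not replicated
    ops.wait_for_copy_jobs(org_id)    # 5. block until copy jobs complete
    ops.teardown_trust()              # 6. remove cross-cluster trust
    return ops.log


ops = ClusterOps()
order = migrate_org_hbase("org-123", ops)
print(order)
```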

What Happens After an Org Migration?

On activation, the org will once again accept read-write requests serviced on the target pod. New writes will land on the target pod SOR stores. 

Migration of FileForce data from the source instance occurs asynchronously after the org has been activated. A periodic process on the target instance scans all Keystone metadata for extents that do not yet have copies in the local store. Since a recently migrated org still has extents being accessed from source stores, a copy process enqueues store-to-store copy operations for all of the migrated org's extent data. These operations are retried until they succeed. For encrypted blobs, the store-to-store copy operations have an additional level of security: decryption keys are stored as metadata rows and copied separately as part of the Relational Database copy process.
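The retry-until-success behavior of the asynchronous extent copy can be sketched as a simple work queue. This is a minimal illustration under assumed interfaces; the real process is a periodic scanner with durable queues, and `flaky_copy` here is a hypothetical copy operation that fails once before succeeding.

```python
from collections import deque


def drain_copy_queue(extent_ids, store_to_store_copy):
    """Re-enqueue failed store-to-store copies until every one succeeds."""
    queue = deque(extent_ids)
    copied = []
    while queue:
        extent = queue.popleft()
        if store_to_store_copy(extent):
            copied.append(extent)
        else:
            queue.append(extent)  # retried until success
    return copied


# Demo: "ext-2" fails on its first attempt, then succeeds on retry.
failures_left = {"ext-2": 1}


def flaky_copy(extent):
    if failures_left.get(extent, 0) > 0:
        failures_left[extent] -= 1
        return False
    return True


result = drain_copy_queue(["ext-1", "ext-2", "ext-3"], flaky_copy)
print(result)
```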

Given the asynchronous nature of the FileForce copy process, copying to the target FileForce stores can take up to 2 weeks after the org is activated. Throughout this copy process, users have uninterrupted access to all of their FileForce data. In certain org migrations performed for data residency purposes, the FileForce copy process is triggered immediately after the Relational Database copy completes in order to expedite it. Even in this case, it can still take up to 2 weeks for all data to be copied to the target instance.

Sandboxes

Sandboxes are not moved as part of an org migration. Existing sandboxes remain on the CS instance where they were located until the customer either deletes or refreshes them; upon refresh, the new sandbox org is created in the same region as the production org. Metadata migration tools such as Change Sets and Ant continue to operate exactly as they did when the sandbox and production instances were in the same region as each other.
 

Search Operations

During an org migration, search data is transferred from source servers to target servers. Source servers are backed up as part of standard search backup procedures. However, backups for an org scheduled for migration are executed with a higher priority to ensure that all data is up to date. No customer action is required.

During the migration, search data backups are restored on the target servers. No data is removed from the source; leaving the source data in place allows Salesforce to easily roll back or abort the operation with no customer impact.

The restore process generally ends during off-business hours. For large orgs, the restore phase can take longer, which can impact search-dependent operations (for example, record lookup and content search).
 

See Also

How to Prepare for an Org Migration
 

Knowledge Article Number

000384180

 