Resynchronize Remote after Data Loss

The procedure on this page resynchronizes the contents of an AIStor bucket using a healthy replication remote. Resynchronization supports recovery after partial or total loss of data on an AIStor deployment in a replication configuration.

For example, consider an AIStor active-active replication configuration similar to the following:

Active-Active Replication synchronizes data between two remote deployments.

Resynchronization lets you use the healthy data on one of the participating AIStor deployments as the source for rebuilding the other deployment.

Resynchronization is a per-bucket process. You must repeat resynchronization for each bucket on the remote that suffered partial or total data loss.
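
For deployments where many buckets need resynchronization, you can script the per-bucket procedure described below. The following is a minimal sketch, not official tooling: it assumes the jq utility is installed, that SOURCE is a placeholder alias for the healthy deployment, and that each bucket has exactly one replication rule.

    # Hypothetical sketch: start resynchronization for every bucket on SOURCE.
    # Assumes jq is installed and each bucket has a single replication rule;
    # buckets with multiple rules need the correct ARN selected manually.
    for BUCKET in $(mc ls --json SOURCE | jq -r '.key' | tr -d '/'); do
      ARN=$(mc replicate ls "SOURCE/${BUCKET}" --json | jq -r '.rule.Destination.Bucket')
      mc replicate resync start --remote-bucket "${ARN}" "SOURCE/${BUCKET}"
    done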

Requirements

Replication requires that all participating clusters meet the bucket replication requirements. This procedure assumes you have reviewed and validated those requirements.

The following additional requirements apply when resynchronizing data.

AIStor Deployments Must Be Online

Resynchronization requires that both the source and target deployments be online and able to accept read and write operations. The source must have complete network connectivity to the remote.

The remote deployment may be “unhealthy” in that it has suffered partial or total data loss. Resynchronization addresses the data loss as long as both source and destination maintain connectivity.
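
To verify connectivity before starting, you can query both deployments. A minimal check, assuming SOURCE and TARGET are aliases already configured for the healthy and unhealthy deployments respectively:

    # Both commands should return server information if the deployments are reachable.
    mc admin info SOURCE
    mc admin info TARGET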

Resynchronization Requires Existing Replication Configuration

Resynchronization requires that the healthy source deployment have an existing replication configuration for the unhealthy target bucket. Additionally, resynchronization applies only to replication rules created with the existing object replication option.

Use mc replicate ls to review the configured replication rules and targets for the healthy source bucket.
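
If a rule was created without existing object replication, you can enable it before resynchronizing. A hedged sketch using mc replicate update, where RULE_ID is the rule ID reported by mc replicate ls and the --replicate list is an assumed set of features for the rule:

    # Enable existing object replication on an already-configured rule.
    # The --replicate list replaces the rule's feature set, so include
    # every replication feature the rule should retain.
    mc replicate update SOURCE/BUCKET --id "RULE_ID" \
      --replicate "delete,delete-marker,existing-objects"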

Considerations

Resynchronization Requires Time

Resynchronization is a background process that continually checks objects in the source AIStor bucket and copies them to the remote as needed. The time required for resynchronization to complete varies with the number and size of objects, the throughput to the remote AIStor deployment, and the load on the source AIStor deployment. The total time to completion is therefore generally not predictable.

AIStor recommends configuring load balancers or proxies to direct traffic only to the healthy cluster until synchronization completes. The following commands can provide insight into the resynchronization status:

  • mc replicate resync status on the source to track resynchronization progress.
  • mc replicate status on the source and remote to track ongoing replication activity.
  • Run mc ls -r --versions ALIAS/BUCKET | wc -l against both the source and remote to compare the total number of objects and object versions on each, as shown in the sketch below.
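
The object-count comparison can be scripted. A minimal sketch, where SOURCE and TARGET are placeholder aliases for the healthy and resynchronizing deployments:

    # Compare total object and version counts between the two sites.
    # Matching counts suggest, but do not by themselves prove, full synchronization.
    echo "source: $(mc ls -r --versions SOURCE/BUCKET | wc -l)"
    echo "target: $(mc ls -r --versions TARGET/BUCKET | wc -l)"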

Resynchronize Objects after Data Loss

  1. List the Configured Replication Targets on the Healthy Source.

    Run the mc replicate ls command to list the configured remote targets on the healthy SOURCE deployment for the BUCKET that requires resynchronization.

    mc replicate ls SOURCE/BUCKET --json
    
    • Replace SOURCE with the alias of the source AIStor deployment.
    • Replace BUCKET with the name of the bucket to use as the source for resynchronization.

    The output resembles the following:

    {
       "op": "",
       "status": "success",
       "url": "",
       "rule": {
          "ID": "cer1tuk9a3p5j68crk60",
          "Status": "Enabled",
          "Priority": 0,
          "DeleteMarkerReplication": {
             "Status": "Enabled"
          },
          "DeleteReplication": {
             "Status": "Enabled"
          },
          "Destination": {
             "Bucket": "arn:minio:replication::UUID:BUCKET"
          },
          "Filter": {
             "And": {},
             "Tag": {}
          },
          "SourceSelectionCriteria": {
             "ReplicaModifications": {
                "Status": "Enabled"
             }
          },
          "ExistingObjectReplication": {
             "Status": "Enabled"
          }
       }
    }
    

    Each document in the output represents one configured replication rule. The Destination.Bucket field specifies the ARN for a given rule on the bucket. Identify the ARN of the rule that targets the remote bucket you want to rebuild.
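
    If the bucket has multiple replication rules, you can extract just the destination ARNs. A small sketch, assuming the jq utility is available:

    # Print the destination ARN of each configured replication rule.
    mc replicate ls SOURCE/BUCKET --json | jq -r '.rule.Destination.Bucket'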

  2. Start the Resynchronization Procedure.

    Run the mc replicate resync start command to begin the resynchronization process:

    mc replicate resync start --remote-bucket "arn:minio:replication::UUID:BUCKET" SOURCE/BUCKET
    
    • Replace the --remote-bucket value with the ARN of the unhealthy BUCKET on the TARGET AIStor deployment.
    • Replace SOURCE with the alias of the source AIStor deployment.
    • Replace BUCKET with the name of the bucket on the healthy SOURCE AIStor deployment.

    The command returns a resynchronization job ID indicating that the process has begun.

  3. Monitor Resynchronization.

    Use the mc replicate resync status command on the source deployment to track resynchronization progress:

    mc replicate resync status SOURCE/BUCKET
    

    The output resembles the following:

    Resync status summary:
    ● arn:minio:replication::6593d572-4dc3-4bb9-8d90-7f79cc612f01:data
       Status: Ongoing
       Replication Status | Size (Bytes)    | Count
       Replicated         | 2.3 GiB         | 18
       Failed             | 0 B             | 0
    

    The Status updates to Completed once the resynchronization process completes.
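
    To wait for completion from a script, you can poll the status output. A minimal sketch, not official tooling, that checks once per minute:

    # Poll until the resynchronization status reports Completed.
    until mc replicate resync status SOURCE/BUCKET | grep -q "Completed"; do
      sleep 60
    done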

  4. Next Steps

    • If the TARGET bucket damage extends to replication rules, you must recreate those rules to match the previous replication configuration. See Enable Two-Way Server-Side Bucket Replication for additional guidance.
    • Perform basic validation that all buckets in the replication configuration show similar results for commands such as mc ls and mc stat; see the validation sketch after this list.
    • After restoring any replication rules and verifying replication between sites, you can configure the reverse proxy, load balancer, or other network control plane managing connections to resume sending traffic to the resynchronized deployment.
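
    A basic validation pass combining those checks might resemble the following sketch; SOURCE and TARGET are placeholder aliases, and BUCKET is one replicated bucket:

    # Summarize each site; the outputs should closely match once replication settles.
    for ALIAS in SOURCE TARGET; do
      echo "--- ${ALIAS}/BUCKET ---"
      mc stat "${ALIAS}/BUCKET"
      mc ls -r --versions "${ALIAS}/BUCKET" | wc -l
    done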