Decommission Aged Hardware
MinIO AIStor supports decommissioning and removing server pools from a deployment with two or more pools. To decommission, there must be at least one remaining pool with sufficient available space to receive the objects from the decommissioned pools.
MinIO AIStor supports queueing multiple pools in a single decommission command. Each listed pool immediately enters a read-only status, but draining occurs one pool at a time.
Decommissioning is designed for removing an older server pool whose hardware is no longer sufficient or performant compared to the pools in the deployment. MinIO AIStor automatically migrates data from the decommissioned pools to the remaining pools in the deployment based on the ratio of free space available in each pool.
During the decommissioning process, MinIO AIStor routes read operations (for example, GET, LIST, HEAD) normally.
MinIO AIStor routes write operations (for example, PUT, versioned DELETE) to the remaining active pools in the deployment.
Versioned objects maintain their ordering throughout the migration process.
The procedures on this page decommission and remove one or more server pools from a MinIO AIStor deployment with at least two server pools.
Online decommissioning with zero downtime
MinIO AIStor’s decommissioning process is designed to keep the cluster fully operational for applications throughout the entire hardware retirement cycle. This is a fundamental architectural distinction from most storage systems, where hardware maintenance requires downtime, manual intervention, or significant performance degradation.
How it works
MinIO AIStor decommissions an entire server pool with a single command:
mc admin decommission start ALIAS POOL
Once started, MinIO AIStor:
- Immediately suspends the target pool from receiving new write operations.
- Continues serving read requests from the decommissioning pool for objects that have not yet migrated.
- Automatically migrates all objects to remaining pools, distributing data based on available free space.
- Processes queued pools serially, one pool at a time, to minimize impact on application workloads.
Applications require no changes. Reads, writes, deletes, and list operations all continue without interruption. MinIO AIStor handles the routing transparently.
Multiple pool queuing
You can queue multiple pools for decommissioning in a single operation. MinIO AIStor places all queued pools into read-only status immediately, then drains them one at a time in order. This serial approach ensures that the cluster dedicates its resources to moving data efficiently without overwhelming the remaining pools.
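As a sketch, queueing two pools in a single command might look like the following. The alias myaistor and the pool endpoint strings are placeholders; substitute the values from your own deployment topology:

```shell
# Hypothetical example: queue two pools for decommissioning in one command.
# Both pools become read-only immediately; draining proceeds one pool at a time.
mc admin decommission start myaistor/ \
  http://old-pool-{1...4}.example.net/mnt/disk{1...4} \
  http://old-pool-{5...8}.example.net/mnt/disk{1...4}
```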
Why this matters
Traditional storage systems require administrators to schedule maintenance windows, manually migrate data, or accept degraded performance during hardware retirement. MinIO AIStor treats hardware lifecycle management as a first-class operation rather than an exceptional maintenance event.
The following table compares MinIO AIStor’s decommission process to common alternatives:
| Capability | MinIO AIStor | Traditional SAN/NAS | Software-Defined Storage | Parallel File Systems |
|---|---|---|---|---|
| Cluster downtime | None | Often required for controller or shelf swaps | Not required, but performance degrades | Not required, but disruptive |
| Scope of operation | Entire pool (multiple servers and drives) | Per-controller or per-shelf | Per-drive | Per-storage target |
| Start the process | Single command | Multi-step, varies by vendor | Multi-step per drive, manual cleanup required | Multi-step per target, manual file migration |
| Application changes | None | May require LUN remapping or path reconfiguration | None, but homogeneous hardware may be required | May require client reconfiguration |
| Automatic data migration | Yes, to all remaining pools | Depends on method, may require manual migration | Yes, but may stall and require manual intervention | No, requires explicit per-file migration |
| Manual intervention | None | LIF, LUN, or policy management | Often required to unstick stalled rebalancing | Required throughout the process |
| Paid add-on required | No | May require migration licenses | No | No |
Key differences:
- Traditional SAN/NAS systems often operate at the controller or shelf level, requiring multi-step procedures that may involve LUN remapping, path reconfiguration, or scheduling maintenance windows. Some methods are explicitly documented as disruptive by the vendor.
- Software-defined storage platforms typically operate at the individual drive level rather than the pool level. Removing a single drive triggers data redistribution that can take weeks and may stall, requiring manual intervention. Some platforms also restrict shrinking operations to homogeneous hardware configurations.
- Parallel file systems require administrators to manually identify and migrate every file from the retiring storage target. Clients may need reconfiguration before the target can be removed, and the removal process requires running maintenance commands that force server restarts.
MinIO AIStor decommissions an entire server pool, handles all data migration automatically, and keeps the cluster fully available to applications throughout the process.
Hot pool reload for config.yaml deployments
Deployments that use a startup configuration file (--config config.yaml) can add new pools and remove fully decommissioned pools without restarting the cluster.
To apply pool changes:
- Update the config.yaml file on all nodes to include the new pool definition or remove the decommissioned pool entry.
- Send a SIGHUP signal to the MinIO AIStor process on each node:

  kill -HUP $(pgrep minio)

  For systemd-managed deployments:

  systemctl reload minio
MinIO AIStor validates the configuration across all nodes before applying changes.
All nodes must have an identical config.yaml file for the reload to succeed.
This allows administrators to complete the full hardware lifecycle (expand with new pools, decommission old pools, and remove decommissioned pools) without any cluster downtime or application interruption.
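As an illustration, removing a fully decommissioned pool amounts to deleting its entry from the pools list in config.yaml on every node before reloading. The following fragment is a hypothetical sketch; the hostnames are placeholders and the exact schema should be checked against your deployment's existing file:

```yaml
# Hypothetical config.yaml fragment showing the pools list.
pools:
  - args:
      - "http://old-pool-{1...4}.example.net/mnt/disk{1...4}"  # fully decommissioned; delete this entry on all nodes
  - args:
      - "http://new-pool-{1...4}.example.net/mnt/disk{1...4}"  # remaining active pool
```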
Prerequisites
Back up cluster settings first
Use the following commands to back up your cluster settings before starting the decommissioning process:
You can use these snapshots to restore bucket, IAM, and cluster settings to recover from user or process errors as necessary.
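For example, the following commands export bucket metadata, IAM settings, and server configuration. This is a sketch assuming these export subcommands are available in your mc version; the alias myaistor is a placeholder:

```shell
# Snapshot cluster settings before starting the decommission.
mc admin cluster bucket export myaistor/   # bucket metadata (lifecycle, replication, and similar settings)
mc admin cluster iam export myaistor/      # IAM policies, users, and groups
mc admin config export myaistor/ > config-backup.txt  # server configuration
```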
Networking and firewalls
Each node should have full bidirectional network access to every other node in the deployment.
For containerized or orchestrated infrastructures, this may require specific configuration of networking and routing components such as ingress or load balancers.
Certain operating systems may also require setting firewall rules.
For example, the following command explicitly opens the default MinIO AIStor API port 9000 on servers using firewalld:
firewall-cmd --permanent --zone=public --add-port=9000/tcp
firewall-cmd --reload
If you set a static MinIO AIStor Console port (e.g. :9001) you must also grant access to that port to ensure connectivity from external clients.
MinIO strongly recommends using a load balancer to manage connectivity to the cluster. The load balancer should use a “Least Connections” algorithm for routing requests to the MinIO AIStor deployment, since any MinIO AIStor node in the deployment can receive, route, or process client requests.
Several common load balancers are known to work well with MinIO.
Configuring firewalls or load balancers to support MinIO AIStor is out of scope for this procedure.
Deployment must have sufficient storage
The decommissioning process migrates objects from the target pool to other pools in the deployment. The total available storage on the deployment must exceed the total storage of the decommissioned pool.
Use the Erasure Code Calculator to determine the usable storage capacity. Then reduce that by the size of the objects already on the deployment.
For example, consider a deployment with the following distribution of used and free storage:
| Pool | Used Capacity | Total Capacity |
|---|---|---|
| Pool 1 | 100 TB | 200 TB |
| Pool 2 | 100 TB | 200 TB |
| Pool 3 | 100 TB | 200 TB |
Decommissioning Pool 1 requires distributing the 100TB of used storage across the remaining pools. Pool 2 and Pool 3 each have 100TB of unused storage space and can safely absorb the data stored on Pool 1.
However, if Pool 1 were full (that is, 200TB of used space), decommissioning would completely fill the remaining pools and potentially prevent any further write operations.
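The capacity check above amounts to simple arithmetic. The following sketch mirrors the example table (values in TB); the variable names are illustrative and the numbers are not output from any MinIO command:

```shell
# Capacity check before decommissioning Pool 1, using the example figures above.
POOL1_USED=100
POOL2_FREE=$((200 - 100))
POOL3_FREE=$((200 - 100))
REMAINING_FREE=$((POOL2_FREE + POOL3_FREE))

# The remaining pools must have at least as much free space as Pool 1 holds.
if [ "$POOL1_USED" -le "$REMAINING_FREE" ]; then
  echo "OK: remaining pools can absorb ${POOL1_USED}TB"
else
  echo "INSUFFICIENT: need ${POOL1_USED}TB, only ${REMAINING_FREE}TB free"
fi
```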
Considerations
Pool names for config.yaml deployments
Deployments that use a startup configuration file may display pool names as hash:<hex> identifiers instead of readable endpoint strings.
This occurs when a pool definition contains multiple args entries (for example, heterogeneous hostnames or IP addresses).
For example, mc admin decommission status may return output similar to the following for a config.yaml deployment:
┌─────┬─────────────────────┬──────────────────────────────────┬────────┐
│ ID  │ Pools               │ Capacity                         │ Status │
├─────┼─────────────────────┼──────────────────────────────────┼────────┤
│ 1st │ hash:a10f38e84ec12b │ 10 TiB (used) / 10 TiB (total)   │ Active │
│ 2nd │ hash:f4e21b6c9d07a3 │ 60 TiB (used) / 100 TiB (total)  │ Active │
│ 3rd │ hash:c7b92d1fa83e05 │ 40 TiB (used) / 100 TiB (total)  │ Active │
└─────┴─────────────────────┴──────────────────────────────────┴────────┘
To decommission the 1st pool, specify the full hash:<hex> string as the pool argument:
mc admin decommission start myaistor/ hash:a10f38e84ec12b
For more details, see Pool naming.
Replacing a server pool
For hardware upgrade cycles where you replace old pool hardware with a new pool, you should complete the expansion before starting the decommissioning of the old pool. Adding the new pool first allows the decommission process to transfer objects in a balanced way across all available pools, both existing and new.
Decommissioning requires that a cluster’s topology remain stable throughout the pool draining process. Do not attempt to perform expansion and decommission changes in a single step.
Decommissioning is resumable
MinIO AIStor resumes decommissioning if interrupted by transient issues such as deployment restarts or network failures.
For manually cancelled or failed decommissioning attempts, MinIO AIStor resumes only after you manually re-initiate the decommissioning operation.
The pool remains in the decommissioning state regardless of the interruption. A pool can never return to active status after decommissioning begins.
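To monitor an in-flight decommission or resume a cancelled or failed attempt, the following sketch uses the status and start subcommands; the alias and pool endpoint string are placeholders:

```shell
# Check the progress of all pools, including any currently draining.
mc admin decommission status myaistor/

# After a manual cancel or a failed attempt, re-issue the same start
# command to resume draining the pool.
mc admin decommission start myaistor/ http://old-pool-{1...4}.example.net/mnt/disk{1...4}
```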
Decommissioning is non-disruptive
Removing a decommissioned server pool requires restarting all MinIO AIStor nodes in the deployment at around the same time.
MinIO strongly recommends restarting all Object Store processes in a deployment simultaneously. MinIO AIStor operations are atomic and strictly consistent. As such, the restart procedure is non-disruptive to applications and ongoing operations.
Do not perform “rolling” (that is, one node at a time) restarts.
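One way to restart every node at the same time is the mc admin service restart command, which signals all nodes in the deployment at once; the alias myaistor is a placeholder:

```shell
# Restart every MinIO AIStor process in the deployment simultaneously.
mc admin service restart myaistor/
```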
Decommissioning ignores expired objects and trailing DeleteMarker
Decommissioning ignores objects where the only remaining version is a DeleteMarker.
This avoids creating empty metadata on the remaining server pool(s) for objects that are effectively fully deleted.
Decommissioning also ignores object versions which have expired based on the configured lifecycle rules for the parent bucket.
You can monitor ignored delete markers and expired objects during the decommission process with mc admin trace --call decommission.
Once the decommissioning process completes, you can safely shut down that pool.
Since the only remaining data was scheduled for deletion or was only a DeleteMarker, you can safely clear or destroy those drives as per your internal procedures.
Behavior
Final listing check
At the end of the decommission process, MinIO AIStor performs a final listing of objects on the pool. If the listing returns empty, MinIO AIStor marks the decommission as successfully completed. If the listing returns any objects, MinIO AIStor returns an error indicating that the decommission process failed.
If the decommission fails, customers should open a SUBNET issue for further assistance before retrying the decommission.
Decommissioning a server with tiering enabled
For deployments with tiering enabled and active, decommissioning moves the object references to a new active pool.
Applications can continue issuing GET requests against those objects, and MinIO AIStor transparently retrieves them from the remote tier.
Unreadable objects
The decommissioning process ignores objects that lack sufficient quorum for a read operation, such as objects that were never successfully written or objects with shards lost to bitrot or drive failure.
MinIO AIStor cannot reconstruct objects that have lost read quorum. It is not possible to move lost objects to another drive. As a result, the decommission process skips lost objects and continues on the remaining objects in the pool.
Lifecycle policy integration
Decommissioning respects lifecycle management policies when migrating objects.
MinIO AIStor skips object versions that lifecycle rules would delete, such as versions exceeding NewerNoncurrentVersions limits.
This avoids unnecessary data copying during decommission.
MinIO AIStor dynamically reloads lifecycle policies during decommission, picking up policy changes within one minute. If you modify lifecycle policies during a decommission operation, MinIO AIStor detects the change and adjusts its behavior accordingly.
Requires AIStor Server RELEASE.2026-02-02T23-40-11Z or later.