Expand an AIStor Deployment on Linux
The following procedures add a server pool to an existing AIStor cluster running on bare-metal infrastructure. Each pool expands the total available storage capacity of the cluster while maintaining overall availability.
A new pool is a set of new nodes and drives that meet the cluster’s erasure set requirements and parity levels.
You can use MinIO's Erasure Code Calculator to check the Erasure Code Stripe Size (K+M) of your new pool and confirm that it meets the EC:M value of the current cluster.
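As a hypothetical illustration of that check, suppose the existing cluster runs 16-drive erasure sets with EC:4 parity; a new pool of 16 nodes with 16 drives each also forms 16-drive erasure sets, so it must support a 12+4 stripe. The values below are assumptions for the example, not values read from a live cluster:

```shell
# Hypothetical stripe check; all values are example assumptions.
DRIVES_PER_SET=16   # erasure set stripe size (K+M) of the new pool
PARITY=4            # EC:M parity level of the existing cluster
DATA=$((DRIVES_PER_SET - PARITY))
echo "new pool stripe: ${DATA} data + ${PARITY} parity drives per erasure set"
```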
MinIO strongly recommends seeking an architecture review from MinIO Engineering through SUBNET to analyze potential performance impacts before committing to a new pool for your cluster.
All commands provided below use example values. Replace these values with those appropriate for your cluster.
Pool Hot-Reload (Configuration File Method)
This method applies to clusters that define their pools in a configuration file passed to the server with the `--config` flag.
If your cluster uses environment variables or command-line arguments to define pools, use the environment variable method instead.
The pool hot-reload method adds storage pools to a running cluster without restarting the server.
This method uses SIGHUP to trigger a configuration reload, allowing the cluster to detect and integrate new pools dynamically.
Requires AIStor RELEASE.2026-02-02T23-40-11Z or later.
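Before relying on hot-reload, you can confirm that every node runs a supported release; a minimal check, assuming the server binary is installed on the PATH as `minio`:

```shell
# Print the server release string; it must report
# RELEASE.2026-02-02T23-40-11Z or later for pool hot-reload.
minio --version
```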
1. Deploy AIStor on new hardware.

   Follow the installation guide to install AIStor onto the new hardware.
   Do not start the new cluster: stop at the step Review the environment file.
2. Update the configuration file.

   On one of the existing cluster nodes, edit the configuration file to add the new pool to the `pools` array. For example, if your current configuration defines a single pool:

   ```yaml
   version: v2
   address: ":9000"
   console-address: ":9001"
   rootUser: "minioadmin"
   rootPassword: "secret-CHANGE-ME"
   pools:
     - args:
         - "https://minio-{1...16}.example.net:9000/mnt/drive-{1...16}/minio"
   ```

   Add a second pool by appending another entry to the `pools` array:

   ```yaml
   version: v2
   address: ":9000"
   console-address: ":9001"
   rootUser: "minioadmin"
   rootPassword: "secret-CHANGE-ME"
   pools:
     - args:
         - "https://minio-{1...16}.example.net:9000/mnt/drive-{1...16}/minio"
     - args:
         - "https://minio-{17...32}.example.net:9000/mnt/drive-{1...16}/minio"
   ```

   Append new pools at the end of the list of existing pools. The order of existing pools must be preserved.

   Save the file and copy the updated configuration file to all nodes in the cluster, including both existing and new pool nodes. Use `shasum` or a similar utility to confirm that the configuration matches on all existing and new nodes, as in the sketch below.
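   One way to distribute the file and compare checksums, assuming the configuration lives at /etc/minio/config.yaml and using the example hostnames above (both are assumptions for this sketch):

   ```shell
   # Copy the updated configuration to every node, then compare checksums.
   # Run from the node where you edited the file.
   for i in $(seq 1 32); do
     scp /etc/minio/config.yaml "minio-${i}.example.net:/etc/minio/config.yaml"
   done

   shasum /etc/minio/config.yaml   # local reference checksum
   for i in $(seq 1 32); do
     ssh "minio-${i}.example.net" 'shasum /etc/minio/config.yaml'
   done
   ```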
3. Start the new pool nodes.

   On the new pool nodes, start the AIStor service:

   ```shell
   systemctl start minio
   ```
4. Trigger the configuration reload.

   Send a SIGHUP signal to the AIStor server process on any node in the existing cluster:

   ```shell
   systemctl reload minio
   ```

   You only need to send the signal to one node. The server validates the configuration across all cluster nodes and integrates the new pool if validation succeeds.
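   If a node does not manage the server with systemd, you can deliver the signal directly instead; a minimal sketch, assuming the server process is named `minio`:

   ```shell
   # Send SIGHUP straight to the running server process.
   kill -HUP "$(pidof minio)"
   ```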
5. Monitor the cluster status.

   Use the `mc admin info` command to verify the new pool appears in the cluster:

   ```shell
   mc admin info ALIAS
   ```

   Monitor the server logs on all nodes to observe the pool addition process:

   ```shell
   journalctl -u minio -f
   ```

   The logs should show messages indicating the new pool was successfully integrated.
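   For a scripted check, you can inspect the JSON output instead; the exact JSON layout varies by release, so treat the `.info.pools` path below as an assumption to verify against your version:

   ```shell
   # List the pool indexes the cluster reports (requires jq).
   mc admin info ALIAS --json | jq '.info.pools | keys'
   ```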
6. Next steps

   After validating cluster health and expanded storage, update any load balancers, reverse proxies, or other network management tools with the new hostnames of the added server pool. This ensures that incoming connections balance appropriately across the expanded topology.
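   Before adding the new hostnames to the load balancer rotation, you can confirm that each new node answers health probes; a sketch using the example hostnames from this procedure:

   ```shell
   # Probe the liveness endpoint on each new pool node.
   for i in $(seq 17 32); do
     curl -sf "https://minio-${i}.example.net:9000/minio/health/live" \
       && echo "minio-${i}: healthy" || echo "minio-${i}: not ready"
   done
   ```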
Environment Variable Method
This method requires restarting the cluster to apply the new pool configuration.
1. Deploy AIStor on new hardware.

   Follow the installation guide to install AIStor onto the new hardware.
   Do not start the new cluster: stop at the step Review the environment file.
2. Update the cluster topology.

   AIStor uses the `MINIO_VOLUMES` environment variable to determine the cluster topology. When adding a new pool, append the new set of hostnames and volumes to the existing set. For example, consider the following `MINIO_VOLUMES` describing a single pool consisting of 16 nodes and 16 drives:

   ```shell
   MINIO_VOLUMES="https://minio-{1...16}.example.net:9000/mnt/drive-{1...16}/minio"
   ```

   To expand that topology with another pool of 16 nodes and 16 drives, append the new set of hosts to the existing value:

   ```shell
   MINIO_VOLUMES="https://minio-{1...16}.example.net:9000/mnt/drive-{1...16}/minio https://minio-{17...32}.example.net:9000/mnt/drive-{1...16}/minio"
   ```

   You must include the protocol `http[s]://` for each new pool. All pools should have matching protocols.

   Make this change on all hosts in the AIStor cluster, including the existing pools. The environment files should match across all nodes. Use `shasum` or a similar utility to validate that all contents match.
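   A quick sanity check before restarting is to confirm that the new hostnames resolve from the existing nodes. Note that AIStor's `{17...32}` expansion is server-side syntax, while bash brace expansion uses `{17..32}`; the hostnames are the examples from above:

   ```shell
   # Verify DNS resolution for each new pool hostname.
   for host in minio-{17..32}.example.net; do
     getent hosts "$host" >/dev/null || echo "unresolved: $host"
   done
   ```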
3. Initialize the new cluster topology.

   You must simultaneously:

   - Restart the existing AIStor cluster, and
   - Start the new AIStor server nodes.

   For the existing nodes, you can use the `mc admin service restart` command to restart all nodes at once. For the new nodes, run `systemctl start minio` on all nodes in parallel using your preferred shell tooling, as in the sketch below.
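   A minimal sketch of both actions, assuming an mc alias named ALIAS for the existing cluster and SSH access to the example hostnames:

   ```shell
   # Restart every node in the existing cluster with a single command.
   mc admin service restart ALIAS

   # Start the new pool nodes in parallel.
   for i in $(seq 17 32); do
     ssh "minio-${i}.example.net" 'sudo systemctl start minio' &
   done
   wait
   ```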
4. Monitor the cluster status.

   Use the `mc admin info` command and the `journalctl -u minio` system utility to monitor status and logging across the cluster. While the nodes come online and synchronize, the system logs may show increased churn in the form of warnings and errors.
5. Next steps

   After validating cluster health and expanded storage, update any load balancers, reverse proxies, or other network management tools with the new hostnames of the added server pool. This ensures that incoming connections balance appropriately across the expanded topology.