Distributed MinIO Quickstart Guide

MinIO in distributed mode lets you pool multiple drives (even on different machines) into a single object storage server. As drives are distributed across several nodes, distributed MinIO can withstand multiple node failures and yet ensure full data protection.

Why distributed MinIO?

MinIO in distributed mode can help you set up a highly-available storage system with a single object storage deployment. With distributed MinIO, you can optimally use storage devices, irrespective of their location in a network.

Data protection

Distributed MinIO provides protection against multiple node/drive failures and bit rot using erasure code. As the minimum number of disks required for distributed MinIO is 4 (the same as the minimum required for erasure coding), erasure coding kicks in automatically when you launch distributed MinIO.

High availability

A stand-alone MinIO server would go down if the server hosting the disks goes offline. In contrast, a distributed MinIO setup with n disks keeps your data safe as long as n/2 or more disks are online. You'll need a minimum of (n/2 + 1) disks, the write quorum, to create new objects though.

For example, a 16-node distributed MinIO setup with 16 disks per node would continue serving files even if up to 8 servers are offline. But you'll need at least 9 servers online to create new objects.
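As a quick sanity check on these numbers, the quorum arithmetic for the 16-node, 16-drives-per-node example can be sketched in the shell (the node and drive counts are just the figures from the example above):

```shell
# Quorum arithmetic for a 16-node x 16-drive deployment.
nodes=16
drives_per_node=16
n=$((nodes * drives_per_node))        # total disks: 256
read_quorum=$((n / 2))                # 128 disks must stay online to read
write_quorum=$((n / 2 + 1))           # 129 disks needed to create new objects
echo "total=$n read_quorum=$read_quorum write_quorum=$write_quorum"
```

With 8 of the 16 servers offline, 128 disks remain online, so reads still work; creating objects needs 129 disks, which is why at least 9 servers must be up.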

Limits

As with MinIO in stand-alone mode, distributed MinIO has a per-tenant limit of a minimum of 2 and a maximum of 32 servers. There is no limit on the number of disks across these servers. If you need a multi-tenant setup, you can easily spin up multiple MinIO instances managed by orchestration tools like Kubernetes, Docker Swarm etc.

Note that with distributed MinIO you can vary the number of nodes and drives as long as the limits are adhered to. For example, you can have 2 nodes with 4 drives each, 4 nodes with 4 drives each, 8 nodes with 2 drives each, 32 servers with 64 drives each, and so on.

You can also use storage classes to set custom data and parity distribution per object.
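For instance, the erasure-code parity for a storage class can be set through environment variables before starting the server. The EC:4 and EC:2 values below are only illustrative; choose parity to match your durability needs:

```shell
# Illustrative parity settings; EC:4 means 4 parity disks per object.
export MINIO_STORAGE_CLASS_STANDARD="EC:4"
# Reduced-redundancy class with lower parity.
export MINIO_STORAGE_CLASS_RRS="EC:2"
```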

Consistency Guarantees

MinIO follows a strict read-after-write and list-after-write consistency model for all I/O operations, in both distributed and standalone modes.

Get started

If you're familiar with stand-alone MinIO setup, the process remains largely the same. The MinIO server automatically switches to stand-alone or distributed mode, depending on the command-line parameters.

1. Prerequisites

Install MinIO - MinIO Quickstart Guide.

2. Run distributed MinIO

To start a distributed MinIO instance, you just need to pass drive locations as parameters to the minio server command. Then, run the same command on all the participating nodes.

Note: All nodes running distributed MinIO need to have the same access key and secret key in order to connect. To achieve this, export the access key and secret key as environment variables on all nodes before executing the minio server command.

Example 1: Start a distributed MinIO instance on 32 nodes with 32 drives each, mounted at /export1 to /export32 (pictured below), by running this command on all 32 nodes:

[Figure: Distributed MinIO, 32 nodes with 32 drives each]

GNU/Linux and macOS

export MINIO_ACCESS_KEY=<ACCESS_KEY>
export MINIO_SECRET_KEY=<SECRET_KEY>
minio server http://host{1...32}/export{1...32}

NOTE: The {1...n} syntax shown has 3 dots! Using only 2 dots ({1..32}) will be interpreted by your shell as brace expansion and won't be passed to minio server as-is, affecting the erasure-coding order, which may impact performance and high availability. Always use the ellipsis syntax {1...n} (3 dots!) for optimal erasure-code distribution.
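To see why the dot count matters, compare how bash treats the two forms (the hostnames are placeholders):

```shell
# Two dots: bash performs brace expansion before minio ever sees the argument,
# producing four separate URLs (host1 through host4).
echo http://host{1..4}/export
# Three dots: not a valid brace-expansion range, so the token is left
# untouched and reaches minio server intact.
echo http://host{1...4}/export
```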

3. Test your setup

To test this setup, access the MinIO server via a browser or via mc, the MinIO Client.
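A minimal client-side sketch with mc, assuming the deployment from Example 1 is reachable at http://host1:9000; the alias myminio, the bucket name, and the file are placeholders of our own choosing, and on older mc versions the first command may instead be mc config host add:

```shell
# Point mc at any one node of the deployment.
mc alias set myminio http://host1:9000 <ACCESS_KEY> <SECRET_KEY>
# Create a bucket and upload a file to verify writes (requires write quorum).
mc mb myminio/testbucket
mc cp ./hello.txt myminio/testbucket/
# List the object back to verify reads.
mc ls myminio/testbucket
```

These commands exercise both the write path (bucket and object creation) and the read path (listing) against the live deployment.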

Explore Further