Benchmarking

MinIO AIStor includes built-in performance testing tools to measure object operations, drive I/O, network bandwidth, site replication throughput, and client-to-server bandwidth.

For OS-level tuning recommendations, see Performance Tuning.

mc support perf

The mc support perf command runs performance tests against a MinIO AIStor deployment.

Run all performance tests:

mc support perf ALIAS

Run a specific test type:

mc support perf [object|drive|net|site-replication|client] ALIAS

Common flags

Flag          Description                                 Default
--size        Object size for uploads and downloads       64 MiB
--verbose     Display per-server statistics               false
--duration    Maximum duration for each test              10s
--concurrent  Concurrent requests per server              32
--airgap      Save results locally instead of uploading   false

Object performance test

Tests S3 PUT and GET throughput across the cluster:

mc support perf object ALIAS

Example output:

MinIO 2024.01.01, 4 servers, 16 drives, 64MiB objects, 32 threads
PUT: 2.5 GiB/s, 40 objs/s
GET: 3.2 GiB/s, 51 objs/s
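The objects-per-second figures follow directly from aggregate throughput divided by object size. A quick sanity check in Python, using the values from the sample output above:

```python
def objs_per_sec(throughput_gib_s: float, object_size_mib: float) -> float:
    """Convert aggregate throughput (GiB/s) to an objects-per-second rate."""
    return throughput_gib_s * 1024 / object_size_mib

print(round(objs_per_sec(2.5, 64)))  # PUT: 40 objects/s
print(round(objs_per_sec(3.2, 64)))  # GET: 51 objects/s
```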

Use --verbose to see per-server breakdown:

mc support perf object ALIAS --verbose

Expected results:

  • Throughput scales linearly with server count.
  • GET is typically 20-30% faster than PUT.
  • Consistent per-server numbers indicate balanced load.
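One way to make "consistent per-server numbers" concrete is to check each server's throughput against the cluster mean. A minimal sketch, where the per-server values are illustrative, not real mc output:

```python
def is_balanced(per_server_gib_s: list[float], tolerance: float = 0.10) -> bool:
    """Return True if every server is within `tolerance` of the mean throughput."""
    mean = sum(per_server_gib_s) / len(per_server_gib_s)
    return all(abs(v - mean) / mean <= tolerance for v in per_server_gib_s)

print(is_balanced([0.62, 0.64, 0.61, 0.63]))  # True: load is balanced
print(is_balanced([0.80, 0.62, 0.30, 0.63]))  # False: one server lags badly
```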

Drive performance test

Tests raw disk I/O performance on each drive:

mc support perf drive ALIAS

Additional flags for the drive test:

Flag          Description             Default
--filesize    Total data per drive    1 GiB
--blocksize   Read/write block size   4 MiB
--serial      Test drives one-by-one  false

To troubleshoot slow drives:

  1. Check for failing drives: mc admin drive info ALIAS
  2. Verify storage configuration.
  3. Check for thermal throttling.
  4. Review filesystem fragmentation.

Network performance test

Tests inter-node network bandwidth. Requires multiple servers in the cluster.

mc support perf net ALIAS

Results should approach the physical capacity of the network link. To troubleshoot network issues:

  1. Check switch configuration.
  2. Verify MTU settings (jumbo frames if supported).
  3. Check for packet loss: ping -c 100 -s 1472 <node>
  4. Review network interface errors with ethtool -S <interface>.
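When judging whether results approach physical capacity, remember that link speeds are quoted in bits per second while mc reports binary bytes. A rough conversion, ignoring protocol overhead:

```python
def link_capacity_gib_s(link_gbit_s: float) -> float:
    """Theoretical max payload rate of a network link in GiB/s (no overhead)."""
    return link_gbit_s * 1e9 / 8 / 2**30

print(round(link_capacity_gib_s(10), 2))   # 10 GbE  -> ~1.16 GiB/s
print(round(link_capacity_gib_s(100), 2))  # 100 GbE -> ~11.64 GiB/s
```

Real-world results land somewhat below these ceilings due to TCP and TLS overhead.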

Site replication performance test

Tests cross-site replication bandwidth for multi-site deployments:

mc support perf site-replication ALIAS

Client performance test

Tests bandwidth from the mc client to the MinIO AIStor cluster:

mc support perf client ALIAS

Saving results

Airgapped mode

Save results locally instead of uploading to SUBNET:

mc support perf ALIAS --airgap

Output is saved to ALIAS-perf_YYYYMMDDHHMMSS.zip.

JSON output

Get results in JSON format for programmatic consumption:

mc support perf ALIAS --json
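The JSON output can then be post-processed with any JSON tool. A hedged sketch in Python — the payload and key names below (`put`, `throughput`) are illustrative placeholders, not the documented schema; inspect the actual --json output to find the real keys:

```python
import json

# Illustrative payload only; the real schema of `mc support perf --json`
# may differ -- check actual output before relying on these keys.
sample = '{"type": "object", "put": {"throughput": 2684354560}}'

result = json.loads(sample)
put_gib_s = result["put"]["throughput"] / 2**30  # bytes/s -> GiB/s
print(f"PUT: {put_gib_s:.1f} GiB/s")  # PUT: 2.5 GiB/s
```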

Best practices

When to benchmark

  1. Initial deployment: Establish baseline performance.
  2. Hardware changes: After adding or replacing drives or nodes.
  3. Configuration changes: After tuning parameters.
  4. Troubleshooting: When investigating performance issues.
  5. Capacity planning: Before adding workload.

Benchmark conditions

For consistent results:

  1. Run during low-traffic periods.
  2. Ensure cluster is healthy: mc admin info ALIAS
  3. Run multiple iterations.
  4. Test with representative object sizes.
  5. Document system configuration.
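For "run multiple iterations", the median is a more robust summary than the mean, because one noisy run skews an average. A small sketch with made-up throughput samples:

```python
import statistics

# Hypothetical PUT throughput (GiB/s) from five benchmark iterations;
# one run coincided with a background job.
runs = [2.5, 2.4, 2.6, 1.1, 2.5]

print(statistics.mean(runs))    # pulled down by the outlier
print(statistics.median(runs))  # 2.5 -- robust to the bad run
```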

Tracking results over time

Date         Object PUT   Object GET   Drive R/W   Network
2024-01-01   2.5 GiB/s    3.2 GiB/s    1.1 GiB/s   9.5 Gb/s
2024-02-01   2.4 GiB/s    3.1 GiB/s    1.1 GiB/s   9.4 Gb/s
2024-03-01   2.0 GiB/s    2.8 GiB/s    0.8 GiB/s   9.5 Gb/s

A significant drop (greater than 10%) warrants investigation.
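When results are tracked programmatically, the greater-than-10% rule can be automated. A minimal sketch using the Object PUT figures from the tracking table:

```python
def pct_drop(baseline: float, current: float) -> float:
    """Percentage decrease from baseline to current (positive = regression)."""
    return (baseline - current) / baseline * 100

# Object PUT: 2.5 GiB/s baseline (2024-01-01) vs 2.0 GiB/s (2024-03-01).
drop = pct_drop(2.5, 2.0)
print(f"{drop:.0f}% drop")  # 20% drop
if drop > 10:
    print("Investigate: regression exceeds 10%")
```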

Warp benchmarking tool

For detailed S3 benchmarking, use warp:

go install github.com/minio/warp@latest

warp mixed --host=server1:9000,server2:9000 \
  --access-key=minioadmin --secret-key=minioadmin

Warp provides:

  • Detailed latency percentiles
  • Mixed workload simulation
  • Configurable object size distributions
  • Multi-client coordination
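The latency percentiles warp reports are easy to reason about: pN is the smallest value at or below which N% of samples fall. A simplified nearest-rank sketch with made-up request latencies:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest sample >= p% of the data."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 14, 90, 13, 12, 16, 14, 13]  # one slow request
print(percentile(latencies_ms, 50))  # 13 -- typical request
print(percentile(latencies_ms, 99))  # 90 -- tail latency
```

The p99 exposes the slow outlier that the median hides, which is why tail percentiles matter for latency-sensitive workloads.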