mc admin cordon

Syntax

The mc admin cordon command removes a node from active service for maintenance. By default, the command initiates a graceful drain before fully cordoning the node, allowing in-flight client requests to complete before the node goes offline.

Once cordoned, a node:

  • Is disconnected from the cluster’s internal grid communication.
  • Rejects all incoming RPC calls immediately.
  • Does not participate in cluster operations.
  • Does not receive metadata updates or replication tasks.

See Node Maintenance for a complete procedure on cordoning and uncordoning nodes.

mc admin [GLOBALFLAGS] cordon [FLAGS] ALIAS NODE

Parameters

ALIAS

Required

The alias of the AIStor cluster containing the node to cordon.

NODE

Required

The address of the node to cordon, in the format hostname:port.

For example: node1.example.com:9000

--no-drain

Optional

Immediately terminate all in-progress requests to this node and transition directly to the cordoned state.

Global flags

This command supports any of the global flags.

Examples

Cordon a node with graceful drain

The following command cordons node1.example.com:9000 on the myaistor cluster with the default drain behavior:

mc admin cordon myaistor node1.example.com:9000

The command returns:

Node has started draining

Cordon a node immediately

The following command immediately cordons node1.example.com:9000 without waiting for existing connections to complete:

mc admin cordon --no-drain myaistor node1.example.com:9000

The command returns:

Node cordoned successfully

Permissions

This command requires the admin:ServiceCordon permission.
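A policy granting only this permission might look like the following sketch. The policy name cordon-admin and the file name are illustrative, and the policy document format is assumed to follow the standard admin-action policy shape:

```shell
# Hypothetical policy file granting only the cordon permission.
cat > cordon-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["admin:ServiceCordon"]
    }
  ]
}
EOF

# Create the policy on the deployment, then attach it to the
# users or groups that perform node maintenance.
mc admin policy create myaistor cordon-admin cordon-policy.json
```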

Behavior

State persistence

AIStor persists the cordon state to .minio.sys/cordoned.json. If a draining node restarts before the drain completes, it automatically transitions to the fully cordoned state. The state persists until explicitly cleared with mc admin uncordon.
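To clear the persisted state and return the node to service, run the corresponding uncordon command. This sketch assumes mc admin uncordon takes the same ALIAS NODE arguments as mc admin cordon:

```shell
# Return the previously cordoned node to active service.
mc admin uncordon myaistor node1.example.com:9000
```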

Quorum protection

Before cordoning, AIStor validates that the operation does not cause the cluster to lose quorum. If cordoning the node would reduce the cluster below the minimum required nodes for read and write operations, the command fails with an error similar to the following:

cluster would lose quorum
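Before cordoning, you can review the current node topology to confirm the cluster can tolerate taking a node offline. This sketch uses mc admin info, which reports per-node status:

```shell
# Review node status and confirm enough nodes remain online
# to preserve read and write quorum.
mc admin info myaistor

# If the cluster can tolerate the loss, proceed with the cordon.
mc admin cordon myaistor node1.example.com:9000
```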