Object Lifecycle Management

MinIO Object Lifecycle Management allows creating rules for time- or date-based automatic transition or expiry of objects. For object transition, MinIO automatically moves the object to a configured remote storage tier. For object expiry, MinIO automatically deletes the object.

MinIO lifecycle management is built for behavior and syntax compatibility with AWS S3 Lifecycle Management. For example, you can export S3 lifecycle management rules and import them into MinIO or vice-versa. MinIO uses JSON to describe lifecycle management rules, and conversion to or from XML may be required.
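For example, assuming an alias named myminio already configured with mc alias set (the alias and bucket names here are placeholders), a sketch of a round trip:

    # Export a bucket's lifecycle configuration as JSON
    mc ilm export myminio/mydata > lifecycle.json

    # Import the rules into a bucket on another deployment
    mc ilm import otherminio/mydata < lifecycle.json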

Object Transition (“Tiering”)

MinIO supports creating object transition lifecycle management rules, where MinIO can automatically move an object to a remote storage “tier”. MinIO supports any S3-compatible service as a remote tier, in addition to the following public cloud storage services:

  • Amazon S3

  • Google Cloud Storage

  • Microsoft Azure Blob Storage

MinIO object transition supports use cases like moving aged data from MinIO clusters in private or public cloud infrastructure to low-cost private or public cloud storage solutions. MinIO manages retrieving tiered objects on-the-fly without any additional application-side logic.

Use the mc admin tier command to create a remote target for tiering data to a supported Cloud Service Provider object storage. You can then use the mc ilm add --transition-days command to transition objects to the remote tier after a specified number of calendar days.
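A minimal sketch, using placeholder alias, bucket, and credential values; exact flag names can vary across mc releases, so verify against mc admin tier add --help and mc ilm add --help:

    # Register an S3-compatible remote as a tier named COLDTIER
    mc admin tier add s3 myminio COLDTIER \
        --endpoint https://s3.amazonaws.com \
        --access-key ACCESSKEY --secret-key SECRETKEY \
        --bucket remote-bucket --prefix mydeployment/

    # Transition objects in myminio/mydata to COLDTIER after 90 days
    mc ilm add --transition-days 90 --storage-class COLDTIER myminio/mydata

The --prefix value shown here is the optional human-readable prefix discussed in the next section.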

Exclusive Access to Remote Data

MinIO requires exclusive access to the transitioned data on the remote storage tier. MinIO ignores any objects in the remote bucket or bucket prefix not explicitly managed by the MinIO deployment. Automatic transition and transparent object retrieval depend on the following assumptions:

  • No external mutation, migration, or deletion of objects on the remote storage.

  • No lifecycle management rules (e.g. transition or expiration) on the remote storage bucket.

MinIO stores all transitioned objects in the remote storage bucket or resource under a unique per-deployment prefix. This generated value is not intended to support identifying the source deployment when inspecting the remote backend. MinIO supports an additional, optional human-readable prefix when configuring the remote target, which can simplify operations related to diagnostics, maintenance, or disaster recovery.

MinIO recommends specifying this optional prefix for remote storage tiers which contain other data, including transitioned objects from other MinIO deployments. This tutorial includes the necessary syntax for setting this prefix.

Availability of Remote Data

MinIO creates metadata for each transitioned object that identifies its location on the remote storage. This metadata is required for accessing the object, such that applications cannot access a transitioned object independently of MinIO. Availability of the transitioned data therefore depends on the same core protections that erasure coding and distributed deployment topologies provide for all objects on the MinIO deployment. Using object transition does not provide any additional business continuity or disaster recovery benefits.

Workloads that require BC/DR protections should implement MinIO Server-Side replication. Replication ensures that objects remain preserved on the remote replication site, such that you can resynchronize from the remote in the event of partial or total data loss. See Resynchronization (Disaster Recovery) for complete documentation on using replication to recover after partial or total data loss.

Versioned Buckets

MinIO adopts S3 behavior for transition rules on versioned buckets. Specifically, MinIO by default applies the transition operation to the current object version.

To transition noncurrent object versions, specify the --noncurrentversion-transition-days and --noncurrentversion-transition-storage-class options when creating the transition rule.
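For example, reusing the placeholder COLDTIER target and alias from the sketch above:

    # Transition noncurrent object versions to COLDTIER 30 days
    # after they become noncurrent
    mc ilm add --noncurrentversion-transition-days 30 \
        --noncurrentversion-transition-storage-class COLDTIER \
        myminio/mydata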

Object Expiration

MinIO lifecycle management supports expiring objects on a bucket. Object “expiration” involves performing a DELETE operation on the object. For example, you can create a lifecycle management rule to expire any object older than 365 days.

Use the mc ilm add command with one of the following command-line options to create new expiration rules on a bucket:

  • --expiry-days: expire the object a specified number of days after its creation.

  • --expiry-date: expire the object on a specific calendar date.
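A sketch with placeholder names; verify flag names against mc ilm add --help for the mc release in use:

    # Expire objects 365 days after creation
    mc ilm add --expiry-days 365 myminio/mydata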

For buckets with replication configured, MinIO does not replicate objects deleted by a lifecycle management expiration rule. See Replication of Delete Operations for more information.

Versioned Buckets

MinIO adopts S3 behavior for expiration rules on versioned buckets. MinIO has two specific default behaviors for versioned buckets:

  • MinIO applies the expiration option only to the current object version, creating a DeleteMarker as is normal for delete operations on versioned buckets.

    To expire noncurrent object versions, specify the --noncurrentversion-expiration-days option when creating the expiration rule.

  • MinIO does not expire DeleteMarkers even if no other versions of that object exist.

    To expire delete markers when there are no remaining versions for that object, specify the --expired-object-delete-marker option when creating the expiration rule (see the sketch after this list).
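Both options can be combined in a single rule. A sketch, again with placeholder alias and bucket names:

    # Expire noncurrent versions after 30 days, and remove delete
    # markers once no other versions of the object remain
    mc ilm add --noncurrentversion-expiration-days 30 \
        --expired-object-delete-marker \
        myminio/mydata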

Lifecycle Management Object Scanner

MinIO uses a built-in scanner to actively check objects against all configured lifecycle management rules. The scanner is a low-priority process that yields to high I/O workloads to avoid degrading cluster performance. The scanner may therefore not detect an object as eligible for a configured transition or expiration rule until some time after the rule's transition or expiration period has passed.

Delayed application of lifecycle management rules is typically associated with limited node resources and growing cluster size. Scanner speed tends to slow as a cluster grows, since more time is required to visit all buckets and objects. This can be exacerbated if the cluster hardware is undersized for its regular workload, as the scanner yields to high cluster load to avoid performance loss. Consider regularly checking cluster metrics, capacity, and resource usage to ensure the cluster hardware scales alongside cluster and workload growth.
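For example, mc can generate a Prometheus scrape configuration for collecting deployment metrics (the alias is a placeholder; the specific scanner-related metrics exposed vary by MinIO server release):

    # Generate a Prometheus scrape configuration for the deployment
    mc admin prometheus generate myminio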