Storage Class Reference

This page documents DirectPV storage class configuration, including the default storage class and how to create custom storage classes.

Default storage class

DirectPV installs a default storage class named directpv-min-io.

Definition

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: directpv-min-io
  labels:
    application-name: directpv.min.io
    application-type: CSIDriver
    directpv.min.io/created-by: kubectl-directpv
    directpv.min.io/version: v1beta1
  finalizers:
    - foregroundDeletion
provisioner: directpv-min-io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
allowedTopologies:
  - matchLabelExpressions:
      - key: directpv.min.io/identity
        values:
          - directpv-min-io
parameters:
  csi.storage.k8s.io/fstype: xfs

Parameters

| Parameter | Value | Description |
| --- | --- | --- |
| provisioner | directpv-min-io | The CSI driver that provisions volumes. |
| reclaimPolicy | Delete | Persistent volumes are deleted when the PVC is deleted. |
| volumeBindingMode | WaitForFirstConsumer | Delays PV binding until a pod using the PVC is created, ensuring topology-aware scheduling. |
| allowVolumeExpansion | true | Volumes can be expanded online without restarting pods. |
| allowedTopologies | Identity selector | Restricts provisioning to nodes carrying the directpv.min.io/identity: directpv-min-io label. |
| csi.storage.k8s.io/fstype | xfs | Filesystem type. DirectPV only supports XFS. |

Volume binding mode

DirectPV uses the WaitForFirstConsumer volume binding mode. This mode delays volume binding and provisioning until a pod using the PersistentVolumeClaim is created, as shown in the example after the list below.

This behavior ensures:

  • PersistentVolumes are provisioned on nodes matching the pod’s scheduling constraints.
  • Topology-aware scheduling based on resource requirements, node selectors, pod affinity/anti-affinity, and taints/tolerations.
  • Optimal data locality for direct-attached storage.
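
For example, with a claim and a pod like the following (the names here are illustrative), the PersistentVolumeClaim stays Pending until the pod is scheduled; only then does DirectPV provision a volume on a drive local to the selected node:

# Hypothetical claim using the default DirectPV storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wffc-demo-pvc
spec:
  volumeMode: Filesystem
  storageClassName: directpv-min-io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Mi
---
# Hypothetical pod that consumes the claim and triggers provisioning.
apiVersion: v1
kind: Pod
metadata:
  name: wffc-demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: wffc-demo-pvc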

Custom storage classes

Create custom storage classes to control drive selection using labels.

Create a custom storage class

Use the create-storage-class.sh script:

create-storage-class.sh <NAME> <DRIVE-LABELS>...

| Argument | Description |
| --- | --- |
| NAME | New storage class name. |
| DRIVE-LABELS | One or more drive labels in the format directpv.min.io/key: value. |

Examples

Create a fast tier storage class:

create-storage-class.sh fast-tier-storage 'directpv.min.io/tier: fast'

Create a thick provisioning storage class:

create-storage-class.sh directpv-thick-provisioning 'directpv.min.io/skip-thin-provisioning: "true"'

Create a storage class with multiple labels:

create-storage-class.sh nvme-hot-storage 'directpv.min.io/tier: hot' 'directpv.min.io/drive-type: nvme'
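
The script creates a StorageClass modeled on the default definition shown above, with the supplied drive labels added to its parameters. As a rough sketch (the exact metadata emitted by the script may differ), the fast-tier-storage class created above would look something like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-tier-storage
provisioner: directpv-min-io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
allowedTopologies:
  - matchLabelExpressions:
      - key: directpv.min.io/identity
        values:
          - directpv-min-io
parameters:
  csi.storage.k8s.io/fstype: xfs
  # Drive label passed to create-storage-class.sh
  directpv.min.io/tier: fast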

Use custom storage class in PVC

Reference the custom storage class in your PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  volumeMode: Filesystem
  storageClassName: fast-tier-storage
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Mi
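
After the pod that consumes this claim is scheduled and the volume is provisioned, you can check which node and drive the volume landed on with the DirectPV plugin. This assumes a plugin version that provides the list volumes subcommand:

kubectl directpv list volumes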

Drive selection parameters

Custom storage classes use parameters to select specific drives.

Available parameters

| Parameter | Description |
| --- | --- |
| directpv.min.io/access-tier | Select drives by access tier (Default, Warm, Hot, Cold). |
| directpv.min.io/skip-thin-provisioning | Set to "true" to disable thin provisioning. |
| Custom labels | Any custom label applied to drives using kubectl directpv label drives. |
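
For example, to select drives by a custom label, first label the drives and then create a matching storage class. The drive name, label value, and class name below are illustrative:

kubectl directpv label drives --drives=nvme0n1 directpv.min.io/drive-type=nvme

create-storage-class.sh nvme-storage 'directpv.min.io/drive-type: nvme'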

Access tiers

Label drives with access tiers to categorize by performance:

| Tier | Description |
| --- | --- |
| Default | Standard access tier. |
| Warm | Warm storage tier. |
| Hot | High-performance storage tier. |
| Cold | Archival storage tier. |

Label drives with an access tier:

kubectl directpv label drives --drives=nvme1n1 directpv.min.io/access-tier=hot

Create storage class for hot tier:

create-storage-class.sh hot-tier-storage 'directpv.min.io/access-tier: hot'

Thin vs thick provisioning

By default, DirectPV uses thin provisioning: volumes are provisioned based on the requested size rather than the drive’s actual free capacity, so the total size requested across volumes on a drive can exceed the capacity it physically has free.

To use thick provisioning, create a storage class with skip-thin-provisioning:

create-storage-class.sh thick-storage 'directpv.min.io/skip-thin-provisioning: "true"'

With thick provisioning, a volume is provisioned only on a drive whose actual free capacity can accommodate the requested size.

Drive selection workflow

When a pod consuming a DirectPV-backed PersistentVolumeClaim is scheduled, DirectPV selects a drive for the volume using this process:

  1. Validate the filesystem type is xfs.
  2. Validate the access tier if specified.
  3. Check the DirectPVDrive CRD objects for an existing volume. If found, schedule the volume on the first drive that already contains it.
  4. If no existing volume, filter drives by:
    • Requested capacity
    • Access tier (if requested)
    • Topology constraints (if requested)
    • Custom labels (if using custom storage class)
  5. If multiple drives match, select the drive with the greatest free capacity.
  6. If multiple drives have equal free capacity, select one randomly.

If no drives match the criteria, DirectPV returns an error and Kubernetes retries the request.
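
To inspect the candidate drives and their free capacity when troubleshooting such errors, you can list them with the DirectPV plugin. This assumes a plugin version that provides the list drives subcommand:

kubectl directpv list drives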

Reserved label keys

The following label keys are reserved and cannot be used for custom drive labels:

  • directpv.min.io/node
  • directpv.min.io/drive-name
  • directpv.min.io/drive
  • directpv.min.io/access-tier
  • directpv.min.io/version
  • directpv.min.io/created-by
  • directpv.min.io/pod.name
  • directpv.min.io/pod.namespace
  • directpv.min.io/pod.statefulset
  • directpv.min.io/identity
  • directpv.min.io/rack
  • directpv.min.io/zone
  • directpv.min.io/region
  • directpv.min.io/migrated
  • directpv.min.io/request-id
  • directpv.min.io/suspend
  • directpv.min.io/image-tag
  • directpv.min.io/skip-thin-provisioning

CSI driver specification

| Property | Value |
| --- | --- |
| CSI driver name | directpv-min-io |
| fsGroupPolicy | ReadWriteOnceWithFSType |
| requiresRepublish | false |
| podInfoOnMount | true |
| attachRequired | false |
| storageCapacity | false |
| Supported modes | Persistent, Ephemeral |
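
Expressed as a Kubernetes CSIDriver object, these properties correspond roughly to the following. This is a sketch assembled from the table above; the object actually installed by DirectPV may carry additional metadata:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: directpv-min-io
spec:
  fsGroupPolicy: ReadWriteOnceWithFSType
  requiresRepublish: false
  podInfoOnMount: true
  attachRequired: false
  storageCapacity: false
  volumeLifecycleModes:
    - Persistent
    - Ephemeral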