# Deploy AIStor on Kubernetes
This tutorial deploys AIStor onto Kubernetes distributions that conform to the upstream Kubernetes API and functionality. The steps in this procedure may also work on forked Kubernetes distributions.

This procedure requires installing Kubernetes operators and associated resources, including CustomResourceDefinitions, StatefulSets, and Secrets, into new or existing namespaces. You must perform the operations in this procedure as a user with broad permissions to create resources in multiple namespaces.
## Deploy AIStor using Helm
This procedure documents installation on Kubernetes with the AIStor Helm Charts. MinIO recommends Helm version 3.17 or later.
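As a quick sanity check before proceeding (not part of the official procedure), you can confirm that your local Helm client meets that version recommendation:

```shell
# Print the installed Helm client version
helm version --short
```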
1. Retrieve your license file.

   Log in to SUBNET and select the **License** button in the **Deployments** view.

   *(Image: the Account License modal)*

   Save the contents of the file in a secure location for use in a later step. You need the decoded JWT token value from the license file (beginning with `eyJ...`) for installation.
2. Add the AIStor Helm repository:

   ```shell
   helm repo add minio https://helm.min.io/
   ```
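   After adding the repository, you can refresh the local chart index and confirm that the AIStor charts are visible. This is a verification sketch, not a required step:

   ```shell
   # Refresh the local cache of all configured chart repositories
   helm repo update

   # List charts published in the repositories that match "minio"
   helm search repo minio
   ```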
3. (Optional) Create a YAML manifest to customize the `minio/aistor-objectstore-operator` chart.

   If you need to customize the operator deployment beyond the default settings, use your preferred text editor to create a YAML manifest for the chart named `aistor-objectstore-operator-values.yaml`. Refer to the AIStor Operator values reference for available customizations.
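   If you are unsure which settings the operator chart exposes, you can dump its default values as a starting point for your customization file (assuming the repository added in the previous step):

   ```shell
   # Write the operator chart's default values to a local file for editing
   helm show values minio/aistor-objectstore-operator > aistor-objectstore-operator-values.yaml
   ```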
4. Install the chart to the `aistor` namespace with your SUBNET license.

   The `minio/aistor-objectstore-operator` chart contains the necessary Kubernetes resources for deploying AIStor Server resources through the `objectstore` chart.

   ```shell
   helm install aistor minio/aistor-objectstore-operator \
     -n aistor --create-namespace \
     --set license="eyJhbGciOiJFUzM4NCIsInR..."
   ```

   Replace the license value with your decoded JWT token from SUBNET.

   If you created a customization file in the previous step, add it to the command:

   ```shell
   helm install aistor minio/aistor-objectstore-operator \
     -n aistor --create-namespace \
     --set license="eyJhbGciOiJFUzM4NCIsInR..." \
     -f aistor-objectstore-operator-values.yaml
   ```

   If successful, the command outputs a summary of the installed resources.
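   To keep the JWT out of your shell history, you can read the license token from a local file instead of pasting it inline. This is a sketch that assumes you saved the decoded token to a file named `minio.license` (a hypothetical filename):

   ```shell
   # Install the operator, reading the license token from a local file
   helm install aistor minio/aistor-objectstore-operator \
     -n aistor --create-namespace \
     --set license="$(cat minio.license)"

   # Confirm the release was created
   helm list -n aistor
   ```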
5. Validate the installation by running the following command:

   ```shell
   kubectl get all -n aistor
   ```

   The output should show running pods similar to the following:

   ```
   NAME                                         READY   STATUS    RESTARTS   AGE
   pod/adminjob-operator-cfc97d9f-hjbp5         1/1     Running   0          4m16s
   pod/object-store-operator-78c9f84b85-kmwlv   1/1     Running   0          4m16s

   NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
   service/object-store-operator   ClusterIP   10.43.210.230   <none>        4221/TCP   4m16s

   NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/adminjob-operator       1/1     1            1           4m16s
   deployment.apps/object-store-operator   1/1     1            1           4m16s

   NAME                                               DESIRED   CURRENT   READY   AGE
   replicaset.apps/adminjob-operator-cfc97d9f         1         1         1       4m16s
   replicaset.apps/object-store-operator-78c9f84b85   1         1         1       4m16s
   ```
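   Rather than repeatedly polling `kubectl get all`, you can block until the operator deployments report ready. The deployment names below are taken from the example output above:

   ```shell
   # Wait up to 5 minutes for each operator deployment to become Available
   kubectl -n aistor wait --for=condition=Available \
     deployment/object-store-operator deployment/adminjob-operator \
     --timeout=300s
   ```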
6. Configure and deploy the `minio/aistor-objectstore` chart.

   Run the following command to save a copy of the default chart values:

   ```shell
   helm show values minio/aistor-objectstore > aistor-objectstore-values.yaml
   ```

   Open the file with your preferred text editor and modify the values to reflect your deployment. Remove any default or unmodified values so that the file contains only your changes. The following example shows a minimal set of fields for deploying an 8x8 object store using the AIStor Volume Manager for storage provisioning:

   ```yaml
   objectStore:
     name: primary-object-store      # Name of the AIStor Server
     pools:
       - name: pool-0
         servers: 8                  # Number of servers/pods to deploy
         volumesPerServer: 8         # Number of Persistent Volumes per server/pod
         size: 2Ti                   # Size of each PV
         # storageClassName: directpv-min-io  # Storage Class assigned to each PV
     services:
       minio:
         serviceType: NodePort
         nodePort: 31000             # Select an available NodePort in the supported range of 30000-32767
   ```

   The following command deploys an AIStor Object Store with the name and namespace of `primary-object-store`:

   ```shell
   helm install primary-object-store minio/aistor-objectstore \
     -n primary-object-store --create-namespace \
     -f aistor-objectstore-values.yaml
   ```
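   Pod creation for a new object store can take several minutes. You can watch the rollout and review the release status; the release and namespace names match the example deployment above:

   ```shell
   # Watch the object store pods come up; press Ctrl-C to exit
   kubectl get pods -n primary-object-store -w

   # Review the Helm release status
   helm status primary-object-store -n primary-object-store
   ```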
7. Connect to the deployment.

   The previous step configures the S3 API services via `NodePort`, so you can access them through the IP address or hostname of any worker node in the Kubernetes cluster. Use `kubectl get nodes -o wide` or a similar command to determine the appropriate IP address or hostname to use for access.
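   As a connection sketch, the following derives a worker node's internal IP with `kubectl` and registers the deployment with the MinIO Client (`mc`). The alias name and the credential placeholders are assumptions; substitute the root credentials configured for your object store:

   ```shell
   # Grab the first node's InternalIP address
   NODE_IP=$(kubectl get nodes \
     -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

   # Register the deployment with mc, using the NodePort set in the values file
   # ROOTUSER / ROOTPASSWORD are placeholders for your configured credentials
   mc alias set primary "http://${NODE_IP}:31000" ROOTUSER ROOTPASSWORD

   # Verify connectivity to the object store
   mc admin info primary
   ```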