Troubleshooting Upgrade Issues
This page documents common issues encountered when upgrading from MinIO Community Edition to AIStor and their solutions.
KES root CA certificate changes
The AIStor ObjectStore CRD handles CA certificates for KES somewhat differently than the MinIO Community Operator.
Problem
The MinIO Community Operator does not support adding CA certificates to the KES container. As a workaround, deployments often added the CA certificate through the clientCertSecret field, even though that field is intended for a client certificate.
Old configuration (MinIO Community Edition)
The following KES configuration illustrates the previous approach:
kes:
  replicas: 2
  configuration:
    address: :7373
    admin:
      identity: ${MINIO_KES_IDENTITY}
    tls:
      key: /tmp/kes/server.key # Path to the TLS private key
      cert: /tmp/kes/server.crt # Path to the TLS certificate
    keystore:
      vault:
        endpoint: "https://vault.example.com"
        engine: "kv"
        version: "v2"
        approle:
          engine: ""
          id: "app-role-id"
          secret: "app-role-secret"
          retry: 15s
        tls: # The Vault client TLS configuration for mTLS authentication and certificate verification
          key: "" # Path to the TLS client private key for mTLS authentication to Vault
          cert: "" # Path to the TLS client certificate for mTLS authentication to Vault
          ca: "/tmp/kes/vaultroot-ca.crt" # Path to one or multiple PEM root CA certificates
        status: # Vault status configuration. The server will periodically reach out to Vault to check its status.
          ping: 10800s # Duration until the server checks Vault's status again.
  clientCertSecret:
    name: vaultroot-ca
    type: opaque
Solution
AIStor does not allow this workaround.
You must instead add the CA certificate using the trustedCAs field.
This mounts the CA certificate in the Pod as a system root CA, so KES trusts certificates signed by that CA.
New configuration (AIStor)
In AIStor this configuration should look like this:
kes:
  replicas: 2
  configuration:
    address: :7373
    admin:
      identity: ${MINIO_KES_IDENTITY}
    tls:
      key: /tmp/kes/server.key # Path to the TLS private key
      cert: /tmp/kes/server.crt # Path to the TLS certificate
    keystore:
      vault:
        endpoint: "https://vault.example.com"
        engine: "kv"
        version: "v2"
        approle:
          engine: ""
          id: "app-role-id"
          secret: "app-role-secret"
          retry: 15s
        # The entire Vault client TLS section can be removed, because the CA certificate is trusted as a system root CA
  trustedCAs:
    name: vaultroot-ca
    type: opaque
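The trustedCAs field references a Kubernetes Secret. If you create that Secret from scratch rather than reusing the existing one, a minimal sketch (the local file name vault-root-ca.pem is illustrative):

kubectl create secret generic vaultroot-ca -n $NAMESPACE \
  --from-file=public.crt=vault-root-ca.pem

Using public.crt as the key name up front avoids the renaming step described in the next section.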
Certificate naming requirements
AIStor requires the certificate in the Secret to be stored under the key public.crt (for type opaque).
Secrets created for the old clientCertSecret workaround often used arbitrary key names (the example above used vaultroot-ca.crt).
To resolve this, edit the secret to rename the certificate field to public.crt:
kubectl edit secret -n $NAMESPACE SECRET_NAME
In the editor that opens, locate the data: section and rename the certificate field from its current name (for example, vaultroot-ca.crt) to public.crt.
Save and exit the editor to apply the changes.
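Alternatively, a non-interactive sketch that performs the same rename, assuming the Secret is named vaultroot-ca and the existing key is vaultroot-ca.crt as in the example above:

# Copy the existing certificate data into a public.crt key, then drop the old key
CERT=$(kubectl -n $NAMESPACE get secret vaultroot-ca -o jsonpath='{.data.vaultroot-ca\.crt}')
kubectl -n $NAMESPACE patch secret vaultroot-ca --type=json \
  -p "[{\"op\":\"add\",\"path\":\"/data/public.crt\",\"value\":\"$CERT\"},{\"op\":\"remove\",\"path\":\"/data/vaultroot-ca.crt\"}]"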
OperatorHub to Helm subpath issues
Problem
When upgrading from OperatorHub to Helm (or vice-versa), the object store appears empty or IAM seems to be reset.
Cause
The Helm chart's default subPath is data, so all pods mount /export0/data.
The OperatorHub installation does not set this field, so all data is stored directly in /export0.
When upgrading from one installation method to the other, the object store reads data from the wrong directory. Root credentials still work, but IAM appears to be reset.
Solution
When upgrading from OperatorHub to Helm, you must explicitly set the subPath to an empty string in your aistor-objectstore-values.yaml:
pools:
  - name: pool-0
    volumeClaimTemplate:
      spec:
        # ... other settings
    subPath: ''
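If you are unsure which layout your existing installation uses, one way to check is to inspect the volume mounts on a running object store pod; any configured subPath appears alongside the mountPath (POD_NAME is a placeholder):

# Prints the volumeMounts of the first container, including any subPath such as 'data'
kubectl -n $NAMESPACE get pod POD_NAME -o jsonpath='{.spec.containers[0].volumeMounts}'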
PolicyBinding ownership errors
Problem
During Helm installation, you receive errors about the policybindings.sts.min.io CRD already existing or being owned by another release.
Cause
Both AIStor and MinIO Community Edition use the policybindings.sts.min.io custom resource.
When upgrading, ownership of this CRD needs to be transferred to the new Helm installation.
Solution
If using Helm 3.17 or later, use the --take-ownership flag:
helm install --take-ownership -n aistor aistor minio/aistor-operator ...
If using an older version of Helm that does not support --take-ownership, manually patch the CRD:
kubectl patch crd policybindings.sts.min.io --type='merge' \
-p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"aistor","meta.helm.sh/release-namespace":"aistor"}}}'
Make sure the release name and namespace match your helm install command.
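To confirm the ownership metadata before re-running helm install, you can inspect the CRD annotations. Depending on your Helm version, Helm may also check the app.kubernetes.io/managed-by label; the second command below is a hedged sketch for that case:

# Show the Helm release annotations on the CRD
kubectl get crd policybindings.sts.min.io -o jsonpath='{.metadata.annotations}'

# If helm install still reports an ownership conflict, add the managed-by label
kubectl label crd policybindings.sts.min.io app.kubernetes.io/managed-by=Helm --overwrite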
Upgrade tool cannot connect to cluster
Problem
The upgrade tool fails with connection errors when trying to generate the aistor-objectstore-values.yaml file.
Cause
The upgrade tool needs access to your Kubernetes cluster to read the current Tenant configuration. The current Kubernetes context may not be set correctly, or the kubeconfig may not be accessible.
Solution
- Verify your current Kubernetes context:

  kubectl config current-context

- Switch to the correct context if needed:

  kubectl config use-context CONTEXT_NAME

- Ensure the upgrade tool can access your kubeconfig:

  docker run --pull=always --rm -v ~/.kube/config:/root/.kube/config \
    quay.io/minio/aistor/operator-migration:latest generate-helm \
    --namespace $NAMESPACE

- For airgapped environments, ensure you are using the correct registry path:

  docker run --pull=always --rm -v ~/.kube/config:/root/.kube/config \
    registry.example.local/aistor/operator-migration:latest generate-helm \
    --namespace $NAMESPACE
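Before re-running the tool, a quick sanity check is to confirm that the kubeconfig you are mounting can reach the cluster at all:

# Verifies that the kubeconfig resolves to a reachable cluster under the current context
kubectl --kubeconfig ~/.kube/config cluster-info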
Object store fails to initialize after upgrade
Problem
After upgrading and recreating the object store, the ObjectStore resource does not reach Initialized status.
Cause
This can occur for several reasons:
- Persistent Volume Claims were not preserved correctly
- Configuration secrets are missing or incorrect
- License is invalid or not properly configured
- Resource constraints prevent pods from starting
Solution
- Check the ObjectStore status:

  kubectl -n $NAMESPACE get objectstore
  kubectl -n $NAMESPACE describe objectstore $OBJECTSTORE_NAME

- Check pod status and logs:

  kubectl -n $NAMESPACE get pods
  kubectl -n $NAMESPACE logs POD_NAME

- Verify Persistent Volume Claims exist:

  kubectl -n $NAMESPACE get pvc

- Verify the configuration secret exists and is correct:

  kubectl -n $NAMESPACE get secret storage-configuration -o yaml

- Check the license is properly configured in the operator:

  kubectl -n aistor get secret minio-license -o yaml
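If none of these checks surface the cause, recent cluster events often do; a hedged example that lists them oldest to newest:

# Look for FailedScheduling, FailedMount, or image pull errors near the end of the output
kubectl -n $NAMESPACE get events --sort-by=.lastTimestamp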
AIStor fails to start after Linux upgrade
Problem
After upgrading from MinIO Community Edition to AIStor on Linux, the AIStor server fails to start or shows errors related to configuration or metadata.
Cause
AIStor may fail to automatically upgrade configurations from very old MinIO versions. The server may be unable to read or parse older configuration formats, bucket metadata, or IAM settings.
Solution
Use the backup files created at the beginning of the upgrade procedure to manually import the configurations:
- Import the cluster configuration:

  mc admin config import ALIAS_NAME < minio-prod-config.txt

- Import bucket metadata:

  mc admin cluster bucket import ALIAS_NAME ALIAS_NAME-bucket-metadata.zip

- Import IAM metadata:

  mc admin cluster iam import ALIAS_NAME ALIAS_NAME-iam-info.zip
Replace ALIAS_NAME with your cluster’s alias and ensure the filenames match those created during the backup step.
After importing, restart the AIStor service and verify it starts successfully:
mc admin service restart ALIAS_NAME
mc admin info ALIAS_NAME
Additional resources
If you encounter issues not covered here or need additional assistance:
- Review the generated aistor-objectstore-values.yaml for TODO items that need attention.
- Check the Release Notes for known issues.
- Contact MinIO SUBNET for expert support.
- Create a ticket and upload both tenant-backup.yaml and aistor-objectstore-values.yaml for review.