AIStor Tables
AIStor Tables implements native support for Apache Iceberg tables in AIStor object storage. With this feature you can create, manage, and query Iceberg tables directly through AIStor Object Store, with no dependency on external catalog services or metadata databases.
AIStor Tables is compatible with Iceberg Go, the Iceberg V3 spec, and the Iceberg REST Catalog. Applications can interact with Iceberg entities through the AIStor Iceberg API while performing S3 operations through the object storage API.
The following diagram provides a visual flow of how applications like Starburst, Dremio, Trino, and Spark can use either the Iceberg or S3 APIs to access distinct types of data stored in AIStor.

AIStor introduces support for Tables in minio release RELEASE.2026-02-02T23-40-11Z and mc release RELEASE.2026-02-03T00-12-26Z.
See the AIStor Tables developer documentation for more information about integrating with AIStor Tables from your application.
Create resources with AIStor Client
This example uses the AIStor Client to create new AIStor Tables resources.
1. Create a warehouse named `mywarehouse` in the AIStor cluster `myaistor`:

   mc table warehouse create myaistor mywarehouse

2. Create a namespace named `mynamespace` inside the warehouse:

   mc table namespace create myaistor mywarehouse mynamespace

3. Create a table named `mytable`:

   mc table create myaistor mywarehouse mynamespace mytable \
     --schema '{"type":"struct","fields":[{"id":1,"name":"id","type":"long","required":true},{"id":2,"name":"name","type":"string","required":false}]}'

   This schema defines two columns:

   | Column | Type     | Required | Description       |
   |--------|----------|----------|-------------------|
   | `id`   | `long`   | Yes      | Unique identifier |
   | `name` | `string` | No       | Product name      |

4. Verify the table was created:

   mc table list myaistor mywarehouse mynamespace
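The `--schema` value is a standard Iceberg struct schema in JSON. One way to sanity-check the JSON before passing it to `mc` is to parse it first; a minimal sketch using the same two-column schema as the example above:

```python
import json

# The same schema string passed to `mc table create` above.
schema = json.loads(
    '{"type":"struct","fields":['
    '{"id":1,"name":"id","type":"long","required":true},'
    '{"id":2,"name":"name","type":"string","required":false}]}'
)

# Every Iceberg struct field carries a unique id, a name, a type,
# and a required flag.
for field in schema["fields"]:
    print(field["id"], field["name"], field["type"], field["required"])
```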
See the AIStor Tables developer documentation for an example of inserting and querying data using Python and PyIceberg.
Connect to AIStor Tables with an Iceberg client library
Most Apache Iceberg client libraries support REST catalog endpoints. These sample configurations show how to connect to AIStor Tables with common clients and SigV4 authentication.
Replace the sample values as appropriate for your client and AIStor cluster:
- `uri`: the hostname and port for your cluster
- `warehouse`: the name of your warehouse
- `access-key-id` and `secret-access-key`: the access key and secret key for a user with permission to access AIStor Tables
Iceberg clients typically require a base path consisting of the AIStor endpoint and catalog API path (https://aistor.example.net:9000/_iceberg).
Refer to the documentation for your preferred client, library, or application for specific behaviors around endpoint construction.
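For example, joining the endpoint and catalog path with Python's standard library (a sketch; the hostname is a placeholder):

```python
from urllib.parse import urljoin

endpoint = "https://aistor.example.net:9000"  # your AIStor endpoint
catalog_path = "/_iceberg"                    # AIStor Iceberg catalog API path

# Base URI that most Iceberg REST clients expect as `uri`.
base_uri = urljoin(endpoint, catalog_path)
print(base_uri)  # https://aistor.example.net:9000/_iceberg
```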
PyIceberg
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "aistor",
    **{
        "uri": "http://localhost:9000/_iceberg",
        "warehouse": "analytics",
        "rest.sigv4-enabled": "true",
        "rest.signing-name": "s3tables",
        "rest.signing-region": "local",  # a region is required but unused
        "s3.access-key-id": "minioadmin",
        "s3.secret-access-key": "minioadmin",
        "s3.endpoint": "http://localhost:9000"
    }
)
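The same properties can also be assembled programmatically so the endpoint and credentials appear only once. The helper below is hypothetical (not part of PyIceberg); it only builds the property dict that `load_catalog` accepts, mirroring the sample configuration above:

```python
def aistor_catalog_props(endpoint: str, warehouse: str,
                         access_key: str, secret_key: str,
                         region: str = "local") -> dict:
    """Build the property dict for pyiceberg.catalog.load_catalog.

    Hypothetical convenience helper; keys mirror the sample
    configuration above.
    """
    return {
        "uri": f"{endpoint}/_iceberg",
        "warehouse": warehouse,
        "rest.sigv4-enabled": "true",
        "rest.signing-name": "s3tables",
        "rest.signing-region": region,  # required by SigV4 but unused
        "s3.access-key-id": access_key,
        "s3.secret-access-key": secret_key,
        "s3.endpoint": endpoint,
    }

props = aistor_catalog_props("http://localhost:9000", "analytics",
                             "minioadmin", "minioadmin")
```

With a cluster running, `load_catalog("aistor", **props)` would then open the catalog.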
Spark
# AIStor settings
spark.conf.set("spark.sql.catalog.aistor", "org.apache.iceberg.spark.SparkCatalog")
spark.conf.set("spark.sql.catalog.aistor.catalog-impl",
               "org.apache.iceberg.rest.RESTCatalog")
spark.conf.set("spark.sql.catalog.aistor.type", "rest")
spark.conf.set("spark.sql.catalog.aistor.uri", "http://localhost:9000/_iceberg")
spark.conf.set("spark.sql.catalog.aistor.warehouse", "analytics")
# REST catalog settings
spark.conf.set("spark.sql.catalog.aistor.rest.access-key-id", "minioadmin")
spark.conf.set("spark.sql.catalog.aistor.rest.endpoint", "http://localhost:9000")
spark.conf.set("spark.sql.catalog.aistor.rest.secret-access-key", "minioadmin")
spark.conf.set("spark.sql.catalog.aistor.rest.sigv4-enabled", "true")
spark.conf.set("spark.sql.catalog.aistor.rest.signing-name", "s3tables")
spark.conf.set("spark.sql.catalog.aistor.rest.signing-region", "local")
# Data access settings
spark.conf.set("spark.sql.catalog.aistor.s3.access-key-id", "minioadmin")
spark.conf.set("spark.sql.catalog.aistor.s3.secret-access-key", "minioadmin")
spark.conf.set("spark.sql.catalog.aistor.s3.endpoint", "http://localhost:9000")
spark.conf.set("spark.sql.catalog.aistor.s3.path-style-access", "true")
# Iceberg extensions
spark.conf.set("spark.sql.extensions",
               "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
# JAR dependencies
spark.conf.set("spark.jars.packages",
               "org.apache.iceberg:iceberg-spark-runtime-4.0_2.13:1.10.1," +
               "org.apache.iceberg:iceberg-aws-bundle:1.10.1," +
               "org.apache.hadoop:hadoop-aws:3.3.4")
# Hadoop S3A settings
spark.conf.set("spark.hadoop.fs.s3a.endpoint", "localhost:9000")
spark.conf.set("spark.hadoop.fs.s3a.access.key", "minioadmin")
spark.conf.set("spark.hadoop.fs.s3a.secret.key", "minioadmin")
spark.conf.set("spark.hadoop.fs.s3a.path.style.access", "true")
spark.conf.set("spark.hadoop.fs.s3a.impl",
               "org.apache.hadoop.fs.s3a.S3AFileSystem")
spark.conf.set("spark.hadoop.fs.s3a.connection.ssl.enabled", "false")
spark.conf.set("spark.hadoop.fs.s3a.aws.credentials.provider",
               "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
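Every catalog setting above shares the `spark.sql.catalog.aistor.` prefix. A small hypothetical helper that expands short names into fully-qualified `(key, value)` pairs, which you could then feed to `spark.conf.set`, keeps the configuration compact:

```python
def prefixed_conf(catalog: str, settings: dict) -> list:
    """Expand short setting names into fully-qualified Spark conf keys.

    Hypothetical helper; mirrors the repeated spark.conf.set calls above.
    """
    prefix = f"spark.sql.catalog.{catalog}."
    return [(prefix + key, value) for key, value in settings.items()]

pairs = prefixed_conf("aistor", {
    "uri": "http://localhost:9000/_iceberg",
    "warehouse": "analytics",
    "rest.sigv4-enabled": "true",
})
# First pair: ("spark.sql.catalog.aistor.uri", "http://localhost:9000/_iceberg")
# With a SparkSession: for key, value in pairs: spark.conf.set(key, value)
```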
Trino
Trino connects using a dynamically created Iceberg catalog:
CREATE CATALOG tutorial_catalog USING iceberg
WITH (
    "iceberg.catalog.type" = 'rest',
    "iceberg.rest-catalog.uri" = 'http://localhost:9000/_iceberg',
    "iceberg.rest-catalog.warehouse" = 'analytics',
    "iceberg.rest-catalog.security" = 'SIGV4',
    "iceberg.rest-catalog.vended-credentials-enabled" = 'true',
    "iceberg.unique-table-location" = 'true',
    "iceberg.rest-catalog.signing-name" = 's3tables',
    "iceberg.rest-catalog.view-endpoints-enabled" = 'true',
    "s3.region" = 'local',
    "s3.aws-access-key" = 'minioadmin',
    "s3.aws-secret-key" = 'minioadmin',
    "s3.endpoint" = 'http://localhost:9000',
    "s3.path-style-access" = 'true',
    "fs.hadoop.enabled" = 'false',
    "fs.native-s3.enabled" = 'true'
);