AIStor Table Sharing
AIStor Table Sharing is a native implementation of the Databricks Delta Sharing protocol, allowing organizations to securely connect exabytes of on-premises and private-cloud data with Databricks and compatible client applications.
AIStor Table Sharing provides read-only access to stored data. Use standard S3 APIs or compatible libraries to create and manage the Delta tables within AIStor. Client applications that support the Delta Sharing protocol, such as Spark, Trino/Presto, and Power BI, can connect to AIStor table shares and query the stored data using existing libraries and code.
AIStor Table Sharing is complementary to AIStor Tables. AIStor Tables support creation of Iceberg-formatted data warehouses using Iceberg catalog APIs and semantics. AIStor Table Sharing implements the Delta Universal Format, allowing data administrators to share and query both Delta and Iceberg-formatted data stored in AIStor buckets. See Mixed Table Sharing for more complete guidance on sharing Iceberg and Delta-formatted tables.
Update to at least AIStor RELEASE.2026-02-02T23-40-11Z and MinIO Client RELEASE.2026-02-19T10-32-25Z to access Delta Sharing features.
Quickstart
The following procedure creates a simple Delta table and associated share within AIStor. At the conclusion of the procedure, you can use the example code to execute Delta Table queries against the stored data using the Delta Sharing protocol.
This procedure assumes a local AIStor deployment accessible via localhost:9000 from a host machine with Python installed.
Your Python installation must include the following libraries:
pip install minio deltalake delta-sharing pandas pyarrow
1. Create the bucket and delta table
The following Python code creates a new bucket analytics and creates a Delta table employees in that bucket:
#!/usr/bin/env python3
"""Create a bucket and Delta Lake table in AIStor."""
import pandas as pd
from deltalake import write_deltalake
from minio import Minio
# Connection settings
ENDPOINT = "localhost:9000"
ACCESS_KEY = "minioadmin"
SECRET_KEY = "minioadmin"
# Create bucket
client = Minio(ENDPOINT, access_key=ACCESS_KEY, secret_key=SECRET_KEY, secure=False)
if not client.bucket_exists("analytics"):
    client.make_bucket("analytics")
# Sample data
data = {
    "id": [1, 2, 3, 4, 5],
    "name": ["Alice", "Bob", "Carol", "David", "Eve"],
    "department": ["Engineering", "Sales", "Engineering", "Marketing", "Sales"],
    "salary": [95000, 65000, 110000, 72000, 68000],
}
# Write Delta table
write_deltalake(
    "s3://analytics/employees/",
    pd.DataFrame(data),
    mode="overwrite",
    storage_options={
        "AWS_ACCESS_KEY_ID": ACCESS_KEY,
        "AWS_SECRET_ACCESS_KEY": SECRET_KEY,
        "AWS_ENDPOINT_URL": f"http://{ENDPOINT}",
        "AWS_ALLOW_HTTP": "true",
    },
)
print("Delta table created at s3://analytics/employees/")
2. Create the share
Use the mc table share create command to create a new table share.
The following example creates a new share analytics-share with the schema HR and table name employees on the myaistor cluster:
mc table share create myaistor/analytics-share HR \
"employees:delta:analytics:employees/" \
--description "Analytics for current employees"
The share string uses the following format when describing a Delta Table share:
TABLE_NAME:delta:BUCKET:PATH/TO/TABLE
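As an illustration of this format, the share entry from the example above can be assembled programmatically. The helper below is purely hypothetical and not part of mc; `mc table share create` accepts the string directly:

```python
def delta_share_entry(table_name: str, bucket: str, path: str) -> str:
    """Build a Delta table share string in TABLE_NAME:delta:BUCKET:PATH/TO/TABLE form.

    Illustrative helper only; it simply joins the components with colons.
    """
    return f"{table_name}:delta:{bucket}:{path}"

# Reproduces the entry used in the example above
print(delta_share_entry("employees", "analytics", "employees/"))
# employees:delta:analytics:employees/
```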
3. Create a share token
Delta Sharing uses tokens for authenticating to the share source.
Use the mc table share token create command to create a token against analytics-share.
The example command pipes the output through jq to extract the profile and saves it as profile.share:
mc table share token create myaistor/analytics-share \
--expires 90d \
--description "Access to analytics share for HR-associated tables" \
--json | jq -r '.profile' > profile.share
The profile.share file provides Databricks and compatible clients with the token needed to access the configured share.
For Databricks deployments, use this file when configuring access in the open sharing model.
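The saved profile follows the Delta Sharing profile file format defined by the open protocol. The exact endpoint path and token are deployment-specific; the placeholder values below only sketch the general shape of the file:

```json
{
  "shareCredentialsVersion": 1,
  "endpoint": "https://<aistor-endpoint>/<delta-sharing-path>",
  "bearerToken": "<token>",
  "expirationTime": "2026-05-20T00:00:00Z"
}
```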
4. Consume the data using Delta Sharing
The following Python script uses the Delta Sharing protocol to access the shared table and run simple queries:
#!/usr/bin/env python3
"""Read a Delta table via Delta Sharing."""
import delta_sharing
# Specify the path to the `profile.share` created in the previous step
PROFILE = "profile.share"
# Specify the token share name, schema, and table being read
TABLE = f"{PROFILE}#analytics-share.HR.employees"
# List available shares
client = delta_sharing.SharingClient(PROFILE)
print("Available shares:")
for share in client.list_shares():
    print(f" - {share.name}")
# List tables
print("\nAvailable tables:")
for table in client.list_all_tables():
    print(f" - {table.share}.{table.schema}.{table.name}")
# Load as DataFrame
print("\nEmployee data:")
df = delta_sharing.load_as_pandas(TABLE)
print(df)
# Query with filter (client-side)
print("\nEngineering department:")
eng_df = df[df["department"] == "Engineering"]
print(eng_df)
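Because the filter in this example runs client-side, any pandas operation works on the loaded DataFrame. As a sketch using the sample data from step 1 (standing in for the shared table, so it runs without a live share), a simple aggregation might look like:

```python
import pandas as pd

# Same sample data written in step 1; stands in for delta_sharing.load_as_pandas(TABLE)
df = pd.DataFrame({
    "id": [1, 2, 3, 4, 5],
    "name": ["Alice", "Bob", "Carol", "David", "Eve"],
    "department": ["Engineering", "Sales", "Engineering", "Marketing", "Sales"],
    "salary": [95000, 65000, 110000, 72000, 68000],
})

# Average salary per department, computed client-side with pandas
avg_salary = df.groupby("department")["salary"].mean()
print(avg_salary)
# Engineering: (95000 + 110000) / 2 = 102500.0
# Marketing:   72000.0
# Sales:       (65000 + 68000) / 2 = 66500.0
```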
Interaction with Lifecycle Management
AIStor ILM expiration and transition rules operate independently of any configured table share. Because ILM rules do not recognize Delta or Iceberg table structures, their execution can move or delete files that the tables rely on for read and write operations. Exclude table shares from lifecycle rules to avoid inadvertent corruption of table data and associated metadata.
Unsupported Features
AIStor does not support the following features on buckets or data paths configured for sharing:
- Replication, including site, bucket, and batch
- SSE-C (Server-Side Encryption with Client-Managed Keys)