AIStor Tables API Reference
AIStor Tables implements native support for Apache Iceberg tables directly within AIStor object storage, eliminating dependencies on external catalog services or metadata databases. The AIStor Tables API provides a RESTful interface compatible with the Iceberg REST Catalog specification.
All API requests use the following base path:
http://example.net:9000/_iceberg/v1
Replace example.net with your AIStor server hostname or IP address.
Authentication
All requests require AWS Signature Version 4 (SigV4) authentication with the service name s3tables.
Requests must include standard AWS SigV4 headers:
- `Authorization` - AWS SigV4 signature
- `X-Amz-Date` - Request timestamp
- `X-Amz-Content-SHA256` - Payload hash
Example authentication flow
To authenticate a request:
- Create an HTTP request with method, URL, headers, and body.
- Calculate the payload hash (SHA256 of request body).
- Generate the SigV4 signature using your access key and secret key.
- Add the signature to the `Authorization` header.
Most AWS SDKs and libraries provide built-in SigV4 signing functionality.
For example, the following Python code uses `boto3` and `botocore`:
```python
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
import boto3

session = boto3.Session(
    aws_access_key_id='your-access-key',
    aws_secret_access_key='your-secret-key'
)

# Build the request to sign; this example creates a warehouse named "analytics".
request = AWSRequest(
    method='POST',
    url='http://localhost:9000/_iceberg/v1/warehouses',
    data='{"name":"analytics"}',
    headers={'Content-Type': 'application/json'}
)

# Sign with the service name "s3tables"; replace 'your-region' with your deployment's region.
SigV4Auth(session.get_credentials(), 's3tables', 'your-region').add_auth(request)
```
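The signed request still needs to be sent with an HTTP client. The following sketch wraps the flow above into a small helper and sends the result with the third-party `requests` library; the helper name `signed_request`, the endpoint constant, and the region value are illustrative assumptions, not part of the AIStor API. Later examples in this reference reuse this helper.

```python
import hashlib
import json

import boto3
import requests  # third-party HTTP client, assumed to be installed
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

ENDPOINT = 'http://localhost:9000/_iceberg/v1'  # adjust to your AIStor server
session = boto3.Session(
    aws_access_key_id='your-access-key',
    aws_secret_access_key='your-secret-key'
)

def signed_request(method, path, body=None):
    """Sign a Tables API request with SigV4 (service name s3tables) and send it."""
    data = json.dumps(body) if body is not None else ''
    headers = {
        'Content-Type': 'application/json',
        # Payload hash header expected by the Tables API (SHA256 of the request body).
        'X-Amz-Content-SHA256': hashlib.sha256(data.encode()).hexdigest(),
    }
    request = AWSRequest(method=method, url=ENDPOINT + path, data=data, headers=headers)
    # 'your-region' is a placeholder; use the region configured for your deployment.
    SigV4Auth(session.get_credentials(), 's3tables', 'your-region').add_auth(request)
    return requests.request(method, request.url, data=data, headers=dict(request.headers))

# Example: create a warehouse named "analytics".
response = signed_request('POST', '/warehouses', body={'name': 'analytics'})
print(response.status_code, response.json())
```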
Warehouse operations
Warehouses serve as the root container for all tables and namespaces.
Create warehouse
Create a new warehouse for storing tables and namespaces.
POST /_iceberg/v1/warehouses
Request body:
{
"name": "analytics",
"upgrade-existing": false
}
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Warehouse name (3-63 chars, lowercase/numbers/hyphens). |
| `upgrade-existing` | boolean | No | Allow upgrading existing bucket to warehouse (default: false). |
Response: 200 OK
{
"name": "analytics"
}
Action: s3tables:CreateWarehouse
List warehouses
Return all warehouses accessible to the authenticated user or service.
GET /_iceberg/v1/warehouses
Query parameters:
| Parameter | Type | Description |
|---|---|---|
| `pageToken` | string | Pagination token from previous response. |
| `pageSize` | integer | Maximum number of results to return. |
Response: 200 OK
{
"warehouses": ["analytics", "dev", "staging"],
"next-page-token": "token-for-next-page"
}
Action: s3tables:ListWarehouses
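Both `pageToken` and `pageSize` follow a continuation-token pattern: pass the `next-page-token` value from one response as `pageToken` on the next request until no token is returned. A minimal sketch of that loop, reusing the hypothetical `signed_request` helper from the Authentication section:

```python
from urllib.parse import urlencode

# Assumes the signed_request() helper sketched in the Authentication section.
def list_all_warehouses(page_size=100):
    """Collect every warehouse name by following next-page-token."""
    warehouses, token = [], None
    while True:
        params = {'pageSize': page_size}
        if token:
            params['pageToken'] = token
        resp = signed_request('GET', '/warehouses?' + urlencode(params))
        resp.raise_for_status()
        body = resp.json()
        warehouses.extend(body.get('warehouses', []))
        token = body.get('next-page-token')
        if not token:
            return warehouses

print(list_all_warehouses())
```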
Get warehouse
Retrieve metadata for a specific warehouse.
GET /_iceberg/v1/warehouses/{warehouse}
Response: 200 OK
{
"name": "analytics",
"bucket": "mybucket",
"uuid": "9c12d441-03fe-4693-9a96-a0705ddf69c1",
"created-at": "2025-10-22T10:30:00Z",
"properties": {
"owner": "data-team",
"description": "Data science tables",
"environment": "production"
}
}
Action: s3tables:GetWarehouse
Resource: arn:aws:s3tables:::bucket/{warehouse}
Delete warehouse
Remove a warehouse. The warehouse must be empty (no namespaces) before deletion.
DELETE /_iceberg/v1/warehouses/{warehouse}
Query parameters:
| Parameter | Type | Description |
|---|---|---|
| `preserve-bucket` | boolean | Keep underlying storage bucket (default: false). |
Response: 204 No Content
Action: s3tables:DeleteWarehouse
Resource: arn:aws:s3tables:::bucket/{warehouse}
Namespace operations
Namespaces organize tables within a warehouse and support custom properties.
Create namespace
Create a new namespace within a warehouse.
POST /_iceberg/v1/{warehouse}/namespaces
Request body:
{
"namespace": ["data_science"],
"properties": {
"owner": "data-team",
"description": "Data science tables",
"environment": "production"
}
}
| Field | Type | Required | Description |
|---|---|---|---|
| `namespace` | array[string] | Yes | Array with one or more namespace names (max 10). |
| `properties` | object | No | Key-value properties (each key/value max 2KB). |
Response: 200 OK
{
"namespace": ["data_science"],
"properties": {
"owner": "data-team",
"description": "Data science tables",
"environment": "production"
}
}
Action: s3tables:CreateNamespace
Resource: arn:aws:s3tables:::bucket/{warehouse}
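As a concrete call, the following sketch creates the `data_science` namespace shown above, using the hypothetical `signed_request` helper from the Authentication section:

```python
# Assumes the signed_request() helper sketched in the Authentication section.
resp = signed_request(
    'POST',
    '/analytics/namespaces',
    body={'namespace': ['data_science'], 'properties': {'owner': 'data-team'}},
)
resp.raise_for_status()
print(resp.json()['namespace'])
```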
List namespaces
Return all namespaces in a warehouse.
GET /_iceberg/v1/{warehouse}/namespaces
Query parameters:
| Parameter | Type | Description |
|---|---|---|
| `pageToken` | string | Pagination token from previous response. |
| `pageSize` | integer | Maximum number of results to return. |
| `parent` | string | Parent namespace for hierarchical listing. |
Response: 200 OK
{
"namespaces": [
["data_science"],
["engineering"],
["marketing"]
],
"next-page-token": "token-for-next-page"
}
Action: s3tables:ListNamespaces
Resource: arn:aws:s3tables:::bucket/{warehouse}
Get namespace
Retrieve namespace properties.
GET /_iceberg/v1/{warehouse}/namespaces/{namespace}
Response: 200 OK
{
"namespace": ["data_science"],
"properties": {
"owner": "data-team",
"description": "Data science tables"
}
}
Action: s3tables:GetNamespace
Resource: arn:aws:s3tables:::bucket/{warehouse}
Update namespace properties
Add, update, or remove namespace properties.
POST /_iceberg/v1/{warehouse}/namespaces/{namespace}/properties
Request body:
{
"updates": {
"owner": "new-team",
"description": "Updated description"
},
"removals": ["environment"]
}
Response: 200 OK
{
"updated": ["owner", "description"],
"removed": ["environment"],
"missing": []
}
Action: s3tables:UpdateNamespace
Delete namespace
Remove a namespace. The namespace must be empty (no tables) before deletion.
DELETE /_iceberg/v1/{warehouse}/namespaces/{namespace}
Response: 204 No Content
Action: s3tables:DeleteNamespace
Resource: arn:aws:s3tables:::bucket/{warehouse}
Table operations
Tables are Apache Iceberg tables with schema, partitioning, and transaction support.
Create table
Create a new table with specified schema and optional partitioning.
POST /_iceberg/v1/{warehouse}/namespaces/{namespace}/tables
Request body:
{
"name": "orders",
"schema": {
"type": "struct",
"fields": [
{
"id": 1,
"name": "order_id",
"type": "long",
"required": true
},
{
"id": 2,
"name": "customer_id",
"type": "long",
"required": true
},
{
"id": 3,
"name": "order_date",
"type": "date",
"required": true
},
{
"id": 4,
"name": "amount",
"type": "decimal(10,2)",
"required": true
}
]
},
"partition-spec": [
{
"name": "order_date_year",
"transform": "year",
"source-id": 3,
"field-id": 1000
}
],
"properties": {
"owner": "orders-team",
"description": "Order transactions"
}
}
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Table name (1-250 chars, lowercase/numbers/underscores). |
| `schema` | object | Yes | Iceberg schema with field definitions. |
| `partition-spec` | array | No | Partition specification (default: unpartitioned). |
| `write-order` | object | No | Sort order for data files. |
| `properties` | object | No | Table properties (max 2KB each). |
| `stage-create` | boolean | No | Create staged table for atomic commits. |
Restrictions:
- AIStor manages table locations; you cannot specify a custom location.
- Properties cannot begin with `write.data.path`.
- Most `write.metadata.*` properties are not supported.
Response: 200 OK
{
"metadata": {
"format-version": 2,
"table-uuid": "9c12d441-03fe-4693-9a96-a0705ddf69c1",
"location": "s3://analytics/data_science/orders",
"current-schema-id": 0,
"schemas": [...],
"partition-specs": [...],
"properties": {...}
},
"metadata-location": "s3://analytics/.aistor-tables/data_science/orders/metadata/v1.metadata.json",
"config": {}
}
Action: s3tables:CreateTable
Resource: arn:aws:s3tables:::bucket/{warehouse}/table/*
Conditions:
- `s3tables:namespace` - Restrict by namespace name.
- `s3tables:tableName` - Restrict by table name.
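The following sketch creates a small example table using the request shape above, via the hypothetical `signed_request` helper from the Authentication section; the table name, field IDs, and partition transform are illustrative only:

```python
# Assumes the signed_request() helper sketched in the Authentication section.
create_table_body = {
    'name': 'events',
    'schema': {
        'type': 'struct',
        'fields': [
            {'id': 1, 'name': 'event_id', 'type': 'long', 'required': True},
            {'id': 2, 'name': 'event_time', 'type': 'timestamp', 'required': True},
        ],
    },
    # Partition by day on event_time (source-id 2).
    'partition-spec': [
        {'name': 'event_time_day', 'transform': 'day', 'source-id': 2, 'field-id': 1000}
    ],
    'properties': {'owner': 'example-team'},
}

resp = signed_request('POST', '/analytics/namespaces/data_science/tables', body=create_table_body)
resp.raise_for_status()
print(resp.json()['metadata-location'])
```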
List tables
Return all tables in a namespace.
GET /_iceberg/v1/{warehouse}/namespaces/{namespace}/tables
Query parameters:
| Parameter | Type | Description |
|---|---|---|
| `pageToken` | string | Pagination token from previous response. |
| `pageSize` | integer | Maximum number of results to return. |
Response: 200 OK
{
"identifiers": [
{
"namespace": ["data_science"],
"name": "orders"
},
{
"namespace": ["data_science"],
"name": "customers"
}
],
"next-page-token": "token-for-next-page"
}
Action: s3tables:ListTables
Get table metadata
Retrieve complete table metadata including schema, partitioning, and snapshots.
GET /_iceberg/v1/{warehouse}/namespaces/{namespace}/tables/{table}
Response: 200 OK
{
"metadata": {
"format-version": 2,
"table-uuid": "9c12d441-03fe-4693-9a96-a0705ddf69c1",
"location": "s3://analytics/data_science/orders",
"last-updated-ms": 1698854400000,
"current-schema-id": 0,
"schemas": [...],
"current-snapshot-id": 3051729675574597004,
"snapshots": [...],
"partition-specs": [...],
"properties": {...}
},
"metadata-location": "s3://analytics/.aistor-tables/data_science/orders/metadata/v3.metadata.json"
}
Action: s3tables:GetTable
Resource: arn:aws:s3tables:::bucket/{warehouse}/table/*
You can also use a HEAD request to check if a table exists without returning metadata.
HEAD /_iceberg/v1/{warehouse}/namespaces/{namespace}/tables/{table}
Response:
- `200 OK` - Table exists.
- `404 Not Found` - Table does not exist.
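For example, a minimal existence check using the hypothetical `signed_request` helper from the Authentication section:

```python
# Assumes the signed_request() helper sketched in the Authentication section.
def table_exists(warehouse, namespace, table):
    """Return True when the table exists, False when the server answers 404."""
    resp = signed_request('HEAD', f'/{warehouse}/namespaces/{namespace}/tables/{table}')
    if resp.status_code == 404:
        return False
    resp.raise_for_status()
    return True

print(table_exists('analytics', 'data_science', 'orders'))
```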
Commit table changes
Atomically commit changes to a table using optimistic concurrency control.
POST /_iceberg/v1/{warehouse}/namespaces/{namespace}/tables/{table}
Request body:
{
"identifier": {
"namespace": ["data_science"],
"name": "orders"
},
"requirements": [
{
"type": "assert-table-uuid",
"uuid": "9c12d441-03fe-4693-9a96-a0705ddf69c1"
},
{
"type": "assert-last-assigned-field-id",
"last-assigned-field-id": 4
}
],
"updates": [
{
"action": "append",
"manifest-list": "s3://analytics/data_science/orders/metadata/snap-3051729675574597004.avro"
}
]
}
Commit requirements:
Requirements validate preconditions before applying updates. Common types include:
| Type | Description |
|---|---|
| `assert-table-uuid` | Verify table UUID matches expected value. |
| `assert-ref-snapshot-id` | Verify branch/tag points to expected snapshot. |
| `assert-last-assigned-field-id` | Verify schema field ID counter. |
| `assert-current-schema-id` | Verify active schema version. |
| `assert-last-assigned-partition-id` | Verify partition spec ID counter. |
Commit updates:
Updates modify table state atomically. Common actions include:
| Action | Description |
|---|---|
| `append` | Add new data files via manifest list. |
| `set-properties` | Update table properties. |
| `remove-properties` | Delete table properties. |
| `upgrade-format-version` | Upgrade to newer Iceberg format. |
| `add-schema` | Register new schema version. |
| `set-current-schema` | Change active schema. |
| `add-snapshot` | Add new snapshot to table. |
| `set-snapshot-ref` | Update branch or tag reference. |
Response: 200 OK
{
"metadata": {...},
"metadata-location": "s3://analytics/.aistor-tables/data_science/orders/metadata/v4.metadata.json"
}
Action: s3tables:UpdateTable
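As a concrete example, the following sketch commits a property-only change guarded by an `assert-table-uuid` requirement, using the hypothetical `signed_request` helper from the Authentication section; the UUID is the placeholder value used throughout this reference:

```python
# Assumes the signed_request() helper sketched in the Authentication section.
commit_body = {
    'identifier': {'namespace': ['data_science'], 'name': 'orders'},
    'requirements': [
        # Reject the commit if the table was dropped and recreated since it was read.
        {'type': 'assert-table-uuid', 'uuid': '9c12d441-03fe-4693-9a96-a0705ddf69c1'}
    ],
    'updates': [
        {'action': 'set-properties', 'updates': {'owner': 'orders-team'}}
    ],
}

resp = signed_request('POST', '/analytics/namespaces/data_science/tables/orders', body=commit_body)
resp.raise_for_status()
print(resp.json()['metadata-location'])
```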
Rename table
Move a table to a different namespace or change its name.
POST /_iceberg/v1/{warehouse}/tables/rename
Request body:
{
"source": {
"namespace": ["data_science"],
"name": "orders"
},
"destination": {
"namespace": ["analytics"],
"name": "order_history"
}
}
Response: 204 No Content
Action: s3tables:RenameTable
Delete table
Remove a table from the catalog.
DELETE /_iceberg/v1/{warehouse}/namespaces/{namespace}/tables/{table}
Query parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `purgeRequested` | boolean | true | Delete metadata and data files. |
Response: 204 No Content
Purge behavior:
- `purgeRequested=true` (default) - Remove both catalog metadata and table data files.
- `purgeRequested=false` - Remove only the catalog entry, preserving data files.
Action: s3tables:DeleteTable
Resource: arn:aws:s3tables:::bucket/{warehouse}/table/*
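For example, to drop only the catalog entry while keeping the data files, pass `purgeRequested=false`; a sketch using the hypothetical `signed_request` helper from the Authentication section:

```python
# Assumes the signed_request() helper sketched in the Authentication section.
resp = signed_request(
    'DELETE',
    '/analytics/namespaces/data_science/tables/orders?purgeRequested=false',
)
assert resp.status_code == 204  # catalog entry removed, data files preserved
```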
Advanced operations
Multi-table transactions
Commit changes to multiple tables atomically.
POST /_iceberg/v1/{warehouse}/transactions/commit
Request body:
{
"table-changes": [
{
"identifier": {
"namespace": ["data_science"],
"name": "orders"
},
"requirements": [...],
"updates": [...]
},
{
"identifier": {
"namespace": ["data_science"],
"name": "customers"
},
"requirements": [...],
"updates": [...]
}
]
}
Response: 200 OK
The transaction succeeds only if all table commits succeed. If any table commit fails, the entire transaction is rolled back.
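A sketch of such a transaction, applying the same property change to two tables through the hypothetical `signed_request` helper from the Authentication section:

```python
# Assumes the signed_request() helper sketched in the Authentication section.
transaction_body = {
    'table-changes': [
        {
            'identifier': {'namespace': ['data_science'], 'name': 'orders'},
            'requirements': [],
            'updates': [{'action': 'set-properties', 'updates': {'stage': 'published'}}],
        },
        {
            'identifier': {'namespace': ['data_science'], 'name': 'customers'},
            'requirements': [],
            'updates': [{'action': 'set-properties', 'updates': {'stage': 'published'}}],
        },
    ]
}

resp = signed_request('POST', '/analytics/transactions/commit', body=transaction_body)
resp.raise_for_status()  # either both tables were updated, or neither was
```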
Get catalog configuration
Retrieve catalog-level configuration and capabilities.
GET /_iceberg/v1/{warehouse}/config?warehouse={warehouse}
Response: 200 OK
{
"defaults": {
"s3.endpoint": "http://localhost:9000"
},
"overrides": {}
}
Additional information
Error responses
All errors return JSON responses with the following structure:
{
"error": {
"code": 409,
"type": "IcebergTableAlreadyExists",
"message": "The specified table already exists."
}
}
| Field | Type | Description |
|---|---|---|
| `code` | integer | HTTP status code. |
| `type` | string | Error type identifier for programmatic handling. |
| `message` | string | Human-readable error description. |
Common error types:
| HTTP Status | Error Type | Description |
|---|---|---|
| 400 | `BadRequest` | Invalid request format or parameters. |
| 404 | `IcebergTableNotFound` | Specified table does not exist. |
| 404 | `IcebergNamespaceNotFound` | Specified namespace does not exist. |
| 404 | `IcebergWarehouseNotFound` | Specified warehouse does not exist. |
| 409 | `IcebergTableAlreadyExists` | Table with this name already exists. |
| 409 | `IcebergNamespaceAlreadyExists` | Namespace with this name already exists. |
| 409 | `IcebergWarehouseAlreadyExists` | Warehouse with this name already exists. |
| 409 | `CommitFailedException` | Table commit failed due to conflict or lock. |
| 409 | `IcebergNamespaceNotEmptyError` | Cannot delete namespace containing tables. |
| 409 | `IcebergWarehouseNotEmpty` | Cannot delete warehouse containing namespaces. |
| 500 | `InternalError` | Internal server error occurred. |
| 501 | `IcebergPurgeNotSupported` | Purge operation is not supported. |
| 503 | `TableRecoveryInProgress` | Table is recovering from failed transaction. |
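Clients should branch on the `type` field rather than parsing the message text. A minimal sketch, assuming `resp` is a `requests` response returned by any of the endpoints above; the exception mapping is illustrative:

```python
def raise_for_tables_error(resp):
    """Translate an AIStor Tables error payload into a Python exception (illustrative)."""
    if resp.status_code < 400:
        return
    error = resp.json().get('error', {})
    err_type = error.get('type')
    message = error.get('message')
    if err_type == 'CommitFailedException':
        # Conflicting commit: callers can retry with backoff (see Rate limits below).
        raise RuntimeError(f'retryable commit conflict: {message}')
    if err_type in ('IcebergTableNotFound', 'IcebergNamespaceNotFound',
                    'IcebergWarehouseNotFound'):
        raise LookupError(f'{err_type}: {message}')
    raise RuntimeError(f"{err_type} ({error.get('code')}): {message}")
```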
Naming constraints
Entity names must follow these rules:
| Entity | Length | Allowed Characters | Notes |
|---|---|---|---|
| Warehouse | 3-63 chars | Lowercase letters, numbers, hyphens | Cannot contain periods. |
| Namespace | 1-250 chars | Lowercase letters, numbers, underscores | May be multi-level (max 10 levels). |
| Table | 1-250 chars | Lowercase letters, numbers, underscores | |
Additional constraints:
- Multi-level namespaces: Maximum 10 nested namespaces.
- Custom table locations: Not allowed (AIStor manages locations).
- Property size: Each property key and value is limited to 2KB.
Rate limits
AIStor Tables does not impose hard rate limits but implements best-effort concurrency control:
- Concurrent commits to the same table use optimistic locking.
- Failed commits due to conflicts should be retried with exponential backoff (see the sketch below).
- Maximum transaction timeout is configurable per deployment.
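A minimal sketch of that retry pattern; the `CommitConflict` exception and the `commit_fn` callable are illustrative assumptions, not part of the API:

```python
import random
import time

class CommitConflict(Exception):
    """Illustrative exception raised by the caller on a 409 CommitFailedException."""

def commit_with_retries(commit_fn, max_attempts=5, base_delay=0.5):
    """Retry a commit callable on conflict, using exponential backoff with jitter."""
    for attempt in range(max_attempts):
        try:
            return commit_fn()
        except CommitConflict:
            if attempt == max_attempts - 1:
                raise
            # Sleep 0.5s, 1s, 2s, ... plus a little jitter before the next attempt.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```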