Core Concepts

This page provides an overview of AIStor Server deployment architectures from a production perspective.

A production AIStor Server deployment consists of at least 4 hosts with homogeneous storage and compute resources.

Figure: A four-node distributed AIStor Server deployment. Each host in the server pool has matching compute, storage, and network configurations.

These hosts make up a server pool. The AIStor Server presents the pool's aggregated compute, memory, and storage to clients as a single resource. Each pool consists of one or more erasure sets.
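To make the relationship between raw drives, erasure sets, and client-visible capacity concrete, the following sketch computes usable capacity for a pool. The node counts, drive sizes, 16-drive set size, and EC:4 parity default are illustrative assumptions, not guaranteed AIStor behavior.

```python
# Illustrative sketch: usable capacity of a server pool whose drives
# are grouped into erasure sets. The set size and EC:4 parity are
# assumptions for illustration only.

def usable_capacity(nodes: int, drives_per_node: int, drive_tb: float,
                    set_size: int = 16, parity: int = 4) -> float:
    """Return usable TB for a pool whose drives form erasure sets of
    `set_size` drives, each set reserving `parity` drives' worth of
    space for parity shards."""
    total_drives = nodes * drives_per_node
    if total_drives % set_size != 0:
        raise ValueError("drives must divide evenly into erasure sets")
    data_fraction = (set_size - parity) / set_size
    return total_drives * drive_tb * data_fraction

# 4 nodes x 4 drives x 8 TB = 128 TB raw; with 4 parity shards per
# 16-drive set, 12/16 of the raw space holds data: 96 TB usable.
print(usable_capacity(nodes=4, drives_per_node=4, drive_tb=8.0))  # 96.0
```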

Each AIStor Server has a complete picture of the distributed topology, such that an application can connect and direct operations against any node in the deployment.

Applications typically should not manage those connections, as any changes to the deployment topology would require application updates. Production environments should instead deploy a load balancer or similar network control plane component to manage connections to the AIStor Server deployment.

The load balancer routes each client request to any node in the deployment. The receiving node then handles any internode requests needed to complete the operation.
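One common way to implement this control plane is a reverse proxy in front of the deployment. The following NGINX fragment is a hypothetical sketch: the hostnames, port, and TLS details are placeholders, not a prescribed configuration.

```nginx
# Hypothetical sketch: an NGINX reverse proxy spreading S3 API traffic
# across four AIStor Server nodes. Hostnames and ports are placeholders.
upstream aistor {
    least_conn;
    server aistor-1.example.net:9000;
    server aistor-2.example.net:9000;
    server aistor-3.example.net:9000;
    server aistor-4.example.net:9000;
}

server {
    listen 443 ssl;
    # ... TLS certificate configuration elided ...
    location / {
        proxy_pass http://aistor;
        proxy_set_header Host $http_host;
    }
}
```

Because clients only see the proxy's hostname, nodes can be added or replaced behind it without application changes.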

Client applications can use any S3-compatible SDK or library to interact with the AIStor Server deployment.

Figure: Any S3-compatible client can connect to and perform operations against any AIStor Server node.

Clients using a variety of S3-compatible SDKs can perform operations against the same AIStor Server deployment.

AIStor provides S3-focused SDKs for multiple languages for developer convenience. These libraries provide only S3 API functionality and omit code for unrelated cloud storage features, keeping them lightweight.

You can expand an AIStor Server deployment's available storage through pool expansion.

Each pool consists of an independent group of nodes with their own erasure sets. Adding new pools requires updating all nodes in the deployment with the new topology. In multi-pool clusters, the receiving AIStor Server node must determine which pool should serve a given request.
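The routing idea can be sketched in a few lines. This is a simplified illustration, not AIStor's actual placement algorithm: it shows only that the receiving node, rather than the client, resolves the target pool for a GET and can map an object name to an erasure set deterministically, so every node reaches the same answer without coordination.

```python
# Simplified sketch of multi-pool request routing; pool contents and
# the hash-based set selection are illustrative assumptions.
import hashlib

def pick_erasure_set(object_name: str, num_sets: int) -> int:
    """Deterministically map an object name to an erasure set index,
    so any receiving node computes the same set."""
    digest = hashlib.sha256(object_name.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_sets

def route_get(object_name: str, pools: list) -> int:
    """For a GET, check each pool for the object and return the index
    of the pool that holds it."""
    for i, pool_contents in enumerate(pools):
        if object_name in pool_contents:
            return i
    raise KeyError(object_name)

# Two pools, each modeled as a set of object names it stores.
pools = [{"invoices/2023.csv"}, {"logs/app.log"}]
print(route_get("logs/app.log", pools))  # 1
```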

Figure: A multi-pool AIStor Server deployment, where a GET operation is routed to the pool containing the requested object.

A PUT request likewise requires checking each pool to identify the correct erasure set. Once identified, AIStor Server partitions the object and distributes the data and parity shards across the drives in that set.
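The partition-and-distribute step can be illustrated with a toy sharding scheme. AIStor's erasure coding is Reed-Solomon based and can tolerate multiple drive failures; the single XOR parity shard below is a deliberate simplification that shows only how an object becomes data shards plus parity that can rebuild a lost shard.

```python
# Toy illustration of data + parity sharding (XOR parity, one shard of
# fault tolerance). Real deployments use Reed-Solomon erasure coding.

def shard(data: bytes, k: int) -> list:
    """Split `data` into k equal data shards (zero-padded) plus one
    XOR parity shard appended at the end."""
    size = -(-len(data) // k)  # ceiling division
    padded = data.ljust(size * k, b"\0")
    shards = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = shards[0]
    for s in shards[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return shards + [parity]

def rebuild(shards: list, lost: int) -> bytes:
    """Rebuild the shard at index `lost` by XORing all the others."""
    acc = bytes(len(shards[0]))
    for i, s in enumerate(shards):
        if i != lost:
            acc = bytes(a ^ b for a, b in zip(acc, s))
    return acc

pieces = shard(b"hello world!", k=4)     # 4 data shards + 1 parity shard
print(rebuild(pieces, lost=1) == pieces[1])  # True: lost shard recovered
```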

Pool expansion requires updating any load balancers or similar network control plane components with the new hostname layout to ensure even distribution of load across nodes.