# Reference

Technical reference documentation and quick lookup resources for Warp.

## Command-line reference
The `warp` CLI reference provides complete documentation for all commands, flags, and options. Use this reference when you need detailed information about command syntax and parameters.

Quick access to frequently used commands:

- `warp put` - Upload performance tests
- `warp get` - Download performance tests
- `warp mixed` - Mixed workload tests
- `warp analyze` - Analyze test results
## Performance metrics reference

### Throughput metrics
Warp reports throughput in multiple formats depending on the test type.
| Metric | Unit | Description |
|---|---|---|
| Operations per second (ops/s) | count/s | Number of S3 operations completed per second |
| Bandwidth | MB/s | Data transfer rate for operations that move data |
| Objects per second | objects/s | Object creation or access rate |
### Latency metrics
Latency measurements show response time distribution across all operations.
| Percentile | What it measures | Why it matters |
|---|---|---|
| p50 (median) | 50% of operations complete within this time | Represents typical performance |
| p90 | 90% of operations complete within this time | Shows performance for most operations |
| p99 | 99% of operations complete within this time | Reveals tail latency affecting some operations |
| p99.9 | 99.9% of operations complete within this time | Identifies worst-case performance outliers |
### First-byte latency
First-byte latency measures the time from request initiation to receiving the first byte of response. This metric reveals storage system responsiveness before data transfer begins.
Low first-byte latency indicates efficient request processing and minimal network overhead. High first-byte latency may indicate network issues, storage system load, or processing bottlenecks.
First-byte latency appears in GET operation tests and shows time-to-first-byte (TTFB) characteristics.
## Benchmark data file format
Warp saves benchmark data in CSV format with optional zstd compression. These files contain operation-level details for subsequent analysis and comparison.
### File naming convention

Warp generates benchmark data files with timestamps by default.

Format: `warp-<benchmark>-<timestamp>.csv.zst`

Example: `warp-mixed-2025-10-09-143022.csv.zst`
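Because the timestamp embeds year, month, day, and time in descending order of significance, the filenames sort chronologically under plain lexicographic ordering. A sketch of picking the newest run with a shell glob (the filenames below are fabricated for illustration):

```shell
# Work in a scratch directory with two fabricated benchmark filenames.
mkdir -p /tmp/warp-naming-demo && cd /tmp/warp-naming-demo
touch warp-mixed-2025-10-08-090000.csv.zst \
      warp-mixed-2025-10-09-143022.csv.zst

# ls sorts lexicographically, which matches chronological order here,
# so the last match is the most recent benchmark run.
latest=$(ls warp-mixed-*.csv.zst | tail -n 1)
echo "$latest"
```

This pattern is convenient in automation that always wants to analyze the latest run.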
### Data fields
Each row in the benchmark data file represents a single operation.
| Field | Type | Description |
|---|---|---|
| Operation | string | Operation type (GET, PUT, DELETE, STAT) |
| Thread | integer | Thread identifier for the operation |
| Host | string | Host name or client identifier |
| Start Time | timestamp | Operation start time |
| Duration | duration | Time the operation took to complete |
| Size | integer | Object size in bytes |
| Error | string | Error message if operation failed |
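Because each row is one operation, simple text tools can summarize a decompressed file. A minimal sketch with a hand-written two-row sample (the header names and plain-CSV form here are illustrative; real files are zstd-compressed and use warp's own column names):

```shell
# Fabricated sample mirroring the field layout in the table above.
cat > /tmp/warp-sample.csv <<'EOF'
operation,thread,host,start_time,duration_ns,size,error
GET,1,client1,2025-10-09T14:30:22Z,1500000,1048576,
PUT,2,client1,2025-10-09T14:30:23Z,2500000,1048576,
EOF

# Average the duration column (field 5) across all operations,
# skipping the header row.
awk -F, 'NR>1 {sum+=$5; n++} END {printf "%.0f\n", sum/n}' /tmp/warp-sample.csv
```

For real analysis, prefer `warp analyze`, which understands the compressed format directly.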
### Working with benchmark files

Analyze benchmark files using the `analyze` command:

```shell
warp analyze benchmark-data.csv.zst
```

Compare two benchmark runs:

```shell
warp cmp before.csv.zst after.csv.zst
```

Merge multiple benchmark runs:

```shell
warp merge client1.csv.zst client2.csv.zst client3.csv.zst
```
## Environment variables
Warp supports environment variables for common connection parameters. Set these variables to avoid repeating connection details in every command.
| Variable | Purpose | Example |
|---|---|---|
| WARP_HOST | S3 endpoint address | myaistor.example.com:9000 |
| WARP_ACCESS_KEY | S3 access key | AKIAIOSFODNN7EXAMPLE |
| WARP_SECRET_KEY | S3 secret key | wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| WARP_REGION | S3 region | us-east-1 |
| WARP_TLS | Enable TLS | true or false |
| WARP_KTLS | Enable kernel TLS | true or false |
| WARP_INFLUXDB_CONNECT | InfluxDB connection string | http://token@host:port/bucket/org |
Set environment variables in your shell:

```shell
export WARP_HOST=myaistor.example.com:9000
export WARP_ACCESS_KEY=ACCESS_KEY
export WARP_SECRET_KEY=SECRET_KEY
export WARP_TLS=true
```

Then run tests without specifying connection parameters:

```shell
warp mixed
```
## Object size notation
Warp accepts object sizes in multiple formats. All size suffixes use binary (base-2) units.
| Notation | Bytes | Common use |
|---|---|---|
| 1KiB or 1KB | 1,024 | Small objects, metadata-heavy workloads |
| 1MiB or 1MB | 1,048,576 | Medium objects, balanced workloads |
| 1GiB or 1GB | 1,073,741,824 | Large objects, throughput testing |
| 500 | 500 bytes | Exact size specification |
Multiple sizes for randomization:

```shell
warp put --obj.size 1KiB,100KiB,1MiB,10MiB --obj.randsize --access-key YOUR_ACCESS_KEY --secret-key YOUR_SECRET_KEY
```
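The binary (base-2) multipliers behind these suffixes can be checked with ordinary shell arithmetic:

```shell
# Each step up multiplies by 1024, not 1000, since all suffixes are binary.
kib=$((1024))                 # 1KiB
mib=$((1024 * 1024))          # 1MiB
gib=$((1024 * 1024 * 1024))   # 1GiB
echo "$kib $mib $gib"
```

The printed values match the Bytes column in the table above.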
## Duration notation
Specify test duration using time suffixes.
| Notation | Duration |
|---|---|
| 30s | 30 seconds |
| 5m | 5 minutes |
| 1h | 1 hour |
| 90s | 90 seconds (1.5 minutes) |
Combine values for precise durations:

```shell
warp mixed --duration 5m30s --access-key YOUR_ACCESS_KEY --secret-key YOUR_SECRET_KEY
```
## Exit codes
Warp returns standard exit codes to indicate execution status.
| Code | Meaning | Description |
|---|---|---|
| 0 | Success | Test completed without errors |
| 1 | Error | Test encountered errors or failed to complete |
| 2 | Usage error | Invalid command-line syntax or parameters |
Check exit codes in scripts:

```shell
warp put --duration 5m --access-key YOUR_ACCESS_KEY --secret-key YOUR_SECRET_KEY
if [ $? -eq 0 ]; then
    echo "Test completed successfully"
fi
```
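To branch on all three codes, a `case` statement is the natural fit. A sketch of the pattern, with `sh -c 'exit 1'` standing in for a warp invocation so the branching is visible without running a benchmark:

```shell
# Stand-in for: warp put --duration 5m ... (here it always exits 1).
status=0
sh -c 'exit 1' || status=$?

# Map the documented exit codes to messages.
case $status in
  0) msg="Test completed successfully" ;;
  1) msg="Test encountered errors" ;;
  2) msg="Invalid command-line syntax" ;;
esac
echo "$msg"
```

The `|| status=$?` form also keeps the script alive under `set -e` when the command fails.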
## S3 compatibility
Warp works with S3-compatible storage systems. The tool uses standard S3 API operations that any compatible system should support.
### Tested storage systems
Warp has been tested with these S3-compatible storage systems:
- MinIO (all versions)
- Amazon S3
- Google Cloud Storage (S3 compatibility mode)
- Azure Blob Storage (S3 compatibility mode)
- Other systems implementing AWS S3 API
### Required S3 operations
Warp tests use these S3 API operations.
Core operations:
- PutObject / PostObject
- GetObject
- DeleteObject
- DeleteObjects (batch delete)
- HeadObject / StatObject
- ListObjectsV2
Advanced features:
- CreateMultipartUpload / UploadPart / CompleteMultipartUpload
- ListObjectVersions (for versioned tests)
- PutObjectRetention / PutObjectLegalHold (for retention tests)
### MinIO-specific features
Some tests use MinIO-specific extensions that require MinIO servers.
| Benchmark | MinIO Extension | Purpose |
|---|---|---|
| append | S3 Express One Zone append | Incremental object growth operations |
| zip | MinIO s3zip | ZIP file access without extraction |
| snowball | TAR archive extraction | Bulk object import from TAR files |
These tests will not work with other S3 implementations.
## Glossary

### Test terms
**Distributed benchmarking**: Coordinating multiple Warp client instances to generate aggregate load from different hosts. The coordinator sends test instructions to all clients and merges their results. For complete documentation, see Distributed benchmarking.

**Preparation phase**: Initial phase where tests upload objects to populate test data before measuring performance. Preparation time does not count toward performance measurements.

**Measurement phase**: Phase where tests execute target operations and collect performance data. The test duration controls how long this phase runs.
### Storage terms
**Erasure coding**: Data protection technique that splits objects into data and parity blocks distributed across drives. Provides durability with less storage overhead than replication.

**First-byte latency**: Time from request initiation to receiving the first byte of response data. Reveals network and processing overhead before data transfer begins.

**Multipart upload**: S3 upload method that splits large objects into parts uploaded independently. Improves reliability and enables parallel uploads for better performance.

**Object versioning**: S3 feature that preserves multiple variants of each object. Every modification creates a new version rather than overwriting the existing object.

**S3 Express One Zone**: AWS storage class optimized for single-digit millisecond latency. Supports unique operations like append that standard S3 does not provide.
### Performance terms
**Latency percentile**: Statistical measure showing response time at a specific percentage of operations. p99 means 99% of operations completed within the reported time.

**Operations per second**: Count of completed S3 operations per second regardless of object size. Represents request processing rate rather than data throughput.

**Tail latency**: Performance of the slowest operations, typically measured at p99 or p99.9. High tail latency indicates some operations experience significantly worse performance.

**Throughput**: Rate of data transfer measured in MB/s for operations that move data. Represents bandwidth utilization for uploads and downloads.
### Technical terms
**CRC merging**: Checksum calculation technique allowing incremental updates without recalculating entire object checksums. Required for append operations to efficiently validate data integrity.

**Trailing headers**: HTTP headers sent after the request or response body completes. Used for checksums calculated during data transfer rather than before.

**WebSocket**: Protocol providing persistent bidirectional communication between coordinator and clients. Enables real-time test coordination for distributed testing.