warp analyze

Analyze saved benchmark data files to extract detailed performance statistics and identify patterns.

Synopsis

warp analyze FILE [FLAGS]

Parameters

FILE

The path to a benchmark data file saved during a previous Warp test run. Warp saves benchmark data in CSV format with optional zstd compression. The file contains operation timing, throughput, and error information from the benchmark.

This parameter is required unless reading from stdin.
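
If you want to inspect the raw data outside of Warp, a minimal Python sketch such as the one below can decompress the file and print the first few records. It assumes the third-party zstandard package is installed; the exact column layout depends on the Warp version that produced the file.

import io
import zstandard

def peek(path, limit=5):
    # Decompress the zstd stream and print the first few records.
    with open(path, "rb") as fh:
        text = io.TextIOWrapper(
            zstandard.ZstdDecompressor().stream_reader(fh), encoding="utf-8")
        for count, line in enumerate(text, start=1):
            print(line.rstrip())
            if count >= limit:
                break

peek("benchmark-data.csv.zst")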

Flags

Analysis control

--analyze.dur

Default: automatic duration selection

Split the analysis into time-based segments of the specified duration. The default behavior selects an appropriate duration automatically based on the total benchmark time. Use this flag to examine performance variations over time within a single benchmark run.

Valid duration formats include 1s, 5s, 30s, 1m, 5m.

Example:

warp analyze benchmark-data.csv.zst --analyze.dur 30s
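
Conceptually, the segmentation buckets each operation into fixed-length windows and reports statistics per window. The Python sketch below illustrates the idea with made-up (timestamp, bytes) samples rather than real Warp data; it is not how Warp computes its own segments.

from collections import defaultdict

def segment(samples, window_seconds=30):
    # samples: list of (unix_timestamp, bytes_transferred) tuples.
    buckets = defaultdict(lambda: {"ops": 0, "bytes": 0})
    for ts, size in samples:
        key = int(ts // window_seconds) * window_seconds
        buckets[key]["ops"] += 1
        buckets[key]["bytes"] += size
    for start in sorted(buckets):
        b = buckets[start]
        print(f"window starting {start}: {b['ops']} ops, "
              f"{b['bytes'] / window_seconds / 1e6:.1f} MB/s")

segment([(0, 5_000_000), (10, 5_000_000), (35, 5_000_000)])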

--analyze.op

Default: all operation types

Filter the analysis to include only the specified operation type. The default behavior includes all operation types in the analysis. Use this flag to examine performance of specific operations like GET or PUT.

Valid operation types include GET, PUT, DELETE, and STAT; the available values depend on the operations recorded in the benchmark.

Example:

warp analyze benchmark-data.csv.zst --analyze.op GET

--analyze.host

Default: all hosts

Filter the analysis to include only operations from the specified host. The default behavior includes operations from all hosts in distributed benchmarks. Use this flag to examine performance differences between individual client hosts.

Specify the host using the hostname or address that appears in the benchmark data.

Example:

warp analyze benchmark-data.csv.zst --analyze.host client-1.example.com

--analyze.skip

Default: 0s

Skip the specified duration at the beginning of the benchmark data. The default behavior includes all recorded operations in the analysis. Use this flag to exclude warmup periods or initial stabilization phases from analysis.

Valid duration formats include 30s, 1m, 5m.

Example:

warp analyze benchmark-data.csv.zst --analyze.skip 1m

--analyze.v

Default: false

Display additional analysis details in the output. The default behavior shows standard performance metrics. Use this flag to see more detailed breakdowns and diagnostic information.

Example:

warp analyze benchmark-data.csv.zst --analyze.v

Output control

--analyze.out

Default: console output only

Write aggregated analysis data to the specified output file. The default behavior displays results only to the console. Use this flag to save processed analysis data for further processing or comparison.

Specify a filename with an appropriate extension for the output format.

Example:

warp analyze benchmark-data.csv.zst --analyze.out summary.csv
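
As a rough sketch, the saved file can then be loaded for comparison against other runs. The snippet below assumes the aggregated output was written as plain CSV, as the summary.csv example above suggests; column names vary by Warp version, so inspect the file and adjust accordingly.

import csv

def load_summary(path):
    # Read the aggregated analysis file into a list of dictionaries.
    with open(path, newline="") as fh:
        return list(csv.DictReader(fh))

for row in load_summary("summary.csv"):
    print(row)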

--full

Default: false (aggregates data)

Record full detailed statistics for every individual operation. The default behavior aggregates operations to reduce output size. Use this flag when you need complete per-operation data for detailed analysis.

This flag significantly increases output size and processing time for large benchmark files.

Example:

warp analyze benchmark-data.csv.zst --full

--json

Default: false (formatted console output)

Output analysis results in JSON format. The default behavior displays formatted console output with colors. Use this flag when processing analysis results with automated tools or scripts.

Example:

warp analyze benchmark-data.csv.zst --json

Analysis output

The analyze command provides comprehensive performance statistics.

The output includes overall throughput measured in operations per second and bandwidth in MB/s. Latency percentiles show the response time distribution at the p50, p90, p99, and p99.9 levels. Error rates show the count and percentage of failed operations by error type.

For time-segmented analysis, the output includes performance metrics for each time interval. This shows how performance varies throughout the benchmark duration.

For distributed benchmarks with multiple clients, the output includes per-host statistics. This reveals performance differences between individual client machines.

When filtering by operation type, the output focuses exclusively on metrics for that operation. This isolates performance characteristics of specific workload patterns.
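
The percentile figures summarize the distribution of individual request latencies: pN is the latency below which N percent of operations completed. For reference, a minimal Python sketch of that calculation over a list of per-operation latencies (for example, extracted from a --full run) using the nearest-rank method:

import math

def percentile(latencies_ms, pct):
    # Nearest-rank percentile over a list of latencies in milliseconds.
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [12.0, 15.5, 14.1, 90.2, 13.3, 16.8, 240.0, 14.9]
for pct in (50, 90, 99, 99.9):
    print(f"p{pct}: {percentile(latencies, pct)} ms")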

Examples

Analyze performance over time

Examine how performance varies in 30-second intervals:

warp analyze benchmark-data.csv.zst --analyze.dur 30s

This segmentation reveals performance degradation or improvements during the test run.

Analyze specific operations

Focus the analysis on GET operation performance:

warp analyze benchmark-data.csv.zst --analyze.op GET

This isolates read performance characteristics from other operations.

Exclude warmup period

Skip the first minute of benchmark data to exclude warmup effects:

warp analyze benchmark-data.csv.zst --analyze.skip 1m

This provides analysis of steady-state performance after initialization completes.

Save analysis results

Export aggregated analysis data to a file:

warp analyze benchmark-data.csv.zst --analyze.out results.csv

The output file contains processed metrics suitable for further analysis or reporting.

Analyze specific client

Examine performance from a single client in a distributed benchmark:

warp analyze benchmark-data.csv.zst --analyze.host warp-client-3

This reveals whether individual clients experience different performance characteristics.

Generate JSON output

Output analysis results in machine-readable JSON format:

warp analyze benchmark-data.csv.zst --json

Use this format to integrate Warp analysis into automated monitoring or reporting systems.
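
As one way to do that, the Python sketch below runs the command and parses the result, assuming the JSON is written to standard output. It does not assume any particular field names; inspect the JSON produced by your Warp version before extracting specific metrics.

import json
import subprocess

def analyze_json(path):
    # Run warp analyze with --json and parse the result from stdout.
    result = subprocess.run(
        ["warp", "analyze", path, "--json"],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

report = analyze_json("benchmark-data.csv.zst")
# Pretty-print the start of the report; field names vary by Warp version.
print(json.dumps(report, indent=2)[:500])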