warp cmp
Compare two benchmark data files to identify performance differences and regressions.
Synopsis
warp cmp BEFORE_FILE AFTER_FILE [FLAGS]
Parameters
BEFORE_FILE
The path to the baseline benchmark data file. This file represents the reference performance that you compare against. Warp saves benchmark data in CSV format with optional zstd compression.
This parameter is required.
AFTER_FILE
The path to the updated benchmark data file. This file represents the current performance to evaluate. The file must use the same format as the before file.
This parameter is required.
Flags
Analysis control
--analyze.dur
Default: automatic duration selection
Split the comparison into time-based segments of the specified duration. The default behavior selects an appropriate duration automatically based on the total benchmark time. Use this flag to compare performance variations over time within the benchmark runs.
Valid duration formats include 1s, 5s, 30s, 1m, 5m.
Example:
warp cmp before.csv.zst after.csv.zst --analyze.dur 30s
--analyze.op
Default: all operation types
Filter the comparison to include only the specified operation type. The default behavior compares all operation types in the benchmark data. Use this flag to focus on specific operations like GET or PUT.
Valid operation types include GET, PUT, DELETE, and STAT, depending on the operations recorded in the benchmark data.
Example:
warp cmp before.csv.zst after.csv.zst --analyze.op GET
--analyze.host
Default: all hosts
Filter the comparison to include only operations from the specified host. The default behavior includes operations from all hosts in distributed benchmarks. Use this flag to compare performance of individual client hosts.
Specify the host using the hostname or address that appears in the benchmark data.
Example:
warp cmp before.csv.zst after.csv.zst --analyze.host client-1.example.com
--analyze.skip
Default: 0s
Skip the specified duration at the beginning of both benchmark data files. The default behavior includes all recorded operations in the comparison. Use this flag to exclude warmup periods from both benchmarks.
Valid duration formats include 30s, 1m, 5m.
Example:
warp cmp before.csv.zst after.csv.zst --analyze.skip 1m
--analyze.v
Default: false
Display additional comparison details in the output. The default behavior shows standard performance differences. Use this flag to see detailed breakdowns and per-host comparisons.
Example:
warp cmp before.csv.zst after.csv.zst --analyze.v
Output control
--analyze.out
Default: console output only
Write aggregated comparison data to the specified output file. The default behavior displays results only to the console. Use this flag to save processed comparison data for further analysis.
Specify a filename with an appropriate extension for the output format.
Example:
warp cmp before.csv.zst after.csv.zst --analyze.out comparison.csv
--full
Default: false (aggregates data)
Record full detailed statistics for every individual operation in the comparison. The default behavior aggregates operations to reduce output size. Use this flag when you need complete per-operation data for detailed analysis.
This flag significantly increases processing time for large benchmark files.
Example:
warp cmp before.csv.zst after.csv.zst --full
--serve
Default: not enabled
Open a webserver on the specified address to serve comparison results remotely. The default behavior displays results only in the console. Use this flag to make comparison results accessible via HTTP.
Specify the listen address in host:port format, for example localhost:7762.
Example:
warp cmp before.csv.zst after.csv.zst --serve localhost:7762
Display control
--no-color
Default: false
Disable color output in the terminal. The default behavior displays colored output for better readability. Use this flag when redirecting output to files or when terminal colors cause display issues.
Example:
warp cmp before.csv.zst after.csv.zst --no-color
--debug
Default: false
Enable detailed debug output for troubleshooting. The default behavior shows only comparison results. Use this flag to see internal processing and diagnostic information.
Example:
warp cmp before.csv.zst after.csv.zst --debug
TLS options
--insecure
Default: false
Disable TLS certificate verification when serving results remotely. The default behavior validates TLS certificates for secure connections. Use this flag only when necessary for testing with self-signed certificates.
Example:
warp cmp before.csv.zst after.csv.zst --serve :7762 --insecure
Shell integration
--autocompletion
Default: false
Install shell autocompletion for Warp commands. The default behavior does not modify shell configuration. Use this flag once to enable tab completion in your shell environment.
Example:
warp cmp --autocompletion
Comparison methodology
The cmp command performs statistical comparison between two benchmark runs. This comparison reveals performance changes, regressions, or improvements.
The comparison process loads both benchmark data files. The command verifies that both files contain compatible data types. Mixed aggregated and non-aggregated files cannot be compared together.
For each operation type present in both files, the command calculates performance differences. Throughput changes are reported in operations per second and in MB/s of bandwidth. Latency changes show differences across percentile levels, including p50, p90, p99, and p99.9. Error rate differences show changes in failed operation counts and percentages.
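To produce two directly comparable data files, run the same workload before and after the change under test and save the benchmark data from each run. The following sketch assumes a GET benchmark against a placeholder endpoint, omits credentials and other benchmark flags, and assumes your Warp version supports the --benchdata flag for naming the saved data file:
warp get --host minio.example.com:9000 --benchdata before
warp get --host minio.example.com:9000 --benchdata after
warp cmp before.csv.zst after.csv.zst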
Interpreting results
The comparison output shows changes from the before file to the after file. Positive percentages indicate improvements in the after file, and negative percentages indicate regressions.
For throughput metrics, an improvement means higher throughput; for latency metrics, an improvement means lower latency. The comparison presents each change so that the sign reflects the correct interpretation for the metric.
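For example, if the before run averaged 1,000 operations per second and the after run averaged 1,100, throughput improved by roughly 10%; if average request latency fell from 50 ms to 45 ms, latency also improved by roughly 10%, because lower latency is better.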
The output highlights configuration differences between the benchmark runs. This includes changes in concurrency levels, duration, object sizes, and endpoint counts. Configuration differences help explain performance variations.
When comparing distributed benchmarks, per-host comparisons reveal which clients improved or regressed. This information helps identify network or system-specific performance issues.
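To inspect these per-host results for a single client, combine the verbose flag with a host filter; the filenames and hostname below are placeholders:
warp cmp before.csv.zst after.csv.zst --analyze.v --analyze.host client-1.example.com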
Comparison output
The comparison produces a detailed performance analysis.
The output displays operation types separately for mixed workload benchmarks. For each operation type, the comparison shows total operation counts in both runs. Concurrency differences appear when the benchmarks used different concurrent operation counts. Endpoint differences appear when the benchmarks used different client or server configurations.
Throughput comparison displays average throughput changes and percentage differences. Request latency comparison shows changes in average, fastest, median, and slowest request times. Time to first byte (TTFB) comparison appears for operations that measure this metric.
When errors occur in either benchmark, error counts appear highlighted in red. The comparison shows how error counts changed from the before file to the after file.
For time-segmented analysis, the comparison includes performance changes for each time interval. This reveals whether performance improvements or regressions occurred consistently or only during specific periods.
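For example, to segment the comparison into 30-second intervals while also excluding a one-minute warmup from both runs:
warp cmp before.csv.zst after.csv.zst --analyze.skip 1m --analyze.dur 30s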
Examples
Basic comparison
Compare two benchmark runs to identify performance changes:
warp cmp baseline.csv.zst current.csv.zst
This displays overall performance differences between the baseline and current benchmarks.
Compare specific operation
Focus the comparison on GET operation performance:
warp cmp before.csv.zst after.csv.zst --analyze.op GET
This isolates read performance changes from other operations.
Compare with time segments
Compare performance in 30-second intervals:
warp cmp before.csv.zst after.csv.zst --analyze.dur 30s
This reveals whether performance changes occurred consistently or varied over time.
Skip warmup period
Exclude the first minute from both benchmarks:
warp cmp before.csv.zst after.csv.zst --analyze.skip 1m
This ensures the comparison focuses on steady-state performance.
Detailed comparison
Display additional comparison details:
warp cmp before.csv.zst after.csv.zst --analyze.v
This shows per-host breakdowns and additional statistical information.
Compare specific client
Compare performance from a single client in distributed benchmarks:
warp cmp before.csv.zst after.csv.zst --analyze.host warp-client-3
This reveals whether specific clients experienced performance changes.
Save comparison results
Export comparison data to a file:
warp cmp baseline.csv.zst updated.csv.zst --analyze.out comparison-report.csv
The output file contains processed comparison data for further analysis or reporting.
Serve comparison remotely
Make comparison results available via HTTP:
warp cmp before.csv.zst after.csv.zst --serve localhost:7762
Access the results at http://localhost:7762 from a web browser.
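You can also retrieve the served page from the command line, for example with curl:
curl http://localhost:7762/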