warp merge

Merge multiple benchmark data files into a single combined dataset for aggregate analysis.

Synopsis

warp merge [FLAGS] FILE1 FILE2 [FILE...]

Parameters

FILES

The paths to two or more benchmark data files to merge. Warp saves benchmark data in CSV format with optional zstd compression. All input files must use compatible formats and operation types.

At least two files are required; there is no upper limit on how many files you can merge in a single operation.

Flags

Output control

--benchdata

Default: auto-generated filename

Specify the output filename for the merged benchmark data. By default, Warp generates a unique filename that includes a timestamp. Use this flag to control the output filename and location.

The merged file is saved in compressed CSV format with .csv.zst extension.

Example:

warp merge run1.csv.zst run2.csv.zst --benchdata combined.csv.zst

Display control

--no-color

Default: false

Disable color output in the terminal. The default behavior displays colored output for better readability. Use this flag when redirecting output to files or when terminal colors cause display issues.

Example:

warp merge run1.csv.zst run2.csv.zst --no-color

--debug

Default: false

Enable detailed debug output for troubleshooting. The default behavior shows only standard merge progress messages. Use this flag to see internal processing and diagnostic information.

Example:

warp merge run1.csv.zst run2.csv.zst --debug

TLS options

--insecure

Default: false

Disable TLS certificate verification for remote operations. The default behavior validates TLS certificates for secure connections. Use this flag only when necessary for testing environments.

Example:

warp merge run1.csv.zst run2.csv.zst --insecure

Shell integration

--autocompletion

Default: false

Install shell autocompletion for Warp commands. The default behavior does not modify shell configuration. Use this flag once to enable tab completion in your shell environment.

Example:

warp merge --autocompletion

Merge behavior

The merge command combines benchmark operations from multiple files into a single dataset. This process preserves all operation details including timestamps, latencies, and host information.

The merge process loads all input files sequentially. Each operation from each file is added to the combined dataset. Thread identifiers are adjusted to ensure unique thread numbering across all merged operations. This prevents thread ID conflicts when merging results from different benchmark runs.

The merged operations are sorted by start time to maintain chronological order. This ordering ensures that analysis tools process operations in the sequence they actually occurred.
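The thread renumbering and chronological sorting described above can be sketched as follows. This is a minimal illustration, not warp's actual implementation: the field names (`thread`, `start`, `op`) and the plain-CSV input are simplifications chosen for the example.

```python
import csv
import io

def merge_benchmark_csv(csv_texts):
    """Merge several benchmark CSVs into one sorted list of rows.

    Thread IDs from each file are offset so they remain unique across
    the merged dataset, mirroring the renumbering warp performs to
    avoid thread ID conflicts between runs.
    """
    merged = []
    thread_offset = 0
    for text in csv_texts:
        rows = list(csv.DictReader(io.StringIO(text)))
        # Track the highest thread ID seen so the next file starts above it.
        max_thread = thread_offset - 1
        for row in rows:
            tid = int(row["thread"]) + thread_offset
            row["thread"] = str(tid)
            max_thread = max(max_thread, tid)
            merged.append(row)
        thread_offset = max_thread + 1
    # Sort all operations by start time to restore chronological order.
    merged.sort(key=lambda r: float(r["start"]))
    return merged

# Two tiny runs; thread 0 in the second file becomes thread 2 after merging.
run1 = "thread,start,op\n0,0.0,PUT\n1,1.0,GET\n"
run2 = "thread,start,op\n0,0.5,PUT\n"
rows = merge_benchmark_csv([run1, run2])
```

After merging, the second run's operations interleave with the first run's by start time while keeping a distinct thread number.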

The output file contains all operations from all input files. Host information is preserved for distributed benchmarks. Error information and timing data remain intact for each operation.

Merge validation

After merging, the command validates that each operation type has overlapping time ranges. This validation ensures the merged data represents coherent benchmark runs. Operations that do not overlap in time may indicate configuration issues or interrupted benchmarks.

The command displays validation results for each operation type. Warning messages appear if operation types have no overlapping time ranges.
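The overlap check can be illustrated with a short sketch. This is an assumption about the shape of the validation, not warp's actual code: a set of time ranges for one operation type shares a common window exactly when the latest start precedes the earliest end.

```python
def check_overlap(time_ranges):
    """Return True if all (start, end) ranges share a common window.

    Models the per-operation-type validation: merged runs are coherent
    when every run was executing during at least some shared interval.
    """
    latest_start = max(start for start, _ in time_ranges)
    earliest_end = min(end for _, end in time_ranges)
    return latest_start < earliest_end

# Hypothetical ranges (seconds) for one operation type from merged runs.
overlapping = [(0.0, 60.0), (5.0, 65.0), (10.0, 70.0)]
disjoint = [(0.0, 60.0), (120.0, 180.0)]
```

A result of False for some operation type corresponds to the warning described above and usually points to benchmarks that ran at different times rather than concurrently.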

Use cases

Merge benchmark files to combine results from multiple test runs. This creates a larger dataset that provides more statistical confidence in the results.

Merge results from distributed clients when client machines save local benchmark data. This consolidates distributed benchmark results into a single file for analysis.

Merge multiple iterations of the same benchmark to aggregate performance data. This approach helps identify consistency or variability across test runs.

Merge benchmark segments collected over time to build performance trend datasets. This enables long-term performance tracking and analysis.

Merged output

The merged file contains the combined operation records from all input files.

The output preserves all operation metadata, including operation type, object size, and timing information. Host identifiers remain intact, showing which machine or client generated each operation. Error information is maintained for any failed operations in the original files.

The merged file uses compressed CSV format with .csv.zst extension. This format is compatible with all Warp analysis commands. You can analyze merged files using warp analyze or compare them using warp cmp.

Examples

Merge multiple runs

Combine three benchmark runs into a single dataset:

warp merge run1.csv.zst run2.csv.zst run3.csv.zst --benchdata aggregated.csv.zst

The merged file contains all operations from all three runs.

Merge distributed results

Combine results from multiple client machines:

warp merge client1-results.csv.zst client2-results.csv.zst client3-results.csv.zst \
  --benchdata distributed-combined.csv.zst

This consolidates distributed benchmark data into a single file for analysis.

Merge with pattern matching

Combine all benchmark files in a directory:

warp merge benchmark-*.csv.zst --benchdata all-benchmarks.csv.zst

The shell expands the pattern to match all files with the specified prefix.

Auto-generated filename

Merge files without specifying output filename:

warp merge run1.csv.zst run2.csv.zst

Warp automatically generates a unique filename that includes a timestamp.

Merge and analyze

Merge multiple files then immediately analyze the combined results:

warp merge run1.csv.zst run2.csv.zst run3.csv.zst --benchdata merged.csv.zst
warp analyze merged.csv.zst

This workflow combines data from multiple runs and generates aggregate statistics.

Merge time series data

Combine daily benchmark runs to build a performance history:

warp merge day1.csv.zst day2.csv.zst day3.csv.zst day4.csv.zst day5.csv.zst \
  --benchdata weekly-performance.csv.zst

The merged file contains a week of performance data for trend analysis.

Merge and compare

Merge multiple baseline runs and compare against a test run:

warp merge baseline1.csv.zst baseline2.csv.zst baseline3.csv.zst \
  --benchdata baseline-combined.csv.zst
warp cmp baseline-combined.csv.zst test-run.csv.zst

This approach compares a single test run against an aggregate baseline.