Understanding results

Warp reports several key metrics that indicate storage system performance. Use these metrics to help you evaluate whether your system meets requirements.

Throughput

Throughput measures how much data the storage system transfers per second. Warp reports throughput in MiB/s (mebibytes per second) and operations per second.

Throughput: 450.23 MiB/s, 90.05 obj/s

Higher throughput indicates better performance. Compare throughput against your application requirements or vendor specifications.

Small objects can show much lower MiB/s throughput than expected because per-operation overhead dominates transfer time. In those situations, judge performance by objects per second rather than MiB/s.
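The two throughput figures are related by average object size: MiB/s equals objects per second multiplied by object size in MiB. A minimal sketch of that relationship (the 5 MiB object size is a hypothetical workload parameter, not a Warp default):

```python
def mib_per_s(objects_per_s: float, object_size_mib: float) -> float:
    """Data throughput implied by an operation rate and an object size."""
    return objects_per_s * object_size_mib

# With 5 MiB objects, 90.05 obj/s corresponds to roughly 450 MiB/s,
# matching the sample output above.
print(mib_per_s(90.05, 5))

# With tiny 10 KiB (~0.01 MiB) objects, the same operation rate
# yields under 1 MiB/s, even though the system is working just as hard.
print(mib_per_s(90.05, 0.01))
```

This is why objects/second is the fairer metric for small-object workloads: the operation rate stays comparable while the MiB/s figure collapses with object size.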

Latency

Latency measures how long individual operations take to complete. Lower latency means faster response times for applications. Warp reports latency in milliseconds (ms).

For exact definitions of the latency metrics and percentile nomenclature, see Latency metrics and percentiles.

Average latency: 18.2 ms

Single-digit millisecond latency indicates excellent performance for small objects. Sub-second latency works well for most applications. Multi-second latency may indicate performance problems or system overload.
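The rule of thumb above can be sketched as a simple classifier (the tier names and cut-offs simply mirror the prose; they are not part of Warp's output):

```python
def latency_tier(avg_ms: float) -> str:
    """Rough interpretation of average latency, per the guidance above."""
    if avg_ms < 10:
        return "excellent"    # single-digit milliseconds
    if avg_ms < 1000:
        return "acceptable"   # sub-second
    return "investigate"      # multi-second: possible overload

print(latency_tier(18.2))  # the sample average above -> "acceptable"
```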

Percentiles

Percentile metrics show performance consistency across all operations. The median (P50) shows typical performance. Higher percentiles (P90, P99) show worst-case performance that affects some operations.

P50: 13 ms | P90: 20 ms | P99: 40 ms

Small gaps between percentiles indicate consistent performance. Large gaps indicate inconsistent performance with occasional slow operations. The P99 metric is useful for testing performance of latency-sensitive applications where slow outliers impact user experience.
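To make the percentile idea concrete, here is a nearest-rank computation over a list of latency samples (the sample list and helper name are illustrative, not Warp internals):

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest sample covering p% of the data."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

latencies_ms = [12, 13, 13, 14, 15, 16, 20, 21, 38, 95]
print(percentile(latencies_ms, 50))  # 15: typical operation
print(percentile(latencies_ms, 90))  # 38: slowest 10% start here
print(percentile(latencies_ms, 99))  # 95: the worst outlier
```

Note how one slow outlier (95 ms) barely moves the median but dominates P99, which is exactly why P99 matters for latency-sensitive applications.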

See the Latency percentiles section for exact percentile definitions and examples.

Interpret results

Good performance depends on your specific requirements and workload. Compare test results against your performance goals.

  • Check that throughput meets your data transfer requirements.
  • Verify that median latency provides acceptable application responsiveness.
  • Ensure P99 latency remains within acceptable bounds for your use case.
  • Look for consistent performance across test runs.
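The checklist above can be turned into a small pass/fail helper (the default threshold values are hypothetical goals; replace them with your own requirements):

```python
def meets_goals(throughput_mib_s: float, p50_ms: float, p99_ms: float,
                min_throughput: float = 400.0,  # hypothetical target, MiB/s
                max_p50: float = 15.0,          # hypothetical bound, ms
                max_p99: float = 50.0) -> bool: # hypothetical bound, ms
    """Compare a Warp run against target throughput and latency bounds."""
    return (throughput_mib_s >= min_throughput
            and p50_ms <= max_p50
            and p99_ms <= max_p99)

# The sample run above (450.23 MiB/s, P50 13 ms, P99 40 ms) passes:
print(meets_goals(450.23, 13, 40))  # True
```

Checking consistency across test runs is a separate step: run the same benchmark several times and confirm the reported metrics stay within a tolerance you consider acceptable.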

If results fall short, first verify that all issues reported by the [mc support diag](https://docs.min.io/enterprise/aistor-object-store/reference/cli/mc-support/mc-support-diag/?jmp=warp-docs) command are resolved. If those issues are resolved and results still fall short, consider adjusting concurrency levels or upgrading hardware.

Refer to the Advanced Features page for additional configuration options.

Also consult the First-byte latency section to understand TTFB behavior and its impact on throughput measurements.