This repository provides two Python scripts for processing and comparing Divan benchmark results. They are designed to detect performance regressions and generate human-readable reports.
## parse_divan.py

Converts Divan's tree-table benchmark output to JSON format for further processing.

```sh
# From a file
./parse_divan.py bench_output.txt -o results.json

# From stdin
cargo bench 2>&1 | ./parse_divan.py - -o results.json
```

The output JSON contains an array of benchmark results, each with `name`, `fastest`, `slowest`, `median`, `mean`, `samples`, and `iters` fields.
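As an illustration of consuming that output, the sketch below loads a results array and prints one metric per benchmark. The record shown is hypothetical; only the field names come from the description above.

```python
import json

# Hypothetical results file contents; field names match the parser's
# documented output, the values are made up for illustration.
raw = """
[
  {"name": "fibonacci_20", "fastest": 11.2, "slowest": 15.8,
   "median": 12.1, "mean": 12.4, "samples": 100, "iters": 6400}
]
"""

# Load the array and report the mean time for each benchmark.
results = json.loads(raw)
for bench in results:
    print(f"{bench['name']}: mean {bench['mean']}")
# → fibonacci_20: mean 12.4
```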
## compare_divan.py

Compares two JSON benchmark files (base vs. PR) and generates a Markdown report with performance-change indicators.

```sh
./compare_divan.py base.json pr.json --title "Benchmark Results" -o report.md
```

Options:

- `--metric`: Metric to compare (`fastest`, `slowest`, `median`, `mean`; default: `mean`)
- `--improvement-threshold`: Threshold for improvement detection (default: 1%)
- `--warn-threshold`: Threshold for the warning indicator (default: 5.0%)
- `--error-threshold`: Threshold for the regression indicator (default: 10.0%)
- `--subtitle`: Optional subtitle displayed below the title
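The thresholds above imply a classification of each benchmark's percentage change. A minimal sketch of that logic, assuming positive change means the PR is slower (the function name and exact categories are illustrative, not the script's actual implementation):

```python
def classify(base: float, pr: float,
             improvement: float = 1.0,
             warn: float = 5.0,
             error: float = 10.0) -> str:
    """Classify a benchmark change using the default thresholds.

    `change` is the percentage delta of the PR time relative to the
    base time; positive means the PR is slower.
    """
    change = (pr - base) / base * 100.0
    if change <= -improvement:
        return "improvement"
    if change >= error:
        return "regression"
    if change >= warn:
        return "warning"
    return "neutral"

print(classify(100.0, 95.0))   # 5% faster → "improvement"
print(classify(100.0, 107.0))  # 7% slower → "warning"
print(classify(100.0, 115.0))  # 15% slower → "regression"
```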