Description
The project lists pytest-profiling in its dev dependencies but has no dedicated benchmarks. For a library that processes network configurations, which can run to thousands of lines on large devices, baseline performance data would be valuable.
Areas where performance characteristics are unknown or potentially interesting:
- Parsing large configs (10,000+ lines) via `_load_from_string_lines()` vs `get_hconfig_fast_load()`
- `config_to_get_to()` on configs with many differences
- `all_children_sorted()` on deeply nested hierarchies (repeated `sorted()` calls)
- `HConfigChildren.rebuild_mapping()` cost on frequent deletions
- Memory usage for large config trees (benefit of `__slots__`)
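The `__slots__` question in particular is easy to measure without any benchmark framework. The sketch below uses stdlib `tracemalloc` to compare peak allocation for dict-backed vs slotted node classes; `NodeDict` and `NodeSlots` are hypothetical stand-ins for illustration only, not hier_config's actual classes.

```python
import tracemalloc


# Hypothetical node classes used only to illustrate the measurement
# approach; they are not hier_config's real HConfigChild.
class NodeDict:
    def __init__(self, text: str) -> None:
        self.text = text
        self.children: list = []


class NodeSlots:
    __slots__ = ("text", "children")

    def __init__(self, text: str) -> None:
        self.text = text
        self.children: list = []


def peak_allocation(cls, n: int = 50_000) -> int:
    """Return peak bytes allocated while building n nodes of cls."""
    tracemalloc.start()
    nodes = [cls(f"interface GigabitEthernet0/{i}") for i in range(n)]
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    del nodes
    return peak


if __name__ == "__main__":
    print(f"dict-based:  {peak_allocation(NodeDict) / 1e6:.1f} MB")
    print(f"slots-based: {peak_allocation(NodeSlots) / 1e6:.1f} MB")
```

Running the same harness against real config trees (with and without `__slots__` on the node classes) would turn the "benefit of `__slots__`" bullet into a concrete number.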
Proposed Improvement
- Add a `benchmarks/` directory or pytest benchmark fixtures
- Create representative large config samples (or generators)
- Measure parsing, diffing, and iteration performance
- Document expected performance characteristics in docs
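A config generator plus a timing harness could look like the sketch below. It is self-contained: `generate_config` emits a synthetic IOS-style config, and `naive_parse` is a placeholder workload (an indentation walk) that a real benchmark would replace with hier_config's actual loaders; both names are hypothetical.

```python
import timeit


def generate_config(interface_count: int = 2500) -> str:
    """Synthetic IOS-style config, four lines per interface."""
    lines: list[str] = []
    for i in range(interface_count):
        lines.append(f"interface GigabitEthernet0/{i}")
        lines.append(f"  description uplink-{i}")
        lines.append(f"  ip address 10.{i // 256}.{i % 256}.1 255.255.255.0")
        lines.append("  no shutdown")
    return "\n".join(lines)


def naive_parse(text: str) -> int:
    """Placeholder workload: count indentation-depth transitions.

    A real benchmark would call hier_config's loaders here instead.
    """
    depth_changes = 0
    prev_indent = 0
    for line in text.splitlines():
        indent = len(line) - len(line.lstrip())
        if indent != prev_indent:
            depth_changes += 1
            prev_indent = indent
    return depth_changes


if __name__ == "__main__":
    config = generate_config()  # 10,000 lines at the default count
    seconds = timeit.timeit(lambda: naive_parse(config), number=10) / 10
    print(f"{len(config.splitlines())} lines in {seconds * 1000:.2f} ms")
```

With pytest-benchmark, the same generator would feed a `benchmark(parse_fn, config)` call inside a test, giving tracked, comparable numbers per run instead of a one-off `timeit` figure.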