From 1cab6af4d2530eea0370c5569e48ea02f8522429 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Mon, 4 May 2026 22:17:45 -0400 Subject: [PATCH 01/47] Improve report layout and backup coverage --- .gitignore | 1 + README.md | 34 +- ROADMAP.md | 37 +- meraki_backup.py | 190 +- ollama_review.py | 62 +- reporting/app.py | 2051 ++++++++++++++--- reporting/common.py | 29 +- reporting/html_shell.py | 305 ++- .../reference/meraki_hardware_catalog.json | 179 ++ reporting/reference/pricing_reference.json | 340 +++ reporting/sections.py | 556 ++++- reporting/topology.py | 87 +- run.sh | 84 +- tests/test_backup.py | 45 + tests/test_pipeline.py | 136 +- tests/test_report.py | 781 +++++++ tests/test_topology.py | 84 + 17 files changed, 4501 insertions(+), 500 deletions(-) create mode 100644 reporting/reference/meraki_hardware_catalog.json create mode 100644 reporting/reference/pricing_reference.json create mode 100644 tests/test_topology.py diff --git a/.gitignore b/.gitignore index 4f46f58..c14f4fd 100644 --- a/.gitignore +++ b/.gitignore @@ -75,6 +75,7 @@ ipython_config.py __pypackages__/ backups/ +reports/ meraki_backup_*/ meraki_backup_sample_*/ */report.pdf diff --git a/README.md b/README.md index 40e51f1..95793dd 100644 --- a/README.md +++ b/README.md @@ -51,6 +51,7 @@ Generate a demo report from sanitized fixtures without Meraki API access: ```bash ./run.sh --demo-report --no-open +./run.sh --demo-report --fixed-now 2026-05-02T21:30:00 --no-open ``` Optional — specify a local Ollama model for AI-enhanced recommendations: @@ -68,15 +69,21 @@ ollama pull gemma4:e2b ## Output -All output is written to `backups//` (gitignored): +`./run.sh` keeps raw Meraki backup data in `backups//` and writes generated +shareable reports to `reports/` (both gitignored): - `recommendations.md` — per-org findings and recommendations -- `SITE_NAME_Complete_Report_YYYY-MM-DD.html` / `.pdf` — named full report for sharing -- `SITE_NAME_Executive_Summary_Report_YYYY-MM-DD.html` / `.pdf` — named 
executive summary -- `SITE_NAME_Backup_Settings_Report_YYYY-MM-DD.html` / `.pdf` — named backup settings report -- `report.html` / `report.pdf` — compatibility aliases for older scripts - `backups/master_recommendations.md` — combined across all orgs - `backups/recommendations_ai_enhanced.md` — LLM-reviewed version +- `reports///SITE_NAME_Complete_Report_YYYY-MM-DD.pdf` — run-specific full report +- `reports///SITE_NAME_Executive_Summary_Report_YYYY-MM-DD.pdf` — run-specific executive summary +- `reports///SITE_NAME_Backup_Settings_Report_YYYY-MM-DD.pdf` — run-specific backup settings report +- `reports/latest//report.pdf` — compatibility alias for the latest full report + +By default `run.sh` passes `--pdf-only`, so generated HTML is removed after PDFs +are rendered. Use `./run.sh --keep-html` when HTML inspection is useful. +Direct `python3 -m reporting` remains backward-compatible and writes reports into +each `backups//` directory unless `--reports-dir` or `--output-dir` is used. ## Optional Pricing Input @@ -84,6 +91,12 @@ To enable the Hardware Cost & Refresh Plan section, create a `pricing.json` at t or within a specific org backup directory. See `pricing.json.example` for the expected shape. Set `unit_cost` and optional `replacement_cycle_years` per model. +The UniFi migration section also reads `reporting/reference/pricing_reference.json`, which +contains maintained public UniFi planning prices, product source URLs, UI Care add-ons, and +Meraki-to-UniFi model-family mappings. Use an org-local `pricing.json` whenever reseller, +E-rate, Meraki, support, optics, or professional-services pricing needs to override the +public planning reference. 
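+
+A minimal org-local override might look like the sketch below (model names and
+prices are illustrative placeholders, not real pricing; `pricing.json.example`
+remains the authoritative shape, keyed per model with `unit_cost` and optional
+`replacement_cycle_years`):
+
+```json
+{
+  "MS120-24P": { "unit_cost": 1800, "replacement_cycle_years": 7 },
+  "MR36": { "unit_cost": 650 }
+}
+```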
+ ## Requirements Install dependencies: @@ -118,10 +131,21 @@ Run the script entrypoint against existing backups: ```bash python3 -m reporting +python3 -m reporting --reports-dir reports --pdf-only python3 -m reporting --source-dir tests/fixtures --org-name "Fixture Demo Org" --output-dir backups/.demo/Fixture_Demo_Org ./run.sh --report-only --no-ai-review --no-open ``` +Generate deterministic fixture output for regression checks: + +```bash +./run.sh --demo-report --fixed-now 2026-05-02T21:30:00 --no-open +python3 -m reporting --source-dir tests/fixtures --org-name "Fixture Demo Org" --output-dir backups/.demo/Fixture_Demo_Org --fixed-now 2026-05-02T21:30:00 +``` + +The same fixed clock can be set for compatible report-generation paths with +`MERAKI_REPORT_FIXED_NOW=2026-05-02T21:30:00`. + Run tests: ```bash diff --git a/ROADMAP.md b/ROADMAP.md index c9130e8..173a210 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -6,14 +6,17 @@ This project is currently functional as a Python reporting pipeline. The immedia - `./run.sh` is the main pipeline runner. - Python dependencies install cleanly into `.venv`. -- Tests pass locally: `80 passed`. +- Tests pass locally: `115 passed`. - Report-only generation works from existing `backups/`. +- `run.sh` now separates generated report deliverables into `reports/` while leaving raw backup data in `backups/`. - `.env` is gitignored and should remain local because it may contain `MERAKI_API_KEY`. - Clean-history repository is published at `https://github.com/techmore/TM-Meraki_Baseline_Reporter.git`. - `legacy/` contains historical scripts that should not be run in production. - `docs/cis-meraki-reference.md` preserves the useful upstream CIS mapping as reference material. - Generated reports now include named aliases like `SITE_NAME_Complete_Report_YYYY-MM-DD.pdf`. - Ollama review unloads the active model after each generation pass to reduce idle RAM usage. 
+- Deterministic report generation is available with `./run.sh --fixed-now ...`, + `python -m reporting --fixed-now ...`, or `MERAKI_REPORT_FIXED_NOW`. ## Phase 1: Stabilize The Existing Python App - Complete @@ -56,11 +59,32 @@ This project is currently functional as a Python reporting pipeline. The immedia - full API collection - report-only from existing backups - fixture/demo report generation -- Improve AI review controls: - - default low-RAM model - - explicit model override - - no-AI mode for deterministic runs -- Keep report rendering deterministic enough that tests can catch regressions. +- ~~Improve AI review controls:~~ + - ~~default low-RAM model~~ + - ~~explicit model override~~ + - ~~no-AI mode for deterministic runs~~ +- ~~Keep report rendering deterministic enough that tests can catch regressions.~~ +- ~~Increase table-of-contents density and make TOC titles link to report sections.~~ +- ~~Add report page furniture:~~ + - ~~header with `TM Meraki Baseline`~~ + - ~~page `current / total` footer~~ + - ~~release number based on the report release date~~ + - ~~end-of-report page~~ +- ~~Fix switch port issue classification so disconnected/unused ports are not reported as issues.~~ +- ~~Improve switch identification in issue tables by showing switch labels alongside serial numbers.~~ +- ~~Investigate why Client Analysis is blank for current backups and add fallback rendering from `clients_overview.json`.~~ +- ~~Investigate blank Switch Deep Dive sections and improve fallback messaging when port telemetry is missing.~~ +- ~~Increase switch deep-dive table density so the wide port table fits PDF pages.~~ +- ~~Add firmware status/current-vs-available rendering from Meraki firmware upgrade data.~~ +- ~~Highlight EOL/EOS inventory: red when end of support is within 2 years, yellow when announced farther out.~~ +- ~~Further compress switch deep-dive table font, padding, and badge density for PDF fit.~~ +- ~~Replace heuristic UniFi comparison pricing with 
maintained JSON-backed pricing/equivalent references for Meraki and UniFi.~~ +- ~~Add Meraki hardware capability data, including PoE budgets, from a maintained JSON reference instead of estimates.~~ +- ~~Review the proposed K-12 VLAN structure and add it as a supplemental/reference section if it fits the report audience.~~ +- ~~Clean up completed-report quality issues: suppress benign mesh 404s, collapse disabled default SSIDs, remove empty AP model cells, fix 100 Gbps speed labeling, filter disconnected deep-dive port badges, and avoid false "no significant issues" messages.~~ +- ~~Replace unreliable wireless-only client collection with network-wide client collection and report wired/wireless client detail coverage.~~ +- ~~Separate generated report deliverables into `reports/` and keep `backups/` focused on raw collection data.~~ +- ~~Add PDF-only output mode so routine runs do not retain generated HTML unless requested.~~ ## Phase 5: Optional Interfaces @@ -75,6 +99,7 @@ This project is currently functional as a Python reporting pipeline. The immedia - Run `./run.sh --report-only --no-ai-review --no-open`. - Check `git status --short`. - Confirm `.env` and `backups/` are not staged. +- Confirm `reports/` is not staged unless a sanitized sample is intentionally added. - Confirm generated or customer-specific report files are not staged unless sanitized. - Commit the surgical changes. - Push to `https://github.com/techmore/TM-Meraki_Baseline_Reporter.git` after verification. 
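+
+One way to script the recurring pre-push checks above (a sketch; the `grep`
+pattern is illustrative and can be tightened per repository):
+
+```bash
+./run.sh --report-only --no-ai-review --no-open
+git status --short
+# Staged .env, backups/, or reports/ paths indicate something to unstage first.
+git diff --cached --name-only | grep -E '^(\.env$|backups/|reports/)' || echo "no sensitive paths staged"
+```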
diff --git a/meraki_backup.py b/meraki_backup.py index 27cb518..a6df20c 100755 --- a/meraki_backup.py +++ b/meraki_backup.py @@ -215,6 +215,18 @@ def recommend_switch_ports( port_map[pid] = cfg configs_by_serial_port[serial] = port_map + def _meaningful_port_messages(messages: List[str], is_uplink: bool) -> List[str]: + benign_fragments = ("disconnected", "not connected", "no link", "link down", "down") + result = [] + for message in messages: + text = str(message or "").strip() + if not text: + continue + if not is_uplink and any(fragment in text.lower() for fragment in benign_fragments): + continue + result.append(text) + return result + for serial, ports in port_statuses.items(): cfg_map = configs_by_serial_port.get(serial, {}) for p in ports: @@ -249,7 +261,7 @@ def recommend_switch_ports( "issue": "Uplink disconnected", "detail": "Disconnected", }) - errors = [e for e in (p.get("errors") or []) if not (e in ("Port disconnected", "Port disabled") and not p.get("isUplink"))] + errors = _meaningful_port_messages(p.get("errors") or [], bool(p.get("isUplink"))) if errors: findings.append({ "serial": serial, @@ -257,7 +269,7 @@ def recommend_switch_ports( "issue": "Port errors", "detail": ", ".join(errors), }) - warnings = p.get("warnings") or [] + warnings = _meaningful_port_messages(p.get("warnings") or [], bool(p.get("isUplink"))) if warnings: findings.append({ "serial": serial, @@ -415,6 +427,8 @@ def summarize_ap_clients(clients_by_network: Dict[str, Any]) -> Dict[str, Any]: if not isinstance(data, list): continue for c in data: + if c.get("recentDeviceConnection") not in (None, "Wireless"): + continue serial = c.get("recentDeviceSerial") or c.get("recentDeviceSerialNumber") if not serial: continue @@ -872,7 +886,7 @@ def build_recommendations( ap_clients = ap_client_summary.get("ap_client_counts") or [] if ap_clients: lines.append("## Wireless Client Load") - lines.append("- Top APs by client count (last 1 hour). 
Investigate if sustained high load.") + lines.append("- Top APs by wireless client count (last 24 hours). Investigate if sustained high load.") for serial, count in ap_clients[:10]: lines.append(f"- AP {serial}: {count} clients") lines.append("") @@ -1127,11 +1141,20 @@ def _cached_safe_get(filename: str, path_suffix: str, label: str, params=None) - clients_overview = {} wireless_rf_profiles = {} wireless_settings = {} + network_clients = {} wireless_clients = {} wireless_ssids = {} alerts_history = {} appliance_baseline = {} appliance_uplinks_usage = {} + appliance_vlans = {} + appliance_dhcp_subnets = {} + appliance_policy_backup = {} + appliances_by_network: Dict[str, List[Dict[str, Any]]] = {} + for appliance in devices_by_type.get("appliance", []): + net_id_for_appliance = appliance.get("networkId") + if net_id_for_appliance: + appliances_by_network.setdefault(net_id_for_appliance, []).append(appliance) if networks: log_line(log_f, "INFO", f"Collecting network-level telemetry for {len(networks)} network(s) in {org_name}") for idx, net in enumerate(networks, start=1): @@ -1190,16 +1213,19 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ "Wireless settings unavailable", capability_aware=True, ) - wireless_clients[net_id] = _load_or_fetch_net( - "wireless_clients.json", + network_clients[net_id] = _load_or_fetch_net( + "network_clients.json", lambda: safe_paged_get( - f"/networks/{net_id}/wireless/clients", + f"/networks/{net_id}/clients", api_key, - params={"timespan": TIMESPAN_1H}, + params={"timespan": TIMESPAN_24H}, ), - "Wireless clients unavailable", - capability_aware=True, + "Network clients failed", ) + wireless_clients[net_id] = [ + c for c in network_clients.get(net_id, []) + if isinstance(c, dict) and c.get("recentDeviceConnection") == "Wireless" + ] if isinstance(network_clients.get(net_id), list) else network_clients.get(net_id, {}) wireless_ssids[net_id] = _load_or_fetch_net( "wireless_ssids.json", lambda: 
safe_paged_get(f"/networks/{net_id}/wireless/ssids", api_key), @@ -1219,6 +1245,13 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ if "appliance" in (net.get("productTypes") or []): net_baseline: Dict[str, Any] = {} + policy_backup: Dict[str, Any] = {} + appliance_vlans[net_id] = _load_or_fetch_net( + "appliance_vlans.json", + lambda: safe_paged_get(f"/networks/{net_id}/appliance/vlans", api_key), + "Appliance VLANs unavailable", + capability_aware=True, + ) appliance_uplinks_usage[net_id] = _load_or_fetch_net( "appliance_uplinks_usage.json", lambda: safe_get_one( @@ -1254,18 +1287,155 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ "Appliance port forwarding unavailable", capability_aware=True, ) + policy_backup["portForwardingRules"] = net_baseline["portForwardingRules"] + policy_endpoints: List[Tuple[str, str, Callable[[], Tuple[Any, Optional[str]]], str]] = [ + ( + "l3FirewallRules", + "appliance_firewall_l3_rules.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/firewall/l3FirewallRules", api_key + ), + "Appliance L3 firewall rules unavailable", + ), + ( + "l7FirewallRules", + "appliance_firewall_l7_rules.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/firewall/l7FirewallRules", api_key + ), + "Appliance L7 firewall rules unavailable", + ), + ( + "inboundFirewallRules", + "appliance_firewall_inbound_rules.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/firewall/inboundFirewallRules", api_key + ), + "Appliance inbound firewall rules unavailable", + ), + ( + "cellularFirewallRules", + "appliance_firewall_cellular_rules.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/firewall/cellularFirewallRules", api_key + ), + "Appliance cellular firewall rules unavailable", + ), + ( + "inboundCellularFirewallRules", + "appliance_firewall_inbound_cellular_rules.json", + lambda 
net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/firewall/inboundCellularFirewallRules", api_key + ), + "Appliance inbound cellular firewall rules unavailable", + ), + ( + "oneToOneNatRules", + "appliance_one_to_one_nat_rules.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/firewall/oneToOneNatRules", api_key + ), + "Appliance 1:1 NAT rules unavailable", + ), + ( + "oneToManyNatRules", + "appliance_one_to_many_nat_rules.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/firewall/oneToManyNatRules", api_key + ), + "Appliance 1:Many NAT rules unavailable", + ), + ( + "firewalledServices", + "appliance_firewalled_services.json", + lambda net_id=net_id: safe_paged_get( + f"/networks/{net_id}/appliance/firewall/firewalledServices", api_key + ), + "Appliance firewalled services unavailable", + ), + ( + "contentFiltering", + "appliance_content_filtering.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/contentFiltering", api_key + ), + "Appliance content filtering unavailable", + ), + ( + "trafficShapingRules", + "appliance_traffic_shaping_rules.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/trafficShaping/rules", api_key + ), + "Appliance traffic shaping rules unavailable", + ), + ( + "siteToSiteVpn", + "appliance_site_to_site_vpn.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/appliance/vpn/siteToSiteVpn", api_key + ), + "Appliance site-to-site VPN unavailable", + ), + ( + "groupPolicies", + "network_group_policies.json", + lambda net_id=net_id: safe_paged_get( + f"/networks/{net_id}/groupPolicies", api_key + ), + "Network group policies unavailable", + ), + ( + "syslogServers", + "network_syslog_servers.json", + lambda net_id=net_id: safe_get_one( + f"/networks/{net_id}/syslogServers", api_key + ), + "Network syslog servers unavailable", + ), + ] + for key, filename, fetcher, warn_label in policy_endpoints: + 
policy_backup[key] = _load_or_fetch_net( + filename, + fetcher, + warn_label, + capability_aware=True, + ) appliance_baseline[net_id] = net_baseline + appliance_policy_backup[net_id] = policy_backup + _write_granular_json(org_dir, "networks", net_id, "appliance_policy_backup.json", policy_backup) + for appliance in appliances_by_network.get(net_id, []): + serial = appliance.get("serial") + if not serial: + continue + if _granular_cache_fresh(org_dir, "appliances", serial, "dhcp_subnets.json", max_age_h, force): + dhcp_subnets = _read_granular_json(org_dir, "appliances", serial, "dhcp_subnets.json") + else: + dhcp_subnets, dhcp_err = safe_paged_get( + f"/devices/{serial}/appliance/dhcp/subnets", + api_key, + ) + dhcp_subnets = dhcp_subnets if not dhcp_err else {"error": dhcp_err} + _write_granular_json(org_dir, "appliances", serial, "dhcp_subnets.json", dhcp_subnets) + if dhcp_err: + level = "INFO" if is_capability_error(dhcp_err) else "WARN" + log_line(log_f, level, f"Appliance DHCP subnets unavailable for {serial}: {dhcp_err}") + appliance_dhcp_subnets[serial] = dhcp_subnets write_json(_pf("wireless_connection_stats.json"), wireless_connection_stats) write_json(_pf("wireless_mesh_statuses.json"), wireless_mesh_statuses) write_json(_pf("clients_overview.json"), clients_overview) write_json(_pf("wireless_rf_profiles.json"), wireless_rf_profiles) write_json(_pf("wireless_settings.json"), wireless_settings) + write_json(_pf("network_clients.json"), network_clients) write_json(_pf("wireless_clients.json"), wireless_clients) write_json(_pf("wireless_ssids.json"), wireless_ssids) write_json(_pf("alerts_history.json"), alerts_history) write_json(_pf("appliance_uplinks_usage.json"), appliance_uplinks_usage) + write_json(_pf("appliance_vlans.json"), appliance_vlans) + write_json(_pf("appliance_dhcp_subnets.json"), appliance_dhcp_subnets) + write_json(_pf("appliance_policy_backup.json"), appliance_policy_backup) write_json(_pf("inventory_summary.json"), inventory_summary) 
_sb_path = _pf("security_baseline.json") if force or not _cache_is_fresh(_sb_path, max_age_h=max_age_h, force=False): @@ -1278,7 +1448,7 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ # Recommendations wireless_summary = summarize_wireless_connection_stats(wireless_connection_stats) rf_summary = summarize_rf_profiles(wireless_rf_profiles) - ap_client_summary = summarize_ap_clients(wireless_clients) + ap_client_summary = summarize_ap_clients(network_clients) switch_findings = recommend_switch_ports(port_statuses, port_configs) poe_summary = summarize_poe_power(port_statuses, TIMESPAN_24H) _ch_path = _pf("channel_utilization_by_device.json") diff --git a/ollama_review.py b/ollama_review.py index a24caeb..4259e6c 100644 --- a/ollama_review.py +++ b/ollama_review.py @@ -6,6 +6,7 @@ Exits 0 (non-fatal) if Ollama is unavailable so the pipeline continues. Output: /recommendations_ai_enhanced.md """ +import argparse import json import logging import os @@ -34,10 +35,56 @@ # or: ./run.sh --model qwen3.5:9b _DEFAULT_MODEL = "gemma4:e2b" MODEL = os.getenv("OLLAMA_MODEL", _DEFAULT_MODEL) +CONFIG_ERRORS: list[str] = [] + + +def _env_int(name: str, default: int) -> int: + raw = os.getenv(name) + if raw in (None, ""): + return default + try: + value = int(raw) + except ValueError: + CONFIG_ERRORS.append(f"{name} must be an integer") + return default + if value <= 0: + CONFIG_ERRORS.append(f"{name} must be greater than zero") + return default + return value + # Keep chunks conservative so small local models have room for the prompt # and generated review while still preserving section boundaries. 
-MAX_INPUT_CHARS = 50_000 +MAX_INPUT_CHARS = _env_int("OLLAMA_MAX_INPUT_CHARS", 50_000) + + +def configure_ai(model: str | None = None, max_input_chars: int | None = None) -> None: + """Apply runtime AI review options.""" + global MODEL, MAX_INPUT_CHARS + if model: + MODEL = model + if max_input_chars is not None: + if max_input_chars <= 0: + raise ValueError("max_input_chars must be greater than zero") + MAX_INPUT_CHARS = max_input_chars + + +def parse_args(argv: list[str] | None = None) -> argparse.Namespace: + parser = argparse.ArgumentParser( + description="Review merged Meraki recommendations with a local Ollama model.", + ) + parser.add_argument( + "-m", + "--model", + help=f"Ollama model to use. Default: {MODEL}", + ) + parser.add_argument( + "--max-input-chars", + type=int, + default=None, + help=f"Maximum characters per review chunk. Default: {MAX_INPUT_CHARS}", + ) + return parser.parse_args(argv) SYSTEM_PROMPT = """\ You are a senior network engineer with deep expertise in Cisco Meraki enterprise deployments, \ @@ -277,7 +324,16 @@ def review_content(content: str) -> str: ) -def main() -> int: +def main(argv: list[str] | None = None) -> int: + try: + args = parse_args([] if argv is None else argv) + if CONFIG_ERRORS and args.max_input_chars is None: + raise ValueError(CONFIG_ERRORS[0]) + configure_ai(model=args.model, max_input_chars=args.max_input_chars) + except ValueError as exc: + print(f"ollama_review.py: error: {exc}", file=sys.stderr) + return 2 + master_rec = os.path.join(BACKUPS_DIR, "master_recommendations.md") if not os.path.exists(master_rec): log.warning("master_recommendations.md not found at %s", master_rec) @@ -320,4 +376,4 @@ def main() -> int: if __name__ == "__main__": - raise SystemExit(main()) + raise SystemExit(main(sys.argv[1:])) diff --git a/reporting/app.py b/reporting/app.py index 30f936b..d7ae5a9 100644 --- a/reporting/app.py +++ b/reporting/app.py @@ -1,6 +1,7 @@ #!/usr/bin/env python3 import argparse import logging +import 
math import os import re import shutil @@ -10,8 +11,10 @@ from .common import ( BACKUPS_DIR, REPORT_VERSION, + _format_usage_kb, _he, _hardware_consistency_note, + _is_sfp_like_port, _model_capability_summary, build_fallback_security_checks, check_backup_schema, @@ -25,15 +28,31 @@ from .topology import _topo_pages, _topo_summary_rows, _topo_svg from .sections import ( _build_ap_interference_section, + _build_addressing_dhcp_section, + _build_appliance_policy_section, _build_budget_forecast_section, _build_config_coverage_section, _build_switch_detail_section, _build_wan_capacity_section, + _is_low_speed_link, + _model_cell, ) from .html_shell import build_html, write_pdf log = logging.getLogger(__name__) BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) +FIXED_NOW_ENV = "MERAKI_REPORT_FIXED_NOW" +REPORTS_DIR = os.path.join(BASE_DIR, "reports") +HARDWARE_CATALOG_PATH = os.path.join( + os.path.dirname(os.path.abspath(__file__)), + "reference", + "meraki_hardware_catalog.json", +) +PRICING_REFERENCE_PATH = os.path.join( + os.path.dirname(os.path.abspath(__file__)), + "reference", + "pricing_reference.json", +) def _report_slug(name: str) -> str: @@ -46,6 +65,41 @@ def _dated_report_name(org_name: str, label: str, run_ts: datetime, ext: str) -> return f"{_report_slug(org_name)}_{label}_Report_{date_stamp}.{ext}" +def _current_run_ts() -> datetime: + fixed_now = os.getenv(FIXED_NOW_ENV) + if fixed_now: + try: + return datetime.fromisoformat(fixed_now.replace("Z", "+00:00")) + except ValueError: + log.warning("Ignoring invalid %s value: %s", FIXED_NOW_ENV, fixed_now) + return datetime.now() + + +def _validate_fixed_now(value: str) -> str: + try: + datetime.fromisoformat(value.replace("Z", "+00:00")) + except ValueError: + raise argparse.ArgumentTypeError("must be an ISO timestamp, e.g. 
2026-05-02T21:30:00") + return value + + +def _load_hardware_catalog(org_dir: str) -> Dict[str, Any]: + return ( + load_json(os.path.join(org_dir, "meraki_hardware_catalog.json")) + or load_json(HARDWARE_CATALOG_PATH) + or {} + ) + + +def _load_pricing_payload(org_dir: str) -> Dict[str, Any]: + return ( + load_json(os.path.join(org_dir, "pricing.json")) + or load_json(os.path.join(BASE_DIR, "pricing.json")) + or load_json(PRICING_REFERENCE_PATH) + or {} + ) + + def _read_org_name(org_dir: str) -> str: name_file = os.path.join(org_dir, "org_name.txt") if os.path.exists(name_file): @@ -63,18 +117,60 @@ def _read_org_name(org_dir: str) -> str: return org_name -def _write_text_aliases(html: str, paths: tuple[str, ...]) -> None: +def _write_text_aliases(html: str, paths: tuple[str | None, ...]) -> None: for path in paths: + if not path: + continue + os.makedirs(os.path.dirname(path), exist_ok=True) with open(path, "w", encoding="utf-8") as f: f.write(html) -def generate_org_reports(source_dir: str, org_name: str, output_dir: str | None = None) -> int: +def _copy_existing(src: str, destinations: tuple[str | None, ...]) -> None: + for dst in destinations: + if not dst or os.path.abspath(src) == os.path.abspath(dst): + continue + os.makedirs(os.path.dirname(dst), exist_ok=True) + shutil.copy2(src, dst) + + +def _cleanup_paths(paths: tuple[str, ...]) -> None: + for path in paths: + try: + if os.path.exists(path): + os.remove(path) + except OSError: + log.warning("Unable to remove generated HTML artifact: %s", path) + + +def _report_run_output_dir(reports_dir: str, org_name: str, run_ts: datetime) -> str: + return os.path.join( + reports_dir, + _report_slug(org_name), + run_ts.strftime("%Y-%m-%d_%H%M"), + ) + + +def _report_latest_output_dir(reports_dir: str, org_name: str) -> str: + return os.path.join(reports_dir, "latest", _report_slug(org_name)) + + +def generate_org_reports( + source_dir: str, + org_name: str, + output_dir: str | None = None, + *, + latest_dir: str | 
None = None, + keep_html: bool = True, + run_ts: datetime | None = None, +) -> int: + _run_ts = run_ts or _current_run_ts() output_dir = output_dir or source_dir os.makedirs(output_dir, exist_ok=True) + if latest_dir: + os.makedirs(latest_dir, exist_ok=True) log.info("Generating report for: %s", org_name) - _run_ts = datetime.now() _slug = _report_slug(org_name) _stamp = _run_ts.strftime("%Y-%m-%d_%H%M") @@ -86,14 +182,32 @@ def generate_org_reports(source_dir: str, org_name: str, output_dir: str | None named_pdf_alias = os.path.join(output_dir, _dated_report_name(org_name, "Complete", _run_ts, "pdf")) html_alias = os.path.join(output_dir, "report.html") pdf_alias = os.path.join(output_dir, "report.pdf") + if latest_dir: + html_path = named_html_alias + pdf_path = named_pdf_alias + html_alias = None + pdf_alias = None + latest_html_alias = os.path.join(latest_dir, _dated_report_name(org_name, "Complete", _run_ts, "html")) if latest_dir else None + latest_pdf_alias = os.path.join(latest_dir, _dated_report_name(org_name, "Complete", _run_ts, "pdf")) if latest_dir else None + latest_html_compat = os.path.join(latest_dir, "report.html") if latest_dir else None + latest_pdf_compat = os.path.join(latest_dir, "report.pdf") if latest_dir else None _write_text_aliases(html, (html_path, named_html_alias, html_alias)) - if write_pdf(html_path, pdf_path): - shutil.copy2(pdf_path, named_pdf_alias) - shutil.copy2(pdf_path, pdf_alias) + if latest_dir: + _write_text_aliases(html, (latest_html_alias, latest_html_compat)) + pdf_ok = write_pdf(html_path, pdf_path) + if pdf_ok: + _copy_existing(pdf_path, (named_pdf_alias, pdf_alias)) + if latest_dir: + _copy_existing(pdf_path, (latest_pdf_alias, latest_pdf_compat)) log.info("PDF → %s", named_pdf_alias) else: log.info("HTML → %s (no PDF tool found)", html_path) + if not keep_html and pdf_ok: + html_targets = [html_path, named_html_alias, html_alias] + if latest_dir: + html_targets.extend([latest_html_alias, latest_html_compat]) + 
_cleanup_paths(tuple(path for path in html_targets if path)) exec_body = build_org_report(source_dir, org_name, report_kind="exec") exec_html = build_html(f"{org_name} — Executive Summary", exec_body) @@ -103,13 +217,31 @@ def generate_org_reports(source_dir: str, org_name: str, output_dir: str | None exec_named_pdf_alias = os.path.join(output_dir, _dated_report_name(org_name, "Executive_Summary", _run_ts, "pdf")) exec_html_alias = os.path.join(output_dir, "report_exec_summary.html") exec_pdf_alias = os.path.join(output_dir, "report_exec_summary.pdf") + if latest_dir: + exec_html_path = exec_named_html_alias + exec_pdf_path = exec_named_pdf_alias + exec_html_alias = None + exec_pdf_alias = None + latest_exec_html_alias = os.path.join(latest_dir, _dated_report_name(org_name, "Executive_Summary", _run_ts, "html")) if latest_dir else None + latest_exec_pdf_alias = os.path.join(latest_dir, _dated_report_name(org_name, "Executive_Summary", _run_ts, "pdf")) if latest_dir else None + latest_exec_html_compat = os.path.join(latest_dir, "report_exec_summary.html") if latest_dir else None + latest_exec_pdf_compat = os.path.join(latest_dir, "report_exec_summary.pdf") if latest_dir else None _write_text_aliases(exec_html, (exec_html_path, exec_named_html_alias, exec_html_alias)) - if write_pdf(exec_html_path, exec_pdf_path): - shutil.copy2(exec_pdf_path, exec_named_pdf_alias) - shutil.copy2(exec_pdf_path, exec_pdf_alias) + if latest_dir: + _write_text_aliases(exec_html, (latest_exec_html_alias, latest_exec_html_compat)) + exec_pdf_ok = write_pdf(exec_html_path, exec_pdf_path) + if exec_pdf_ok: + _copy_existing(exec_pdf_path, (exec_named_pdf_alias, exec_pdf_alias)) + if latest_dir: + _copy_existing(exec_pdf_path, (latest_exec_pdf_alias, latest_exec_pdf_compat)) log.info("Exec Summary PDF → %s", exec_named_pdf_alias) else: log.info("Exec Summary HTML → %s (no PDF tool found)", exec_html_path) + if not keep_html and exec_pdf_ok: + html_targets = [exec_html_path, 
exec_named_html_alias, exec_html_alias] + if latest_dir: + html_targets.extend([latest_exec_html_alias, latest_exec_html_compat]) + _cleanup_paths(tuple(path for path in html_targets if path)) backup_body = build_org_report(source_dir, org_name, report_kind="backup") backup_html = build_html(f"{org_name} — Backup Settings Report", backup_body) @@ -119,13 +251,31 @@ def generate_org_reports(source_dir: str, org_name: str, output_dir: str | None backup_named_pdf_alias = os.path.join(output_dir, _dated_report_name(org_name, "Backup_Settings", _run_ts, "pdf")) backup_html_alias = os.path.join(output_dir, "report_backup_settings.html") backup_pdf_alias = os.path.join(output_dir, "report_backup_settings.pdf") + if latest_dir: + backup_html_path = backup_named_html_alias + backup_pdf_path = backup_named_pdf_alias + backup_html_alias = None + backup_pdf_alias = None + latest_backup_html_alias = os.path.join(latest_dir, _dated_report_name(org_name, "Backup_Settings", _run_ts, "html")) if latest_dir else None + latest_backup_pdf_alias = os.path.join(latest_dir, _dated_report_name(org_name, "Backup_Settings", _run_ts, "pdf")) if latest_dir else None + latest_backup_html_compat = os.path.join(latest_dir, "report_backup_settings.html") if latest_dir else None + latest_backup_pdf_compat = os.path.join(latest_dir, "report_backup_settings.pdf") if latest_dir else None _write_text_aliases(backup_html, (backup_html_path, backup_named_html_alias, backup_html_alias)) - if write_pdf(backup_html_path, backup_pdf_path): - shutil.copy2(backup_pdf_path, backup_named_pdf_alias) - shutil.copy2(backup_pdf_path, backup_pdf_alias) + if latest_dir: + _write_text_aliases(backup_html, (latest_backup_html_alias, latest_backup_html_compat)) + backup_pdf_ok = write_pdf(backup_html_path, backup_pdf_path) + if backup_pdf_ok: + _copy_existing(backup_pdf_path, (backup_named_pdf_alias, backup_pdf_alias)) + if latest_dir: + _copy_existing(backup_pdf_path, (latest_backup_pdf_alias, 
latest_backup_pdf_compat)) log.info("Backup Settings PDF → %s", backup_named_pdf_alias) else: log.info("Backup Settings HTML → %s (no PDF tool found)", backup_html_path) + if not keep_html and backup_pdf_ok: + html_targets = [backup_html_path, backup_named_html_alias, backup_html_alias] + if latest_dir: + html_targets.extend([latest_backup_html_alias, latest_backup_html_compat]) + _cleanup_paths(tuple(path for path in html_targets if path)) return 1 @@ -135,6 +285,7 @@ def build_org_report( exec_purpose: str = "", report_kind: str = "full", ) -> str: + _now = _current_run_ts() # ── Schema compatibility check ──────────────────────────────────────────── _schema_warnings = check_backup_schema(org_dir) _schema_banner = "" @@ -167,19 +318,25 @@ def build_org_report( wireless_stats = ( load_json(os.path.join(org_dir, "wireless_connection_stats.json")) or {} ) - # wireless_clients.json is {net_id: [client, …]} — flatten to a single list + # network_clients.json is {net_id: [client, …]} from GET /networks/{id}/clients. + # Older backups used wireless_clients.json from a now-unreliable wireless-only path. 
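Reviewer note: the dual-shape handling that the patch's `_flatten_client_records` implements can be exercised standalone. A minimal sketch — the helper body mirrors the patch, but the sample records are invented for illustration:

```python
from typing import Any, Dict, List

def flatten_client_records(raw: Any) -> List[Dict[str, Any]]:
    # network_clients.json is {net_id: [client, ...]}; older backups
    # stored a flat [client, ...] list. Accept both, drop non-dict noise.
    if isinstance(raw, dict):
        return [
            cl
            for clients in raw.values()
            if isinstance(clients, list)
            for cl in clients
            if isinstance(cl, dict)
        ]
    if isinstance(raw, list):
        return [cl for cl in raw if isinstance(cl, dict)]
    return []

nested = {"N_1": [{"mac": "aa:bb"}, "junk"], "N_2": [{"mac": "cc:dd"}]}
legacy = [{"mac": "aa:bb"}, None, {"mac": "cc:dd"}]
print(flatten_client_records(nested))  # [{'mac': 'aa:bb'}, {'mac': 'cc:dd'}]
print(flatten_client_records(legacy))  # [{'mac': 'aa:bb'}, {'mac': 'cc:dd'}]
```

Both shapes normalize to the same flat list, which is why the patch can fall back from `network_clients.json` to the legacy `wireless_clients.json` with one code path.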
+ def _flatten_client_records(raw: Any) -> List[Dict[str, Any]]: + if isinstance(raw, dict): + return [ + cl for clients in raw.values() + if isinstance(clients, list) + for cl in clients + if isinstance(cl, dict) + ] + if isinstance(raw, list): + return [cl for cl in raw if isinstance(cl, dict)] + return [] + + network_clients_raw = load_json(os.path.join(org_dir, "network_clients.json")) or {} _wc_raw = load_json(os.path.join(org_dir, "wireless_clients.json")) or {} - if isinstance(_wc_raw, dict): - wireless_clients = [ - cl for clients in _wc_raw.values() - if isinstance(clients, list) - for cl in clients - if isinstance(cl, dict) - ] - elif isinstance(_wc_raw, list): - wireless_clients = [cl for cl in _wc_raw if isinstance(cl, dict)] - else: - wireless_clients = [] + network_clients = _flatten_client_records(network_clients_raw) + wireless_clients = _flatten_client_records(_wc_raw) + client_records = network_clients or wireless_clients switch_port_statuses_by_switch = ( load_json(os.path.join(org_dir, "switch_port_statuses.json")) or {} ) @@ -198,11 +355,11 @@ def build_org_report( wireless_ssids = load_json(os.path.join(org_dir, "wireless_ssids.json")) or {} alerts_history = load_json(os.path.join(org_dir, "alerts_history.json")) or {} wireless_mesh_statuses = load_json(os.path.join(org_dir, "wireless_mesh_statuses.json")) or {} - pricing_payload = ( - load_json(os.path.join(org_dir, "pricing.json")) - or load_json(os.path.join(BASE_DIR, "pricing.json")) - or {} - ) + appliance_vlans = load_json(os.path.join(org_dir, "appliance_vlans.json")) or {} + appliance_dhcp_subnets = load_json(os.path.join(org_dir, "appliance_dhcp_subnets.json")) or {} + appliance_policy_backup = load_json(os.path.join(org_dir, "appliance_policy_backup.json")) or {} + pricing_payload = _load_pricing_payload(org_dir) + hardware_catalog = _load_hardware_catalog(org_dir) # switch_port_configs / statuses are {serial: [port, …]} dicts — flatten, # injecting switchSerial so downstream code 
can reference the parent switch. @@ -234,6 +391,85 @@ def _flatten_ports(path: str) -> List[Dict]: if isinstance(n, dict) and n.get("id") } + def _merge_device_metadata() -> List[Dict]: + """Availability records are status-first; enrich them with inventory labels/models.""" + metadata_by_serial: Dict[str, Dict] = {} + for source in (inventory_devices, devices_statuses_raw): + if not isinstance(source, list): + continue + for entry in source: + if not isinstance(entry, dict) or not entry.get("serial"): + continue + serial = entry["serial"] + merged = metadata_by_serial.setdefault(serial, {}) + for key in ( + "name", + "model", + "sku", + "mac", + "productType", + "networkId", + "tags", + "lanIp", + ): + if not merged.get(key) and entry.get(key): + merged[key] = entry[key] + + enriched: List[Dict] = [] + seen: set[str] = set() + for device in devices_avail if isinstance(devices_avail, list) else []: + if not isinstance(device, dict): + continue + serial = device.get("serial") + if serial: + seen.add(serial) + merged = dict(device) + for key, value in metadata_by_serial.get(serial, {}).items(): + if not merged.get(key) and value: + merged[key] = value + net_id = merged.get("networkId") or (merged.get("network") or {}).get("id") + if net_id and not merged.get("network"): + merged["network"] = { + "id": net_id, + "name": network_names.get(net_id, net_id), + } + enriched.append(merged) + + # Keep inventory-only devices visible instead of silently dropping them. 
+ for serial, meta in sorted(metadata_by_serial.items()): + if serial in seen: + continue + device = dict(meta) + device["serial"] = serial + device.setdefault("status", "unknown") + net_id = device.get("networkId") + if net_id and not device.get("network"): + device["network"] = { + "id": net_id, + "name": network_names.get(net_id, net_id), + } + enriched.append(device) + return enriched + + devices_avail = _merge_device_metadata() + device_by_serial = { + dev.get("serial"): dev + for dev in devices_avail + if isinstance(dev, dict) and dev.get("serial") + } + catalog_models = ( + hardware_catalog.get("models") + if isinstance(hardware_catalog, dict) and isinstance(hardware_catalog.get("models"), dict) + else {} + ) + + def _known_poe_budget(model: str) -> int | float | None: + if not model: + return None + ref = catalog_models.get(model) or {} + budget = ref.get("poeBudgetWatts") + return budget if isinstance(budget, (int, float)) else None + def _parse_dt(value: str) -> datetime | None: if not value: return None @@ -349,32 +585,49 @@ def _parse_dt(value: str) -> datetime | None: # Switch port issue analysis # Note: the Meraki API returns "errors" and "warnings" as lists of strings, not integers. 
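Worth calling out for reviewers: the port-issue loop below depends on `errors`/`warnings` being normalized to a list before the benign-fragment filter runs. A standalone sketch of that normalization, under the same API note — `as_error_list` is a hypothetical helper name and the samples are invented:

```python
def as_error_list(value) -> list[str]:
    # The Meraki API documents these as lists of strings, but older
    # backups occasionally hold a bare string or null -- be defensive.
    if value is None:
        return []
    if isinstance(value, str):
        return [value]
    if isinstance(value, list):
        return [str(v) for v in value if v]
    return []

print(as_error_list("PoE overload"))        # ['PoE overload']
print(as_error_list(["a", "", None, "b"]))  # ['a', 'b']
print(as_error_list(None))                  # []
```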
+ def _meaningful_port_errors(errors: list[str]) -> list[str]: + benign_fragments = ( + "disconnected", + "not connected", + "no link", + "link down", + "down", + ) + result = [] + for error in errors: + text = str(error or "").strip() + if not text: + continue + lowered = text.lower() + if any(fragment in lowered for fragment in benign_fragments): + continue + result.append(text) + return result + switch_port_issues = [] if isinstance(switch_port_statuses, list): for port in switch_port_statuses[:100]: port_errors = port.get("errors") or [] # always a list if isinstance(port_errors, str): port_errors = [port_errors] + port_errors = _meaningful_port_errors(port_errors) port_warnings = port.get("warnings") or [] if isinstance(port_warnings, str): port_warnings = [port_warnings] speed_raw = port.get("speed") or "" - # speed may be "10 Mbps", "100 Mbps", 10, 100, etc. - speed_num = None - try: - speed_num = int(str(speed_raw).split()[0]) - except (ValueError, IndexError): - pass is_uplink = bool(port.get("isUplink")) if any( [ bool(port_errors), - is_uplink and speed_num in [10, 100], + is_uplink and _is_low_speed_link(speed_raw), ] ): + switch_serial = port.get("switchSerial", "Unknown") + switch_device = device_by_serial.get(switch_serial) or {} switch_port_issues.append( { - "switch": port.get("switchSerial", "Unknown"), + "switch": switch_serial, + "switch_name": switch_device.get("name") or switch_device.get("model") or switch_serial, "port": port.get("portId", "Unknown"), "errors": port_errors, # list of strings "error_count": len(port_errors), @@ -449,6 +702,24 @@ def _parse_dt(value: str) -> datetime | None: ("Config Issues", str(len(config_issues))), ] + switch_devices = [ + d for d in devices_avail + if isinstance(d, dict) and d.get("productType") == "switch" + ] + switch_budget_known = sum( + 1 for d in switch_devices if _known_poe_budget(str(d.get("model") or "")) is not None + ) + switch_budget_total = len(switch_devices) + poe_budget_note = ( + f"The 
local hardware catalog contains PoE budget references for " + f"{switch_budget_known} of {switch_budget_total} switch device(s) in this backup. " + "Where a model is covered, the report shows measured draw against known hardware " + "budget and calculates headroom. Models not yet in the catalog are left as unknown " + "instead of estimated." + if switch_budget_total + else "No switch inventory was available for PoE budget coverage analysis." + ) + security_checks = ( security_baseline.get("checks") if isinstance(security_baseline, dict) and security_baseline.get("checks") @@ -516,31 +787,62 @@ def _hcard(domain: str, rating: str, stat: str, detail: str) -> str: ) # WAN + def _iter_wan_uplinks(raw_uplinks: Any) -> List[Dict[str, Any]]: + rows: List[Dict[str, Any]] = [] + if not isinstance(raw_uplinks, list): + return rows + for item in raw_uplinks: + if not isinstance(item, dict): + continue + if isinstance(item.get("uplinks"), list): + for uplink in item["uplinks"]: + if isinstance(uplink, dict): + merged = dict(uplink) + merged.setdefault("serial", item.get("serial")) + merged.setdefault("model", item.get("model")) + merged.setdefault("networkId", item.get("networkId")) + rows.append(merged) + else: + rows.append(item) + return rows + + _wan_uplinks = _iter_wan_uplinks(uplink_statuses) _wan_active = sum( - 1 for u in (uplink_statuses if isinstance(uplink_statuses, list) else []) + 1 for u in _wan_uplinks if isinstance(u, dict) and str(u.get("status", "")).lower() == "active" ) + _wan_ready = sum( + 1 for u in _wan_uplinks + if isinstance(u, dict) and str(u.get("status", "")).lower() == "ready" + ) _wan_total = sum( - 1 for u in (uplink_statuses if isinstance(uplink_statuses, list) else []) + 1 for u in _wan_uplinks if isinstance(u, dict) and u.get("interface") ) - _wan_down = _wan_total - _wan_active + _wan_down = _wan_total - _wan_active - _wan_ready if _wan_total == 0: _wan_rating, _wan_stat, _wan_detail = "info", "No WAN data", "uplink status unavailable" elif 
_wan_down > 0: _wan_rating = "crit" if _wan_active == 0 else "warn" _wan_stat = f"{_wan_down} link{'s' if _wan_down != 1 else ''} down" - _wan_detail = f"{_wan_active} active of {_wan_total} uplinks" + _wan_detail = f"{_wan_active} active · {_wan_ready} ready of {_wan_total} uplinks" else: _wan_rating = "good" _wan_stat = f"{_wan_active} active" - _wan_detail = f"{_wan_total} uplink{'s' if _wan_total != 1 else ''} healthy" + _wan_detail = ( + f"{_wan_ready} standby-ready · {_wan_total} total" + if _wan_ready + else f"{_wan_total} uplink{'s' if _wan_total != 1 else ''} healthy" + ) _wan_card = _hcard("WAN / Internet", _wan_rating, _wan_stat, _wan_detail) # Security - _sec_fail = sum(1 for c in (security_checks or []) if isinstance(c, dict) and c.get("status") == "fail") - _sec_warn = sum(1 for c in (security_checks or []) if isinstance(c, dict) and c.get("status") == "warning") - _sec_pass = sum(1 for c in (security_checks or []) if isinstance(c, dict) and c.get("status") == "pass") + def _check_status(check: Dict[str, Any]) -> str: + return str(check.get("status") or "").strip().lower() + + _sec_fail = sum(1 for c in (security_checks or []) if isinstance(c, dict) and _check_status(c) == "fail") + _sec_warn = sum(1 for c in (security_checks or []) if isinstance(c, dict) and _check_status(c) == "warning") + _sec_pass = sum(1 for c in (security_checks or []) if isinstance(c, dict) and _check_status(c) == "pass") if _sec_fail > 0: _sec_rating = "crit" elif _sec_warn > 0: @@ -553,18 +855,36 @@ def _hcard(domain: str, rating: str, stat: str, detail: str) -> str: f"{_sec_pass} checks passed", ) - # Lifecycle (EOL heuristic — flag known legacy model prefixes) + # Lifecycle: prefer Meraki inventory EOX metadata; fall back to known legacy prefixes. 
_EOL_PREFIXES = ( "MR18", "MR24", "MR26", "MR32", "MR34", "MS220", "MS320", "MS420", "MX64", "MX65", "MX80", "MX84", "MX90", "MX400", "MX600", ) - _eol_models = [ + _eox_model_statuses: Dict[str, str] = {} + for _eox_dev in eox_devices: + if not isinstance(_eox_dev, dict): + continue + _model = str(_eox_dev.get("model") or "").strip() + _status = str(_eox_dev.get("status") or "").strip() + if _model and _status: + _eox_model_statuses.setdefault(_model, _status) + _eox_models = sorted(_eox_model_statuses) + _heuristic_eol_models = [ m for m, _ in top_models if any(str(m).upper().startswith(p) for p in _EOL_PREFIXES) ] + _eol_models = _eox_models or _heuristic_eol_models + _eox_crit_count = sum(1 for d in eox_devices if str((d or {}).get("status") or "") == "endOfSupport") + _eox_warn_count = len(eox_devices) - _eox_crit_count _model_count = len(top_models) - if _eol_models: + if eox_devices: + _lc_rating = "crit" if _eox_crit_count else "warn" + _lc_stat = f"{len(eox_devices)} lifecycle flag{'s' if len(eox_devices) != 1 else ''}" + _lc_detail = ", ".join( + f"{model} ({status})" for model, status in list(_eox_model_statuses.items())[:3] + ) or "EOX inventory flags present" + elif _eol_models: _lc_rating = "crit" _lc_stat = f"{len(_eol_models)} EOL model{'s' if len(_eol_models) != 1 else ''}" _lc_detail = ", ".join(_eol_models[:4]) + (" …" if len(_eol_models) > 4 else "") @@ -620,7 +940,6 @@ def _hcard(domain: str, rating: str, stat: str, detail: str) -> str: # ========================================================= # COVER PAGE # ========================================================= - _now = datetime.now() _report_date = _now.strftime("%B %d, %Y") _report_ts = _now.strftime("%B %d, %Y at %I:%M %p").replace(" 0", " ") cover_html = f""" @@ -647,8 +966,22 @@ def _hcard(domain: str, rating: str, stat: str, detail: str) -> str: # ========================================================= # TABLE OF CONTENTS PAGE # 
========================================================= + def _toc_item(num: int, title: str, anchor: str, subitems: str = "") -> str: + return f""" +
  • + + {num} + {_he(title)} + + {subitems} +
  • + """ + + def _toc_sublist(items: str) -> str: + return f'
      {items}
    ' if items else "" + toc_site_items = "".join( - f'
  • {net_data["name"]}
  • ' + f'
  • {_he(net_data["name"])}
  • ' for net_data in sorted(devices_by_network.values(), key=lambda x: x["name"]) ) switch_deep_dive_html, toc_switch_items = _build_switch_detail_section( @@ -658,6 +991,61 @@ def _hcard(domain: str, rating: str, stat: str, detail: str) -> str: switch_port_configs_by_switch, poe_by_serial, port_issues_by_switch, + hardware_catalog, + ) + switch_deep_dive_is_appendix = len(toc_switch_items) > 12 + + def _build_switch_summary_for_main_report() -> str: + rows = [] + switch_devices = [ + d for d in devices_avail + if isinstance(d, dict) and d.get("productType") == "switch" and d.get("serial") + ] + for sw in sorted(switch_devices, key=lambda d: (str((d.get("network") or {}).get("name") or ""), str(d.get("name") or d.get("serial")))): + serial = sw.get("serial") + ports = switch_port_statuses_by_switch.get(serial) if isinstance(switch_port_statuses_by_switch, dict) else [] + configs = switch_port_configs_by_switch.get(serial) if isinstance(switch_port_configs_by_switch, dict) else [] + connected = sum(1 for p in ports if isinstance(p, dict) and str(p.get("status") or "").lower() == "connected") if isinstance(ports, list) else 0 + poe = poe_by_serial.get(serial, {}) if isinstance(poe_by_serial, dict) else {} + avg_w = poe.get("avgWatts") + issues = len(port_issues_by_switch.get(serial, [])) if isinstance(port_issues_by_switch, dict) else 0 + rows.append( + "" + f"{_he((sw.get('network') or {}).get('name') or network_names.get(sw.get('networkId'), 'Unassigned'))}" + f"{_he(sw.get('name') or serial)}
    {_he(serial)}" + f"{_model_cell(sw.get('model'))}" + f"{len(ports) if isinstance(ports, list) else '—'}" + f"{connected}" + f"{len(configs) if isinstance(configs, list) else '—'}" + f"{_he(f'{avg_w:.1f} W' if isinstance(avg_w, (int, float)) else '—')}" + f"{issues}" + "" + ) + return f""" +
    +

    16. Switch Deep Dive Summary

    +
    +
    Technical Appendix Moved To Backup Settings
    +
    + This organization has {len(switch_devices)} switch(es), so the full per-port + appendix is intentionally kept in the companion Backup Settings Report. + The main report keeps the operational read concise while preserving complete port-level + evidence, VLAN mode, PoE, LLDP/CDP, and neighbor detail in the backup packet. +
    +
    + + + + + {''.join(rows) if rows else ''} +
    SiteSwitchModelPortsConnectedConfigsPoE AvgIssues
    No switch inventory was present.
    +
    + """ + + switch_main_report_html = ( + _build_switch_summary_for_main_report() + if switch_deep_dive_is_appendix + else switch_deep_dive_html ) ap_interference_html = _build_ap_interference_section( devices_by_network, @@ -673,84 +1061,59 @@ def _hcard(domain: str, rating: str, stat: str, detail: str) -> str: devices_avail, networks_by_id, ) + addressing_dhcp_html = _build_addressing_dhcp_section( + networks, + appliance_vlans, + appliance_dhcp_subnets, + client_records, + devices_avail, + ) toc_switch_subitems = "".join( f'
  • {_he(label)}
  • ' for anchor, label in toc_switch_items ) + toc_entries = [ + (1, "Executive Summary", "executive-summary", ""), + ("Guide", "How to Use This Report", "report-guide", ""), + (2, "Network Overview", "network-overview", ""), + (3, "Network Topology", "network-topology", _toc_sublist(toc_site_items)), + (4, "Traffic Flows & Bottleneck Analysis", "traffic-flows", ""), + (5, "Device Health & Issues", "device-health", ""), + (6, "PoE Power Analysis", "poe-analysis", ""), + (7, "Security Baseline", "security-baseline", ""), + (8, "Recommendations & Implementation Plan", "recommendations", ""), + (9, "CIS 8 Controls Assessment", "cis8", ""), + (10, "Licensing Summary", "licensing", ""), + (11, "Configuration Backup Coverage", "config-coverage", ""), + (12, "Hardware Cost & Refresh Plan", "budget-forecast", ""), + (13, "Internet Capacity & Utilization", "wan-capacity", ""), + (14, "AP Interference Audit", "ap-interference", ""), + (15, "Client Analysis", "client-analysis", ""), + ( + 16, + "Switch Deep Dive Summary" if switch_deep_dive_is_appendix else "Switch Deep Dive", + "switch-deep-dive", + "" if switch_deep_dive_is_appendix else _toc_sublist(toc_switch_subitems), + ), + (17, "UniFi Comparison & Refresh Planning", "unifi-comparison", ""), + (18, "K-12 VLAN Segmentation Reference", "vlan-reference", ""), + ] + backup_toc_entries = [ + (1, "Backup Packet Guide", "backup-packet-guide", ""), + (2, "Configuration Backup Coverage", "config-coverage", ""), + (3, "Network Overview & Addressing", "network-overview", ""), + (4, "Security Baseline & MX Policy", "security-baseline", ""), + (5, "Licensing Summary", "licensing", ""), + (6, "Client Attachment Snapshot", "client-analysis", ""), + (7, "Switch Port Appendix", "switch-deep-dive", _toc_sublist(toc_switch_subitems)), + ] + toc_items_html = "".join(_toc_item(*entry) for entry in toc_entries) + backup_toc_items_html = "".join(_toc_item(*entry) for entry in backup_toc_entries) toc_html = f"""
    Table of Contents
      -
    - 1 - Executive Summary -
    -
    - 2 - Network Overview -
    -
    - 3 - Network Topology -
        - {toc_site_items} -
      -
    -
    - 4 - Traffic Flows & Bottleneck Analysis -
    -
    - 5 - Device Health & Issues -
    -
    - 6 - PoE Power Analysis -
    -
    - 7 - Security Baseline -
    -
    - 8 - Recommendations & Implementation Plan -
    -
    - 9 - CIS 8 Controls Assessment -
    -
    - 10 - Licensing Summary -
    -
    - 11 - Configuration Backup Coverage -
    -
    - 12 - Hardware Cost & Refresh Plan -
    -
    - 13 - Internet Capacity & Utilization -
    -
    - 14 - AP Interference Audit -
    -
    - 15 - Client Analysis -
    -
    - 16 - Switch Deep Dive -
        - {toc_switch_subitems} -
      -
    + {toc_items_html}
    """ @@ -759,76 +1122,76 @@ def _hcard(domain: str, rating: str, stat: str, detail: str) -> str:
    Table of Contents
      -
    - 1 - Network Overview -
    -
    - 2 - Network Topology -
        - {toc_site_items} -
      -
    -
    - 3 - Traffic Flows & Bottleneck Analysis -
    -
    - 4 - Device Health & Issues -
    -
    - 5 - PoE Power Analysis -
    -
    - 6 - Security Baseline -
    -
    - 7 - Recommendations & Implementation Plan -
    -
    - 8 - CIS 8 Controls Assessment -
    -
    - 9 - Licensing Summary -
    -
    - 10 - Configuration Backup Coverage -
    -
    - 11 - Hardware Cost & Refresh Plan -
    -
    - 12 - Internet Capacity & Utilization -
    -
    - 13 - AP Interference Audit -
    -
    - 14 - Client Analysis -
    -
    - 15 - Switch Deep Dive -
        - {toc_switch_subitems} -
      -
    + {backup_toc_items_html}
    """ + complete_report_name = _dated_report_name(org_name, "Complete", _now, "pdf") + executive_report_name = _dated_report_name(org_name, "Executive_Summary", _now, "pdf") + backup_report_name = _dated_report_name(org_name, "Backup_Settings", _now, "pdf") + + report_guide_html = f""" +
    +

    How to Use This Report

    +

    This report package is intentionally split by audience. The complete report provides the assessment narrative and evidence path, while companion reports keep leadership review and raw configuration backup material separate.

    +
    +
    +
    Fast Read
    +
    Executive Summary
    + +
    +
    +
    Decision Path
    +
    Sections 1, 7, 8, 12
    +
    Health, security posture, priorities, and refresh planning.
    +
    +
    +
    Backup Evidence
    +
    Backup Settings
    + +
    +
    +
    Full Context
    +
    Complete Report
    + +
    +
    + + + + + + + + +
    ReaderStart HereWhy
    Leadership / FinanceExecutive Summary, Recommendations, Hardware Cost & Refresh PlanShows the largest risks, renewal/refresh pressure, and recommended timing without port-level detail.
    IT OperationsInventory, topology, client analysis, and switch summaryConnects device inventory, site layout, clients, and operational symptoms.
    Security / ComplianceSecurity Baseline, MX Firewall/Filtering Policy Backup, CIS 8 Controls, Configuration CoverageShows control posture and the exact backup evidence available for audit review.
    Implementation TeamBackup Settings ReportContains the detailed port/configuration appendix that supports remediation work.
    +
    + """ + + backup_intro_html = f""" +
    +

    1. Backup Packet Guide

    +
    +
    Purpose
    +
    + This companion report is the configuration and evidence packet. It keeps raw settings, + MX policy exports, addressing/DHCP, client attachment snapshots, and switch port detail + together so the main assessment can stay focused on conclusions and recommended action. +
    +
    + + + + + + + + +
    Evidence AreaWhere It AppearsUse
    API artifact coverageConfiguration Backup CoverageConfirms which JSON backup files are present or not applicable.
    VLAN, subnet, DHCPNetwork Overview & AddressingDocuments MX interface subnets, relay/server mode, and DHCP utilization.
    Firewall and filteringSecurity Baseline & MX PolicyPrintable L3/L7, NAT, content filtering, VPN, group policy, and syslog snapshot.
    Switch portsSwitch Port AppendixFull per-port state, VLAN mode, PoE draw, LLDP/CDP neighbor, and issue flags.
    +
    + """ + # ========================================================= # SECTION 1: EXECUTIVE SUMMARY (fills its own page) # ========================================================= @@ -981,6 +1344,201 @@ def _hcard(domain: str, rating: str, stat: str, detail: str) -> str: ) ) + def _count_records(value: Any) -> int: + if isinstance(value, list): + return len(value) + if isinstance(value, dict): + total = 0 + for item in value.values(): + if isinstance(item, list): + total += len(item) + elif isinstance(item, dict): + total += _count_records(item) + elif item: + total += 1 + return total + return 0 + + def _exec_site_rows() -> str: + rows = [] + for net_data in sorted( + devices_by_network.values(), + key=lambda item: ( + -sum(1 for d in item.get("devices", []) if isinstance(d, dict) and d.get("status") != "online"), + item.get("name", ""), + ), + ): + devices = [d for d in net_data.get("devices", []) if isinstance(d, dict)] + site_total = len(devices) + site_online = sum(1 for d in devices if d.get("status") == "online") + site_alerting = sum(1 for d in devices if d.get("status") == "alerting") + site_offline = sum(1 for d in devices if d.get("status") in ("offline", "dormant")) + site_switches = sum(1 for d in devices if d.get("productType") == "switch") + site_aps = sum(1 for d in devices if d.get("productType") == "wireless") + site_mx = sum(1 for d in devices if d.get("productType") == "appliance") + site_pct = round(100 * site_online / max(site_total, 1)) if site_total else 0 + rows.append( + "" + f"{_he(net_data.get('name', 'Unassigned'))}" + f"{site_total}" + f"{site_online} / {site_total} ({site_pct}%)" + f"{site_offline}" + f"{site_alerting}" + f"{site_mx} MX · {site_switches} MS · {site_aps} MR" + "" + ) + return "".join(rows) or 'No site-level device data available.' 
+ + _eox_counts: Dict[str, int] = {} + if isinstance(inventory_devices, list): + for _device in inventory_devices: + if not isinstance(_device, dict): + continue + _status = str((_device.get("eox") or {}).get("status") or "active") + _eox_counts[_status] = _eox_counts.get(_status, 0) + 1 + _eox_risk_total = sum(count for status, count in _eox_counts.items() if status and status != "active") + _eox_summary = ", ".join( + f"{_he(status)}: {count}" for status, count in sorted(_eox_counts.items()) if status != "active" + ) or "No EOL/EOS inventory flags" + + _exec_vlan_count = _count_records(appliance_vlans) + _exec_dhcp_count = _count_records(appliance_dhcp_subnets) + _exec_policy_count = _count_records(appliance_policy_backup) + _exec_switch_status_count = _count_records(switch_port_statuses_by_switch) + _exec_switch_config_count = _count_records(switch_port_configs_by_switch) + _exec_client_count = len(client_records) + + def _confidence_badge(label: str, ok: bool, detail: str) -> str: + cls = "badge-ok" if ok else "badge-warn" + return ( + "" + f"{_he(label)}" + f'{"High" if ok else "Partial"}' + f"{_he(detail)}" + "" + ) + + _data_confidence_html = "".join([ + _confidence_badge( + "Inventory and device status", + bool(total_devices and devices_avail), + f"{total_devices} device records with Dashboard availability status." + if devices_avail + else "Inventory is present, but Dashboard availability status was not captured.", + ), + _confidence_badge( + "Client attachment detail", + bool(network_clients), + f"{_exec_client_count} wired/wireless client attachment records from network_clients.json." + if network_clients + else ( + f"{_exec_client_count} legacy wireless client records; wired client visibility may be incomplete." + if wireless_clients + else "No client detail records were captured." 
+ ), + ), + _confidence_badge( + "VLAN and DHCP evidence", + bool(_exec_vlan_count or _exec_dhcp_count), + f"{_exec_vlan_count} VLAN records and {_exec_dhcp_count} DHCP scope/utilization records." + if (_exec_vlan_count or _exec_dhcp_count) + else "No VLAN or DHCP scope telemetry was captured.", + ), + _confidence_badge( + "Firewall and filtering backup", + bool(appliance_policy_backup), + f"{_exec_policy_count} MX policy backup artifact group(s) captured." + if appliance_policy_backup + else "No MX firewall/content-filtering policy backup was captured.", + ), + _confidence_badge( + "WAN uplink evidence", + bool(uplink_statuses or appliance_uplinks_usage), + "WAN status and/or uplink usage artifacts are present." + if (uplink_statuses or appliance_uplinks_usage) + else "WAN uplink status and usage telemetry were not captured.", + ), + ]) + + _exec_price_models = pricing_payload.get("models") if isinstance(pricing_payload, dict) else {} + _exec_price_products = pricing_payload.get("products") if isinstance(pricing_payload, dict) else {} + _exec_unifi_map = pricing_payload.get("unifi_equivalents") if isinstance(pricing_payload, dict) else {} + + def _exec_match_prefix(model: str, mapping: Dict[str, Any]) -> str | None: + text = str(model or "").upper() + return next((key for key in sorted(mapping, key=len, reverse=True) if text.startswith(str(key).upper())), None) + + def _exec_product_key(entry: Any) -> str | None: + if isinstance(entry, dict): + value = entry.get("product_key") or entry.get("sku") + return str(value) if value else None + return None + + def _exec_product(product_key: str | None) -> Dict[str, Any]: + if not product_key or not isinstance(_exec_price_products, dict): + return {} + product = _exec_price_products.get(product_key) + return product if isinstance(product, dict) else {} + + def _exec_unit_price(model: str, product: Dict[str, Any]) -> int | float | None: + value = product.get("unit_cost") if isinstance(product, dict) else None + if 
isinstance(value, (int, float)):
+            return value
+        if not isinstance(_exec_price_models, dict):
+            return None
+        prefix = _exec_match_prefix(model, _exec_price_models)
+        data = _exec_price_models.get(model) or _exec_price_models.get(prefix or "")
+        if not isinstance(data, dict):
+            return None
+        value = data.get("unifi_unit_cost")
+        return value if isinstance(value, (int, float)) else None
+
+    def _exec_care_price(product: Dict[str, Any]) -> int | None:
+        value = product.get("ui_care_5yr_unit_cost") if isinstance(product, dict) else None
+        return int(value) if isinstance(value, (int, float)) else None
+
+    def _exec_money(value: int | float | None) -> str:
+        if not isinstance(value, (int, float)):
+            return "Pricing needed"
+        return f"${value:,.0f}" if float(value).is_integer() else f"${value:,.2f}"
+
+    _exec_migration_qty = 0
+    _exec_migration_excluded = 0
+    _exec_migration_total = 0
+    _exec_migration_care = 0
+    _exec_migration_families: Dict[str, int] = {}
+    _exec_source_devices = devices_avail if isinstance(devices_avail, list) and devices_avail else inventory_devices
+    for _device in _exec_source_devices if isinstance(_exec_source_devices, list) else []:
+        if not isinstance(_device, dict):
+            continue
+        _model = str(_device.get("model") or _device.get("sku") or "").strip()
+        if not _model:
+            continue
+        _status = str(_device.get("status") or "unknown").lower()
+        if _status not in {"online", "alerting"}:
+            _exec_migration_excluded += 1
+            continue
+        _map_key = _exec_match_prefix(_model, _exec_unifi_map) if isinstance(_exec_unifi_map, dict) else None
+        if not _map_key:
+            continue
+        _entry = _exec_unifi_map[_map_key]
+        _product = _exec_product(_exec_product_key(_entry))
+        _unit = _exec_unit_price(_model, _product)
+        _care = _exec_care_price(_product)
+        _exec_migration_qty += 1
+        _exec_migration_families[_model] = _exec_migration_families.get(_model, 0) + 1
+        if isinstance(_unit, (int, float)):
+            _exec_migration_total += _unit
+        if isinstance(_care, int):
+            _exec_migration_care += _care
+
+    _exec_migration_note = (
+        f"{_exec_migration_qty} active/alerting mapped device(s) priced from the UniFi reference; "
+        f"{_exec_migration_excluded} dormant/offline/unknown device(s) excluded from the planning quote."
+        if _exec_migration_qty
+        else "No active/alerting devices matched the UniFi migration reference."
+    )
+
     exec_html = f"""

    1. Executive Summary

    @@ -1058,6 +1616,59 @@ def _hcard(domain: str, rating: str, stat: str, detail: str) -> str: +

    Site Health Snapshot

    + + + + + {_exec_site_rows()} +
    Site / NetworkDevicesOnlineDormant / OfflineAlertingDevice Mix
    + +

    Lifecycle, Licensing & Planning Snapshot

+            <table>
+                <thead>
+                    <tr><th>Area</th><th>Executive Read</th><th>Planning Implication</th></tr>
+                </thead>
+                <tbody>
+                    <tr>
+                        <td>Lifecycle</td>
+                        <td>{_eox_risk_total} device(s) with EOL/EOS lifecycle flags. {_eox_summary}</td>
+                        <td>Use lifecycle status to prioritize refresh waves before expanding scope to healthy devices.</td>
+                    </tr>
+                    <tr>
+                        <td>Licensing</td>
+                        <td>{_lic_expired} expired license key(s); {_lic_active} active license record(s); {_he(_lic_mode or "unknown")} model.</td>
+                        <td>Resolve licensing exposure before relying on Dashboard visibility or security enforcement.</td>
+                    </tr>
+                    <tr>
+                        <td>Migration Budget</td>
+                        <td>{_exec_money(_exec_migration_total)} hardware planning total; {_exec_money(_exec_migration_care)} optional 5-year UI Care add-on.</td>
+                        <td>{_he(_exec_migration_note)}</td>
+                    </tr>
+                </tbody>
+            </table>

    Backup Evidence Captured

+            <table>
+                <thead>
+                    <tr><th>Evidence Area</th><th>Records Captured</th><th>Where To Read It</th></tr>
+                </thead>
+                <tbody>
+                    <tr><td>Switch port status / configs</td><td>{_exec_switch_status_count} status · {_exec_switch_config_count} config</td><td>Backup Settings Report, Switch Port Appendix</td></tr>
+                    <tr><td>VLANs and DHCP scopes</td><td>{_exec_vlan_count} VLAN · {_exec_dhcp_count} DHCP</td><td>Complete Report Section 2 and Backup Settings Report</td></tr>
+                    <tr><td>Firewall, filtering, group policy, VPN, syslog</td><td>{_exec_policy_count}</td><td>Complete Report Section 7 and Backup Settings Report</td></tr>
+                    <tr><td>Client attachment detail</td><td>{_exec_client_count}</td><td>Complete Report Section 15 and Backup Settings Report</td></tr>
+                </tbody>
+            </table>

    Data Confidence Snapshot

+            <table>
+                <thead>
+                    <tr><th>Data Area</th><th>Confidence</th><th>Interpretation</th></tr>
+                </thead>
+                <tbody>
+                    {_data_confidence_html}
+                </tbody>
+            </table>

    Health at a Glance

+            {health_grid_html}
+            {render_kpi_row(kpi_items)}
@@ -1203,6 +1814,7 @@ def _model_rollup(devs: list, max_items: int = 4) -> str:
         {"".join(lifecycle_rows) if lifecycle_rows else 'No EOX lifecycle data available in this backup.'}
+        {addressing_dhcp_html}

    Model Inventory & Capabilities

    @@ -1215,13 +1827,9 @@ def _model_rollup(devs: list, max_items: int = 4) -> str:
-                    PoE Budget Note
+                    PoE Budget Reference Coverage
-                    Current backups include measured PoE consumption and per-port allocation signals, but
-                    they do not yet include authoritative switch maximum PoE budget values. The report can
-                    therefore show actual draw and PoE-heavy switches today, but budget headroom remains an
-                    API collection gap that should be added to the backup pipeline before final capacity
-                    planning or switch replacement decisions are made.
+                    {_he(poe_budget_note)}
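The replacement note above is backed later in this patch by a known-budget headroom calculation: known budget minus observed average draw, floored at zero, with an "Unknown" result when no budget is on file. A minimal sketch, with a hypothetical budget table standing in for the report's `_known_poe_budget` lookup:

```python
from typing import Optional

# Hypothetical reference budgets in watts; the real report reads these
# from its hardware catalog, not from a hard-coded dict.
POE_BUDGETS = {"MS120-8LP": 67, "MS120-24P": 370}


def poe_headroom(model: str, avg_watts: float) -> Optional[float]:
    """Remaining PoE budget in watts, or None when the budget is unknown."""
    budget = POE_BUDGETS.get(model)
    if budget is None:
        return None
    # Floor at zero: observed draw above the rated budget means no headroom.
    return max(0.0, float(budget) - float(avg_watts or 0))


assert poe_headroom("MS120-24P", 120.0) == 250.0
assert poe_headroom("MS999", 10.0) is None
```

Floored headroom keeps the report table readable when a switch's 24-hour average briefly exceeds its rated budget.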
@@ -1347,6 +1955,8 @@ def _model_rollup(devs: list, max_items: int = 4) -> str:
     # SECTION 4: TRAFFIC FLOWS & BOTTLENECK ANALYSIS
     # =========================================================
     def _speed_num(s) -> int | None:
+        if not _is_low_speed_link(s):
+            return None
         try:
             return int(str(s).split()[0])
         except (ValueError, IndexError):
@@ -1612,7 +2222,7 @@ def _sw_sort_key(sw):
         sec += (
             f"<tr>"
             f"<td>{_he(_ap_name)}</td>"
-            f"<td>{_he(_ap_model)}</td>"
+            f"<td>{_model_cell(_ap_model)}</td>"
             f'<td>{_ap_status}</td>'
             f'<td>{_tot_util:.0f}%</td>'
             f"<td>{_tx_util:.0f}%</td>"
@@ -1655,7 +2265,7 @@ def _sw_sort_key(sw):
         sec += (
             f"<tr>"
             f"<td>{_he(_nm)}</td>"
-            f"<td>{_he(_mod)}</td>"
+            f"<td>{_model_cell(_mod)}</td>"
             f'<td>{_st}</td>'
             f'<td>{_tu:.0f}%</td>'
             f"<td>{_tx:.0f}%</td><td>{_n80:.0f}%</td>"
@@ -1728,7 +2338,7 @@ def _sw_sort_key(sw):
-        <tr><th>Switch Serial</th><th>Port</th><th>Errors</th></tr>
+        <tr><th>Switch</th><th>Serial</th><th>Port</th><th>Errors</th><th>Speed</th><th>Duplex</th><th>PoE Mode</th><th>Status</th></tr>
@@ -1738,13 +2348,14 @@ def _sw_sort_key(sw):
             err_display = ", ".join(issue["errors"]) if issue["errors"] else "—"
             issues_html += (
                 f"<tr>"
-                f"<td>{issue['switch']}</td>"
+                f"<td>{_he(issue.get('switch_name') or issue['switch'])}</td>"
+                f"<td>{_he(issue['switch'])}</td>"
                 f"<td>{issue['port']}</td>"
-                f"<td>{err_display}</td>"
-                f"<td>{issue['speed']}</td>"
-                f"<td>{issue['duplex']}</td>"
-                f"<td>{issue['poeMode']}</td>"
-                f"<td>{issue['status']}</td>"
+                f"<td>{_he(err_display)}</td>"
+                f"<td>{_he(str(issue['speed']))}</td>"
+                f"<td>{_he(str(issue['duplex']))}</td>"
+                f"<td>{_he(str(issue['poeMode']))}</td>"
+                f"<td>{_he(str(issue['status']))}</td>"
                 f"</tr>"
             )
         issues_html += "
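The row rewrite above routes every interpolated cell through `_he()` before it reaches the report HTML, so device names containing markup characters cannot break or inject into the table. A self-contained sketch of that pattern using the stdlib escaper (helper and row names here are illustrative):

```python
import html


def he(value) -> str:
    """Escape a value for interpolation into report HTML."""
    return html.escape(str(value), quote=True)


def issue_row(issue: dict) -> str:
    """Build one switch-issue table row with every cell escaped."""
    cells = (
        issue.get("switch_name") or issue["switch"],
        issue["switch"],
        issue["port"],
        ", ".join(issue["errors"]) if issue["errors"] else "—",
    )
    return "<tr>" + "".join(f"<td>{he(c)}</td>" for c in cells) + "</tr>"


row = issue_row({"switch": "Q2XX-1", "switch_name": "Core <1>", "port": "3", "errors": []})
assert "&lt;1&gt;" in row
assert row.count("<td>") == 4
```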
    " @@ -1847,10 +2458,22 @@ def _sw_sort_key(sw): """ - for ssid in ssids[:20]: + hidden_default_count = 0 + rendered_count = 0 + for ssid in ssids: if not isinstance(ssid, dict): continue ssid_label = ssid.get("name") or f"SSID {ssid.get('number', '')}" + is_default_disabled = ( + not ssid.get("enabled") + and str(ssid_label).lower().startswith("unconfigured ssid") + ) + if is_default_disabled: + hidden_default_count += 1 + continue + if rendered_count >= 20: + continue + rendered_count += 1 issues_html += ( "" f"{_he(ssid_label)}" @@ -1863,13 +2486,22 @@ def _sw_sort_key(sw): f"{'Yes' if ssid.get('useVlanTagging') else 'No'}" "" ) + if hidden_default_count: + issues_html += ( + "" + f"{hidden_default_count} disabled default/unconfigured SSID slot(s) hidden." + "" + ) issues_html += "" if isinstance(wireless_mesh_statuses, dict) and wireless_mesh_statuses: mesh_notes = [] for net_id, payload in wireless_mesh_statuses.items(): if isinstance(payload, dict) and payload.get("error"): - mesh_notes.append(f"{network_names.get(net_id, net_id)}: {payload.get('error')}") + error_text = str(payload.get("error") or "") + if "No MR repeaters found" in error_text: + continue + mesh_notes.append(f"{network_names.get(net_id, net_id)}: {error_text}") if mesh_notes: issues_html += ( '
    ' @@ -1881,11 +2513,65 @@ def _sw_sort_key(sw): # Firmware upgrade history summary if isinstance(firmware_upgrades, list) and firmware_upgrades: + fw_status_by_key: Dict[tuple[str, str], List[str]] = {} fw_rows = [] fw_items = [] + + def _version_name(value: Any) -> str: + if isinstance(value, dict): + return str(value.get("shortName") or value.get("firmware") or "—") + if isinstance(value, str): + return value + return "—" + + def _infer_product(*versions: Any) -> str: + text = " ".join(_version_name(version) for version in versions).upper() + if "MX " in text: + return "appliance" + if "MS " in text or "CS " in text or "IOS XE" in text: + return "switch" + if "MR " in text: + return "wireless" + if "MV " in text: + return "camera" + if "MG " in text: + return "cellularGateway" + return "—" + for item in firmware_upgrades: if not isinstance(item, dict): continue + products = item.get("products") or {} + product_names = [name for name in ("appliance", "switch", "wireless") if products.get(name)] + if not product_names: + product_types = item.get("productTypes") or [] + if isinstance(product_types, list): + product_names = [str(product) for product in product_types] + current_version = item.get("currentVersion") or {} + current_name = _version_name(current_version) + target_version = (item.get("nextUpgrade") or {}).get("toVersion") or item.get("toVersion") or {} + available_versions = item.get("availableVersions") or [] + stable_versions = [ + version for version in available_versions + if isinstance(version, dict) and str(version.get("releaseType", "")).lower() == "stable" + ] + if not target_version and stable_versions: + target_version = stable_versions[0] + target_name = _version_name(target_version) + if not product_names: + inferred = _infer_product(current_version, target_version, item.get("fromVersion"), item.get("toVersion")) + product_names = [] if inferred == "—" else [inferred] + net_name = (item.get("network") or {}).get("name") or 
(item.get("network") or {}).get("id", "—") + if current_name != "—" or item.get("isUpgradeAvailable") or item.get("nextUpgrade"): + product_label = ", ".join(product_names) or _infer_product(current_version, target_version) + fw_status_by_key[(net_name, product_label)] = [ + net_name, + product_label, + current_name, + target_name, + "Yes" if item.get("isUpgradeAvailable") else "No", + str(item.get("upgradeStrategy") or "—"), + ] dt = _parse_dt(item.get("time", "")) if not dt and item.get("completedAt"): try: @@ -1894,6 +2580,12 @@ def _sw_sort_key(sw): dt = None fw_items.append((dt, item)) fw_items.sort(key=lambda x: x[0] or datetime.min, reverse=True) + fw_status_rows = sorted(fw_status_by_key.values(), key=lambda row: (row[0], row[1])) + if fw_status_rows: + issues_html += render_section( + "Firmware Status & Available Versions", + [["Network", "Product", "Current", "Dashboard Target / Stable", "Upgrade Available", "Strategy"]] + fw_status_rows, + ) for dt, item in fw_items[:12]: net = (item.get("network") or {}).get("name") or (item.get("network") or {}).get("id", "—") to_ver = (item.get("toVersion") or {}).get("shortName") or (item.get("toVersion") or {}).get("firmware", "—") @@ -1912,17 +2604,39 @@ def _sw_sort_key(sw): ) if eox_devices: - issues_html += render_section( - "End-of-Life / End-of-Support Inventory", - [["Device", "Model", "Network", "Status", "End of Sale", "End of Support"]] - + [[ - d.get("name", "—"), - d.get("model", "—"), - d.get("network", "—"), - d.get("status", "—"), - d.get("endOfSale", "—"), - d.get("endOfSupport", "—"), - ] for d in eox_devices[:20]], + eox_rows = [] + for device in eox_devices[:20]: + support_dt = _parse_dt(device.get("endOfSupport") or "") + row_class = "row-eos-announced" + if support_dt: + now_for_compare = _now + if support_dt.tzinfo and not now_for_compare.tzinfo: + now_for_compare = now_for_compare.replace(tzinfo=support_dt.tzinfo) + if support_dt <= now_for_compare + timedelta(days=730): + row_class = 
"row-eos-critical" + eox_rows.append( + "%s%s%s%s%s%s" + % ( + row_class, + _he(device.get("name", "—")), + _he(device.get("model", "—")), + _he(device.get("network", "—")), + _he(device.get("status", "—")), + _he(str(device.get("endOfSale") or "—")), + _he(str(device.get("endOfSupport") or "—")), + ) + ) + issues_html += ( + "

<h3>End-of-Life / End-of-Support Inventory</h3>"
                '<table>'
                "<tr><th>Device</th><th>Model</th><th>Network</th><th>Status</th><th>End of Sale</th><th>End of Support</th></tr>"
                + "".join(eox_rows)
                + "</table>"
                '<div class="note">'
                'Red: end of support is within 2 years. '
                'Yellow: EOL/EOS has been announced but support is more than 2 years out or no support date was provided.'
                "</div>"
            )

     # Alerts summary
@@ -1943,7 +2657,10 @@ def _sw_sort_key(sw):
                 "network": network_names.get(net_id, net_id),
             })
         alert_items.sort(key=lambda x: x["dt"] or datetime.min, reverse=True)
-        recent = [a for a in alert_items if a["dt"] and a["dt"] >= datetime.now(tz=a["dt"].tzinfo) - timedelta(days=30)]
+        recent = [
+            a for a in alert_items
+            if a["dt"] and a["dt"] >= _now.replace(tzinfo=a["dt"].tzinfo) - timedelta(days=30)
+        ]
         counts = Counter([a["type"] for a in recent])
         if counts:
             issues_html += render_section(
@@ -1962,7 +2679,17 @@ def _sw_sort_key(sw):
                 ] for a in alert_items[:10]],
             )
-    if not switch_port_issues and not config_issues and not high_util_devices:
+    has_issue_content = any(
+        [
+            switch_port_issues,
+            config_issues,
+            high_util_devices,
+            eox_devices,
+            _lic_expired,
+            isinstance(alerts_history, dict) and any(alerts_history.values()),
+        ]
+    )
+    if not has_issue_content:
         issues_html += (
             '
    ' '
    No significant issues detected in the current data snapshot.
    ' @@ -1979,16 +2706,33 @@ def _sw_sort_key(sw):

    6. PoE Power Analysis

    """ if poe_switches: - poe_html += render_section( - "PoE Consumption by Switch (24 h average)", - [ + poe_switch_rows = [] + for s in poe_switches[:20]: + serial = s.get("serial", "") + device = device_by_serial.get(serial) or {} + model = str(device.get("model") or "") + budget = _known_poe_budget(model) + observed_watts = float(s.get("avgWatts", 0) or 0) + headroom = ( + f"{max(0.0, float(budget) - observed_watts):.1f} W" + if budget is not None + else "Unknown" + ) + switch_name = device.get("name") or model or serial + poe_switch_rows.append( [ - s.get("serial", ""), - f"{float(s.get('avgWatts', 0)):.1f} W", - f"{float(s.get('powerUsageInWh', 0)):.1f} Wh", + f"{switch_name} ({serial})" if switch_name != serial else serial, + model or "Unknown", + f"{observed_watts:.1f} W", + f"{budget:g} W" if budget is not None else "Unknown", + headroom, + f"{float(s.get('powerUsageInWh', 0) or 0):.1f} Wh", ] - for s in poe_switches[:20] - ], + ) + poe_html += render_section( + "PoE Consumption by Switch (24 h average)", + poe_switch_rows, + headers=["Switch", "Model", "Observed Avg", "Known Budget", "Headroom", "24 h Energy"], ) if poe_ports: poe_html += render_section( @@ -2088,13 +2832,15 @@ def _sw_sort_key(sw): "review this section after any major firmware or policy change." ) + appliance_policy_html = _build_appliance_policy_section(networks, appliance_policy_backup) + security_html = f"""

    7. Security & Compliance

    This section evaluates security posture from two angles: an appliance-level baseline - check (AMP, IDS/IPS, spoof protection, and internet exposure) and a CIS Controls - mapping in the following section. Together they form the security health layer of - this network audit.

    + check (AMP, IDS/IPS, spoof protection, and internet exposure), printable MX policy + backups, and a CIS Controls mapping in the following section. Together they form the + security health layer of this network audit.

    Security Posture Summary
    @@ -2102,11 +2848,6 @@ def _sw_sort_key(sw): {_sec_posture}

    Firewall & Internet Exposure: {_pf_note} -

    - Note: L3 inbound firewall rule detail requires a separate collection step - (GET /networks/{id}/appliance/firewall/inboundFirewallRules). - That data is not present in this backup set. Add it to the pipeline to surface - specific rule-level exposure in future reports.
    @@ -2123,6 +2864,7 @@ def _sw_sort_key(sw): {render_security_baseline(security_checks)} + {appliance_policy_html} """ @@ -2463,10 +3205,12 @@ def _sw_sort_key(sw): os_counts: Dict[str, int] = {} vlan_counts: Dict[str, int] = {} auth_counts: Dict[str, int] = {} + connection_counts: Dict[str, int] = {} + top_client_rows: list[list[str]] = [] rssi_buckets = {"Excellent (>-60)": 0, "Good (-60 to -70)": 0, "Fair (-70 to -80)": 0, "Poor (<-80)": 0} - for cl in wireless_clients: + for cl in client_records: ssid = cl.get("ssid") or "Unknown" ssid_counts[ssid] = ssid_counts.get(ssid, 0) + 1 @@ -2479,6 +3223,9 @@ def _sw_sort_key(sw): auth = cl.get("status") or cl.get("authType") or "Unknown" auth_counts[auth] = auth_counts.get(auth, 0) + 1 + connection = cl.get("recentDeviceConnection") or ("Wireless" if cl.get("ssid") else "Unknown") + connection_counts[connection] = connection_counts.get(connection, 0) + 1 + rssi = cl.get("rssi") if rssi is not None: try: @@ -2494,17 +3241,83 @@ def _sw_sort_key(sw): except (ValueError, TypeError): pass + def _usage_total_kb(client: Dict[str, Any]) -> float: + usage = client.get("usage") or {} + sent = usage.get("sent") if isinstance(usage, dict) else 0 + recv = usage.get("recv") if isinstance(usage, dict) else 0 + try: + return float(sent or 0) + float(recv or 0) + except (TypeError, ValueError): + return 0.0 + + for cl in sorted(client_records, key=_usage_total_kb, reverse=True)[:15]: + top_client_rows.append([ + cl.get("description") or cl.get("mac") or cl.get("id") or "Unknown", + cl.get("recentDeviceConnection") or ("Wireless" if cl.get("ssid") else "Unknown"), + cl.get("recentDeviceName") or cl.get("recentDeviceSerial") or "Unknown", + cl.get("ssid") or "—", + str(cl.get("vlan") or cl.get("namedVlan") or "—"), + _format_usage_kb(int(_usage_total_kb(cl))), + ]) + def _top_rows(d: Dict[str, int], limit: int = 10) -> str: rows = sorted(d.items(), key=lambda x: x[1], reverse=True)[:limit] - return "".join(f"{k}{v}" for k, v in rows) 
+ return "".join(f"{_he(str(k))}{v}" for k, v in rows) rssi_rows = "".join( f"{bucket}{cnt}" for bucket, cnt in rssi_buckets.items() ) + overview_rows: list[list[str]] = [] + overview_totals = { + "clients": 0, + "heavy": 0, + "average_kb": 0, + "heavy_average_kb": 0, + "networks": 0, + } + if isinstance(clients_overview_raw, dict): + for net_id, overview in sorted(clients_overview_raw.items(), key=lambda item: network_names.get(item[0], item[0])): + if not isinstance(overview, dict) or overview.get("error"): + continue + counts = overview.get("counts") or {} + usages = overview.get("usages") or {} + total_clients = int(counts.get("total") or 0) + heavy_clients = int(counts.get("withHeavyUsage") or 0) + average_kb = int(usages.get("average") or 0) + heavy_average_kb = int(usages.get("withHeavyUsageAverage") or 0) + overview_totals["clients"] += total_clients + overview_totals["heavy"] += heavy_clients + overview_totals["average_kb"] += average_kb + overview_totals["heavy_average_kb"] += heavy_average_kb + overview_totals["networks"] += 1 + overview_rows.append([ + network_names.get(net_id, net_id), + str(total_clients), + str(heavy_clients), + _format_usage_kb(average_kb), + _format_usage_kb(heavy_average_kb), + ]) - if wireless_clients: + client_source = "network_clients.json" if network_clients else "wireless_clients.json" + if client_records: client_tables = f""" +
    +
    Source Data Coverage
    +
    + Client detail source: {_he(client_source)}. The preferred source is + network_clients.json from GET /networks/{{networkId}}/clients, + because it includes wired and wireless clients. Older backups may only include + wireless-only fallback data. +
    +
    + +

    Clients by Connection Type

    + + + {_top_rows(connection_counts)} +
    Connection TypeClient Count
    +

    Clients by SSID

    @@ -2528,19 +3341,50 @@ def _top_rows(d: Dict[str, int], limit: int = 10) -> str: {rssi_rows}
    SSIDClient Count
    RSSI RangeClient Count
    + +

    Top Clients by Usage

    + + + {''.join('' + ''.join(f'' for cell in row) + '' for row in top_client_rows)} +
    ClientConnectionRecent DeviceSSIDVLANUsage
    {_he(str(cell))}
    """ else: client_tables = """
    -
    No wireless client data available in this backup.
    +
    Source Data Coverage
    +
    + No client detail records were available in this backup. Current backups should collect + network_clients.json from GET /networks/{networkId}/clients. + Older backups may only have wireless_clients.json, which does not cover + wired clients and may be unavailable in current Dashboard API versions. +
    """ + if overview_rows: + average_usage = int(overview_totals["average_kb"] / max(overview_totals["networks"], 1)) + heavy_average_usage = int(overview_totals["heavy_average_kb"] / max(overview_totals["networks"], 1)) + client_tables += render_section( + "Client Overview Summary", + [ + ["Metric", "Value"], + ["Networks with overview data", str(overview_totals["networks"])], + ["Total clients", str(overview_totals["clients"])], + ["Heavy-usage clients", str(overview_totals["heavy"])], + ["Average usage per network", _format_usage_kb(average_usage)], + ["Average heavy-client usage per network", _format_usage_kb(heavy_average_usage)], + ], + ) + client_tables += render_section( + "Client Overview by Network", + [["Network", "Clients", "Heavy Usage", "Avg Usage", "Heavy Avg Usage"]] + overview_rows, + ) client_analysis_html = f"""

    15. Client Analysis

    -

    Analysis of {len(wireless_clients)} wireless client record(s) captured - in this backup. Wired client detail requires switch port client data which is not - collected in the current pipeline.

    +

    Analysis of {len(client_records)} client detail record(s) + and {overview_totals["networks"]} network overview record(s) captured + in this backup. Network client detail includes recent wired/wireless attachment, VLAN, + SSID where applicable, OS/device prediction, and usage when returned by the Meraki API.

    {client_tables}
    """ @@ -2548,109 +3392,463 @@ def _top_rows(d: Dict[str, int], limit: int = 10) -> str: # ========================================================= # SECTION 17: UNIFI COMPARISON & REFRESH PLANNING # ========================================================= - # Heuristic model mapping: Meraki family -> UniFi equivalent + indicative USD street price - # Prices are published MSRP / street estimates (2025–2026) and carry a planning-only disclaimer. - _UNIFI_MAP = { - # MX appliances -> UniFi Dream Machine / Cloud Gateway - "MX68": ("UDM SE", 1_299, 649), - "MX75": ("UCG-Ultra", 599, 299), - "MX85": ("UDM Pro Max", 1_999, 899), - "MX95": ("UDM Pro Max", 1_999, 899), - "MX105": ("UDM Pro SE", 1_499, 699), - "MX250": ("UCG-Enterprise", 3_999, 1_799), - "MX450": ("UCG-Enterprise", 3_999, 1_799), - # MS switches -> UniFi USW Pro / Aggregation - "MS120": ("USW Lite 16 PoE", 349, 179), - "MS125": ("USW Pro 24 PoE", 849, 549), - "MS210": ("USW Pro 24", 649, 399), - "MS220": ("USW Pro 24", 649, 399), - "MS225": ("USW Pro 24 PoE", 849, 549), - "MS250": ("USW Pro 48 PoE", 1_299, 799), - "MS320": ("USW Pro Aggregation", 999, 699), - "MS350": ("USW Enterprise 24 PoE", 1_299, 899), - "MS390": ("USW Enterprise 48 PoE", 1_799, 1_199), - "MS410": ("USW Aggregation", 799, 499), - "MS420": ("USW Aggregation", 799, 499), - "MS425": ("USW Pro Aggregation", 999, 699), - "MS450": ("USW Pro Aggregation", 999, 699), - # MR access points -> UniFi U6 / U7 series - "MR18": ("U6 Lite", 199, 109), - "MR20": ("U6 Lite", 199, 109), - "MR28": ("U6 Mesh", 199, 129), - "MR30": ("U6 LR", 299, 169), - "MR33": ("U6 LR", 299, 169), - "MR36": ("U6 Pro", 349, 189), - "MR42": ("U6 Pro", 349, 189), - "MR44": ("U7 Pro", 499, 299), - "MR46": ("U7 Pro", 499, 299), - "MR46E": ("U7 Pro Max", 699, 449), - "MR52": ("U7 Pro Max", 699, 449), - "MR55": ("U7 Pro Max", 699, 449), - "MR56": ("U7 Pro Max", 699, 449), - "MR57": ("U7 Pro Max", 699, 449), - "MR70": ("U6 Mesh", 199, 129), - "MR74": ("U6 Mesh Pro", 299, 
179), - "MR76": ("U7 Outdoor", 499, 299), - "MR84": ("U7 Pro Max", 699, 449), - "MR86": ("U7 Outdoor", 499, 299), - } + # Equivalent mappings are maintained in reporting/reference/pricing_reference.json. + # Org-local pricing.json still wins, because reseller and E-rate pricing varies by client. + _UNIFI_MAP = pricing_payload.get("unifi_equivalents") if isinstance(pricing_payload, dict) else {} + _PRICE_MODELS = pricing_payload.get("models") if isinstance(pricing_payload, dict) else {} + _PRICE_PRODUCTS = pricing_payload.get("products") if isinstance(pricing_payload, dict) else {} + _PRICE_META = pricing_payload.get("meta") if isinstance(pricing_payload, dict) else {} + _PRICE_UPDATED = str((_PRICE_META or {}).get("updated") or REPORT_VERSION) + _PRICE_CURRENCY = str((_PRICE_META or {}).get("currency") or "USD") + + def _match_prefix(model: str, mapping: Dict[str, Any]) -> str | None: + text = str(model or "").upper() + return next((key for key in sorted(mapping, key=len, reverse=True) if text.startswith(str(key).upper())), None) + + def _price_model_data(model: str) -> Dict[str, Any]: + if not isinstance(_PRICE_MODELS, dict): + return {} + exact = _PRICE_MODELS.get(model) + prefix_key = _match_prefix(model, _PRICE_MODELS) + data = exact if isinstance(exact, dict) else _PRICE_MODELS.get(prefix_key or "") + if not isinstance(data, dict): + return {} + return data + + def _unit_price(model: str, field: str) -> int | float | None: + data = _price_model_data(model) + if not data: + return None + value = data.get(field) + return value if isinstance(value, (int, float)) else None + + def _money(value: int | float | None) -> str: + if not isinstance(value, (int, float)): + return "Pricing needed" + return f"${value:,.0f}" if float(value).is_integer() else f"${value:,.2f}" + + def _is_number(value: Any) -> bool: + return isinstance(value, (int, float)) and not isinstance(value, bool) + + def _price_confidence_badge(label: str) -> str: + normalized = str(label or 
"Reference").strip() + css = "badge-info" + if normalized.lower().startswith("used"): + css = "badge-warn" + elif normalized.lower().startswith("quote"): + css = "badge-fail" + elif normalized.lower().startswith("client"): + css = "badge-ok" + return f'{_he(normalized)}' + + def _product(product_key: str | None) -> Dict[str, Any]: + if not product_key or not isinstance(_PRICE_PRODUCTS, dict): + return {} + data = _PRICE_PRODUCTS.get(product_key) + return data if isinstance(data, dict) else {} + + def _entry_product_key(entry: Any) -> str | None: + if isinstance(entry, dict): + value = entry.get("product_key") or entry.get("sku") + return str(value) if value else None + return None + + def _entry_label(entry: Any, product: Dict[str, Any]) -> str: + if isinstance(entry, dict): + for key in ("name", "label", "equivalent"): + if entry.get(key): + return str(entry[key]) + if product: + return str(product.get("name") or product.get("sku") or "UniFi equivalent") + return str(entry or "UniFi equivalent") + + def _product_unit_cost(product: Dict[str, Any], fallback: int | float | None = None) -> int | float | None: + value = product.get("unit_cost") if isinstance(product, dict) else None + return value if isinstance(value, (int, float)) else fallback + + def _product_care_cost(product: Dict[str, Any]) -> int | None: + value = product.get("ui_care_5yr_unit_cost") if isinstance(product, dict) else None + return int(value) if isinstance(value, (int, float)) else None + + def _product_cyber_cost(product: Dict[str, Any]) -> int | None: + value = product.get("cybersecure_annual_unit_cost") if isinstance(product, dict) else None + return int(value) if isinstance(value, (int, float)) else None + + def _product_source(product: Dict[str, Any]) -> str: + source = str(product.get("source_url") or "").strip() if isinstance(product, dict) else "" + label = str(product.get("source_label") or "").strip() if isinstance(product, dict) else "" + if not source: + return _he(label or 
"Reference") + return f'{_he(label or "Ubiquiti Store")}' + + def _product_price_confidence(product: Dict[str, Any]) -> str: + if not isinstance(product, dict): + return "Reference" + explicit = str(product.get("pricing_confidence") or "").strip() + if explicit: + return explicit + if str(product.get("category") or "") == "meraki_used": + return "Used-market" + if str(product.get("vendor") or "").lower() == "ubiquiti": + return "Public MSRP" + return "Reference" + + def _meraki_price_source(model: str) -> str: + data = _price_model_data(model) + source = str(data.get("meraki_unit_source") or "").strip() if data else "" + return source or "Quote needed" + + def _meraki_price_confidence(model: str) -> str: + source = _meraki_price_source(model).lower() + if "networktigers" in source or "used" in source: + return "Used-market" + if source == "quote needed": + return "Quote needed" + return "Client quote" + + def _model_counts_for_refresh() -> List[Dict[str, Any]]: + rows: Dict[str, Dict[str, Any]] = {} + source_devices = devices_avail if isinstance(devices_avail, list) and devices_avail else inventory_devices + production_statuses = {"online", "alerting"} + saw_model = False + for device in source_devices if isinstance(source_devices, list) else []: + if not isinstance(device, dict): + continue + model = str(device.get("model") or device.get("sku") or "").strip() + if not model: + continue + saw_model = True + status = str(device.get("status") or "unknown").strip().lower() + row = rows.setdefault( + model, + {"model": model, "inventory_qty": 0, "quoted_qty": 0, "excluded_qty": 0, "excluded_statuses": {}}, + ) + row["inventory_qty"] += 1 + if status in production_statuses: + row["quoted_qty"] += 1 + else: + row["excluded_qty"] += 1 + excluded = row["excluded_statuses"] + excluded[status or "unknown"] = excluded.get(status or "unknown", 0) + 1 + if not saw_model: + for model, count in top_models: + try: + qty = int(count) + except (TypeError, ValueError): + continue + 
rows[str(model)] = { + "model": str(model), + "inventory_qty": qty, + "quoted_qty": qty, + "excluded_qty": 0, + "excluded_statuses": {}, + } + return sorted(rows.values(), key=lambda item: (-int(item["quoted_qty"]), str(item["model"]))) + + def _excluded_status_text(row: Dict[str, Any]) -> str: + statuses = row.get("excluded_statuses") + if not isinstance(statuses, dict) or not statuses: + return "—" + return ", ".join(f"{_he(k)}: {v}" for k, v in sorted(statuses.items())) + + def _connected_sfp_summary() -> Tuple[int, int, int]: + total_sfp = 0 + connected_sfp = 0 + uplink_sfp = 0 + raw = switch_port_statuses_by_switch if isinstance(switch_port_statuses_by_switch, dict) else {} + for ports in raw.values(): + if not isinstance(ports, list): + continue + for port in ports: + if not isinstance(port, dict): + continue + port_id = str(port.get("portId") or "") + if not _is_sfp_like_port(port_id): + continue + total_sfp += 1 + if str(port.get("status") or "").lower() == "connected": + connected_sfp += 1 + if port.get("isUplink"): + uplink_sfp += 1 + return total_sfp, connected_sfp, uplink_sfp _unifi_rows = "" _meraki_total = 0 - _unifi_total = 0 - _eol_swap_meraki = 0 - _eol_swap_unifi = 0 - - for _model, _count in top_models: - _mprefix = str(_model).upper() - _map_key = next( - (k for k in _UNIFI_MAP if _mprefix.startswith(k)), - None, - ) + _unifi_total = 0 + _unifi_care_total = 0 + _unifi_cyber_annual_total = 0 + _priced_rows = 0 + _unifi_priced_rows = 0 + _mapped_rows = 0 + _mapped_quoted_qty = 0 + _eol_models_mapped: List[str] = [] + _catalog_models = _model_counts_for_refresh() + _inventory_refresh_qty = sum(int(row.get("inventory_qty") or 0) for row in _catalog_models) + _quoted_refresh_qty = sum(int(row.get("quoted_qty") or 0) for row in _catalog_models) + _excluded_refresh_qty = sum(int(row.get("excluded_qty") or 0) for row in _catalog_models) + _excluded_status_totals: Dict[str, int] = {} + for _row in _catalog_models: + for _status, _status_count in 
(_row.get("excluded_statuses") or {}).items(): + _excluded_status_totals[str(_status)] = _excluded_status_totals.get(str(_status), 0) + int(_status_count or 0) + _excluded_status_summary = ", ".join( + f"{_he(status)}: {count}" for status, count in sorted(_excluded_status_totals.items()) + ) or "none" + _category_totals: Dict[str, Dict[str, int]] = {} + + for _row in _catalog_models: + _model = str(_row.get("model") or "") + _count = int(_row.get("quoted_qty") or 0) + _inventory_count = int(_row.get("inventory_qty") or 0) + _excluded_count = int(_row.get("excluded_qty") or 0) + if _count <= 0: + continue + _model_text = str(_model) + _mprefix = _model_text.upper() + _map_key = _match_prefix(_model_text, _UNIFI_MAP) if not _map_key: continue - _unifi_name, _meraki_price, _unifi_price = _UNIFI_MAP[_map_key] + _mapped_rows += 1 + _mapped_quoted_qty += _count + _entry = _UNIFI_MAP[_map_key] + _product_key = _entry_product_key(_entry) + _product_data = _product(_product_key) + _product_category = str(_product_data.get("category") or "uncategorized") + _unifi_name = _entry_label(_entry, _product_data) + _rationale = str(_entry.get("rationale") or "") if isinstance(_entry, dict) else "" _is_eol = any(_mprefix.startswith(p) for p in _EOL_PREFIXES) - _row_mx = _meraki_price * _count - _row_ux = _unifi_price * _count - _meraki_total += _row_mx - _unifi_total += _row_ux + _meraki_price = _unit_price(_model_text, "meraki_unit_cost") + _meraki_source = _meraki_price_source(_model_text) if _is_number(_meraki_price) else "Quote needed" + _meraki_confidence = _meraki_price_confidence(_model_text) + _unifi_price = _product_unit_cost(_product_data, _unit_price(_model_text, "unifi_unit_cost")) + _unifi_confidence = _product_price_confidence(_product_data) if _is_number(_unifi_price) else "Quote needed" + _ui_care_price = _product_care_cost(_product_data) + _cyber_annual = _product_cyber_cost(_product_data) + _row_mx = _meraki_price * _count if _is_number(_meraki_price) else None + 
_row_ux = _unifi_price * _count if _is_number(_unifi_price) else None + _row_care = _ui_care_price * _count if _is_number(_ui_care_price) else None + _row_cyber = _cyber_annual * _count if _is_number(_cyber_annual) else None + if _is_number(_row_mx): + _meraki_total += _row_mx + if _is_number(_row_ux): + _unifi_total += _row_ux + _unifi_priced_rows += 1 + if _is_number(_row_care): + _unifi_care_total += _row_care + if _is_number(_row_cyber): + _unifi_cyber_annual_total += _row_cyber + _bucket = _category_totals.setdefault(_product_category, {"hardware": 0, "care": 0, "cyber": 0, "qty": 0}) + _bucket["qty"] += _count + if _is_number(_row_ux): + _bucket["hardware"] += _row_ux + if _is_number(_row_care): + _bucket["care"] += _row_care + if _is_number(_row_cyber): + _bucket["cyber"] += _row_cyber + if _is_number(_row_mx) and _is_number(_row_ux): + _priced_rows += 1 if _is_eol: - _eol_swap_meraki += _row_mx - _eol_swap_unifi += _row_ux + _eol_models_mapped.append(_model_text) _unifi_rows += ( f"" f"{_he(_model)}" + f"{_inventory_count}" f"{_count}" - f"{_he(_unifi_name)}" - f"${_meraki_price:,}" - f"${_unifi_price:,}" - f"${_row_mx:,}" - f"${_row_ux:,}" - f'{"⚠ EOL" if _is_eol else "—"}' + f"{_excluded_count}
    {_excluded_status_text(_row)}" + f"{_he(_unifi_name)}{f'
    {_he(_rationale)}' if _rationale else ''}" + f"{_money(_meraki_price)}" + f"{_he(_meraki_source)}
    {_price_confidence_badge(_meraki_confidence)}" + f"{_money(_unifi_price)}
    {_price_confidence_badge(_unifi_confidence)}" + f"{_money(_ui_care_price)}" + f"{_money(_row_mx)}" + f"{_money(_row_ux)}" + f"{_money(_row_care)}" + f'{"EOL" if _is_eol else "—"}' f"" ) - _savings = _meraki_total - _unifi_total - _savings_pct = round(100 * _savings / _meraki_total) if _meraki_total else 0 + _savings = _meraki_total - _unifi_total if _priced_rows else None + _savings_pct = round(100 * _savings / _meraki_total) if _priced_rows and _meraki_total else None + _sfp_total, _connected_sfp, _uplink_sfp = _connected_sfp_summary() + _aggregation_rows = "" + _aggregation_total = 0 + _aggregation_care_total = 0 + if _connected_sfp >= 9: + _agg_key = "USW-Pro-Aggregation" + _agg_qty = max(1, math.ceil(_connected_sfp / 28)) + _agg_reason = ( + f"{_connected_sfp} connected SFP/module ports were observed. " + "Use a 32-port aggregation switch as a planning reference for a main closet/core design." + ) + elif _connected_sfp > 0: + _agg_key = "USW-Aggregation" + _agg_qty = 1 + _agg_reason = ( + f"{_connected_sfp} connected SFP/module port(s) were observed. " + "An 8-port aggregation switch may be sufficient if the design stays small." + ) + else: + _agg_key = "" + _agg_qty = 0 + _agg_reason = "No connected SFP/module ports were observed in this backup." 
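The sizing branch above reads as a small planning heuristic: one 8-port aggregation unit for small designs, and ceil-division over 28 usable ports per 32-port unit once nine or more connected SFP ports are observed. A sketch with the thresholds copied from the patch (the function name is illustrative):

```python
import math


def aggregation_plan(connected_sfp: int) -> tuple:
    """Planning-only aggregation tier sized from observed connected SFP ports."""
    if connected_sfp >= 9:
        # /28 leaves a few spare ports per 32-port unit for growth.
        return "USW-Pro-Aggregation", max(1, math.ceil(connected_sfp / 28))
    if connected_sfp > 0:
        return "USW-Aggregation", 1
    return "", 0


assert aggregation_plan(0) == ("", 0)
assert aggregation_plan(4) == ("USW-Aggregation", 1)
assert aggregation_plan(30) == ("USW-Pro-Aggregation", 2)
```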
+ if _agg_key: + _agg_product = _product(_agg_key) + _agg_unit = _product_unit_cost(_agg_product) + _agg_care = _product_care_cost(_agg_product) + _agg_total = _agg_unit * _agg_qty if _is_number(_agg_unit) else None + _agg_care_total = _agg_care * _agg_qty if _is_number(_agg_care) else None + if _is_number(_agg_total): + _aggregation_total += _agg_total + _category_totals.setdefault("aggregation", {"hardware": 0, "care": 0, "cyber": 0, "qty": 0})["hardware"] += _agg_total + _category_totals["aggregation"]["qty"] += _agg_qty + if _is_number(_agg_care_total): + _aggregation_care_total += _agg_care_total + _category_totals.setdefault("aggregation", {"hardware": 0, "care": 0, "cyber": 0, "qty": 0})["care"] += _agg_care_total + _aggregation_rows = ( + "" + f"{_he(_agg_product.get('name') or _agg_key)}" + f"{_agg_qty}" + f"{_money(_agg_unit)}" + f"{_money(_agg_care)}" + f"{_money(_agg_total)}" + f"{_money(_agg_care_total)}" + f"{_he(_agg_reason)}" + "" + ) + + def _catalog_table(category: str, title: str) -> str: + rows = [] + if not isinstance(_PRICE_PRODUCTS, dict): + return "" + for key, product in sorted(_PRICE_PRODUCTS.items(), key=lambda item: (str((item[1] or {}).get("category")), str((item[1] or {}).get("name")))): + if not isinstance(product, dict) or product.get("category") != category: + continue + care = _product_care_cost(product) + cyber = _product_cyber_cost(product) + adders = [] + if isinstance(care, int): + adders.append(f"UI Care 5-year {_money(care)}") + if isinstance(cyber, int): + adders.append(f"CyberSecure annual {_money(cyber)}") + rows.append( + "" + f"{_he(product.get('name') or key)}
    {_he(product.get('sku') or key)}" + f"{_money(_product_unit_cost(product))}" + f"{_price_confidence_badge(_product_price_confidence(product))}" + f"{_he(' · '.join(adders) or '—')}" + f"{_he(product.get('description') or '')}" + f"{_product_source(product)}" + "" + ) + if not rows: + return "" + return f""" +

    {_he(title)}

    + + + {''.join(rows)} +
    ProductUnitConfidenceSupport / ServicesPlanning NotesSource
    + """ + + _reference_catalog_html = ( + _catalog_table("meraki_used", "Cisco/Meraki Used-Market Reference") + + _catalog_table("access_point", "Access Point Reference") + + _catalog_table("switch", "Access Switch Reference") + + _catalog_table("aggregation", "Aggregation Reference") + + _catalog_table("gateway", "Gateway Reference") + ) + _unifi_grand_total = _unifi_total + _aggregation_total + _unifi_grand_care_total = _unifi_care_total + _aggregation_care_total + + def _phase_amount(*categories: str, field: str = "hardware") -> int: + return sum((_category_totals.get(category) or {}).get(field, 0) for category in categories) + + _year1_hw = _phase_amount("access_point") + _year1_care = _phase_amount("access_point", field="care") + _year2_hw = _phase_amount("switch", "aggregation") + _year2_care = _phase_amount("switch", "aggregation", field="care") + _year3_hw = _phase_amount("gateway") + _year3_care = _phase_amount("gateway", field="care") + _year3_cyber = _phase_amount("gateway", field="cyber") + _cost_breakdown_rows = ( + "" + "Wireless AP hardware" + f"{_money(_year1_hw) if _year1_hw else '—'}" + f"{_price_confidence_badge('Public MSRP') if _year1_hw else _price_confidence_badge('Quote needed')}" + "Mapped active/alerting APs only; excludes dormant/offline APs until field validation." + "" + "" + "Access switch hardware" + f"{_money(_phase_amount('switch')) if _phase_amount('switch') else '—'}" + f"{_price_confidence_badge('Public MSRP') if _phase_amount('switch') else _price_confidence_badge('Quote needed')}" + "Mapped active/alerting access switches; PoE and uplink design should be validated closet by closet." + "" + "" + "Aggregation hardware" + f"{_money(_aggregation_total) if _aggregation_total else '—'}" + f"{_price_confidence_badge('Public MSRP') if _aggregation_total else _price_confidence_badge('Quote needed')}" + "Included only when connected SFP/module usage suggests a main closet aggregation candidate." 
+ "" + "" + "Gateway/security hardware" + f"{_money(_year3_hw) if _year3_hw else '—'}" + f"{_price_confidence_badge('Public MSRP') if _year3_hw else _price_confidence_badge('Quote needed')}" + "MX replacement is a planning placeholder until firewall, VPN, filtering, logging, and HA requirements are signed off." + "" + "" + "Optional support/services add-ons" + f"{_money(_unifi_grand_care_total + _unifi_cyber_annual_total) if (_unifi_grand_care_total + _unifi_cyber_annual_total) else '—'}" + f"{_price_confidence_badge('Public MSRP') if (_unifi_grand_care_total + _unifi_cyber_annual_total) else _price_confidence_badge('Quote needed')}" + "UI Care and CyberSecure are shown separately from hardware so support choices stay explicit." + "" + "" + "Not included" + "Pricing needed" + f"{_price_confidence_badge('Quote needed')}" + "Optics/transceivers, cabling, licensing renewal deltas, tax, freight, professional services, project contingency, and E-rate/reseller discounts." + "" + ) + _three_year_rows = ( + "" + "Year 1Wireless access refresh" + f"{_money(_year1_hw) if _year1_hw else '—'}" + f"{_money(_year1_care) if _year1_care else '—'}" + "Replace active APs first; leave dormant/offline APs out of the quote until validated." + "" + "" + "Year 2Access switching and aggregation" + f"{_money(_year2_hw) if _year2_hw else '—'}" + f"{_money(_year2_care) if _year2_care else '—'}" + "Move closets in controlled batches; include aggregation only when connected SFP/module use warrants it." + "" + "" + "Year 3Gateway/security migration and cleanup" + f"{_money(_year3_hw) if _year3_hw else '—'}" + f"{_money(_year3_care + _year3_cyber) if (_year3_care + _year3_cyber) else '—'}" + "Validate firewall, VPN, content filtering, logging, and security subscriptions before replacing MX edge services." 
+ "" + ) if _unifi_rows: + _footer_meraki = _money(_meraki_total) if _priced_rows else "Pricing needed" + _footer_unifi = _money(_unifi_total) if _unifi_priced_rows else "Pricing needed" + _footer_delta = f"-{_savings_pct}%" if isinstance(_savings_pct, int) else "Pricing needed" _unifi_hw_table = f""" - - - + + + {_unifi_rows} - - - - + + + + +
- Meraki ModelQtyUniFi EquivalentMeraki Unit (est.)UniFi Unit (est.)Meraki TotalUniFi TotalFlag
+ Meraki ModelInventory QtyQuoted QtyExcludedUniFi EquivalentMeraki UnitMeraki SourceUniFi UnitUI Care / UnitMeraki TotalUniFi TotalUI Care TotalFlag
- Hardware totals (mapped devices only)${_meraki_total:,}${_unifi_total:,}−{_savings_pct}%
+ Hardware totals (active/alerting mapped rows only){_footer_meraki}{_footer_unifi}{_money(_unifi_care_total) if _unifi_care_total else "—"}{_footer_delta}
    """ @@ -2665,29 +3863,92 @@ def _top_rows(d: Dict[str, int], limit: int = 10) -> str: unifi_html = f"""

    17. UniFi Comparison & Refresh Planning

    -

- This section provides a heuristic cost comparison between the current Meraki
- environment and a notional UniFi replacement. It is a planning estimate only — not
- a procurement quote or a recommendation to replace. Prices are approximate 2025–2026
- street/MSRP estimates and will vary by reseller, volume, and configuration.
- Always validate with current partner pricing before presenting externally.

    +

+ This section maps current Meraki model families to UniFi replacement classes and
+ builds a first-pass migration bill of materials. It is a planning reference only,
+ not a procurement quote or a recommendation to replace. Built-in UniFi prices use
+ the maintained reporting/reference/pricing_reference.json catalog.
+ Cisco/Meraki prices are shown only when an explicit reference exists; NetworkTigers
+ entries are labeled NetworkTigers (used) because they are used-market
+ hardware references and exclude licensing, warranty, support, tax, freight, optics,
+ and implementation;
+ org-local pricing.json overrides should be used for reseller, E-rate,
+ or client-approved pricing.

    Planning Summary
- Mapped devices: {len([r for r in top_models if any(str(r[0]).upper().startswith(k) for k in _UNIFI_MAP)])} model type(s)
- · Meraki hardware estimate: ${_meraki_total:,}
- · UniFi hardware estimate: ${_unifi_total:,}
- · Estimated hardware delta: ${_savings:,} ({_savings_pct}% lower)
- {f"· EOL devices (hardware only): Meraki ${_eol_swap_meraki:,} vs UniFi ${_eol_swap_unifi:,}" if _eol_swap_meraki else ""}
+ Mapped model families: {_mapped_rows}
+ · Inventory devices considered: {_inventory_refresh_qty}
+ · Active/alerting devices found: {_quoted_refresh_qty}
+ · Quoted mapped devices: {_mapped_quoted_qty}
+ · Excluded dormant/offline/unknown devices: {_excluded_refresh_qty}
+ · Excluded status mix: {_excluded_status_summary}
+ · UniFi priced rows: {_unifi_priced_rows}
+ · Reference updated: {_he(_PRICE_UPDATED)}
+ · Meraki hardware total: {_money(_meraki_total) if _priced_rows else "Pricing needed"}
+ · UniFi mapped hardware total: {_money(_unifi_total) if _unifi_priced_rows else "Pricing needed"}
+ · Optional aggregation hardware: {_money(_aggregation_total) if _aggregation_total else "—"}
+ · UniFi planning total: {_money(_unifi_grand_total) if _unifi_grand_total else "Pricing needed"}
+ · UI Care 5-year add-on: {_money(_unifi_grand_care_total) if _unifi_grand_care_total else "—"}
+ · CyberSecure annual add-on: {_money(_unifi_cyber_annual_total) if _unifi_cyber_annual_total else "—"}
+ · Hardware delta: {_money(_savings) + f" ({_savings_pct}% lower)" if _is_number(_savings) and isinstance(_savings_pct, int) else "Pricing needed"}
+ {f"· EOL mapped families: {_he(', '.join(_eol_models_mapped[:6]))}" if _eol_models_mapped else ""}

- Note: Meraki hardware prices above do not include annual licensing (typically
- $X–$Y per device per year for Enterprise tier). UniFi has no recurring per-device
- subscription fees beyond optional UniFi OS Cloud (optional, ~$29/mo for remote management).
+ Currency: {_he(_PRICE_CURRENCY)}. Meraki pricing remains quote-dependent unless supplied
+ by org-local pricing.json. Validate all pricing, support terms, tax, freight,
+ optics, and professional services before using externally.
    {_unifi_hw_table} +

    Migration Cost Breakdown

    + + + + + {_cost_breakdown_rows} +
    Cost AreaPlanning AmountConfidenceNotes
    + +

    Three-Year Migration Budget View

    + + + + + {_three_year_rows} + + + + + + + + +
    PhaseScopeHardwareSupport / Services Add-onsPlanning Notes
    Three-year planning total{_money(_unifi_grand_total) if _unifi_grand_total else "Pricing needed"}{_money(_unifi_grand_care_total + _unifi_cyber_annual_total) if (_unifi_grand_care_total + _unifi_cyber_annual_total) else "—"}{_money(_unifi_grand_total + _unifi_grand_care_total + _unifi_cyber_annual_total) if _unifi_grand_total else "Pricing needed"}
    + +

    Aggregation / Main Closet Reference

    +
    +
    Observed SFP Footprint
    +
    + SFP/module ports observed: {_sfp_total} + · Connected SFP/module ports: {_connected_sfp} + · Uplink SFP/module ports: {_uplink_sfp} +
    + {_he(_agg_reason)} +
    +
    + + + + + {_aggregation_rows or ''} +
    CandidateQtyUnitUI Care / UnitTotalUI Care TotalReason
    No aggregation switch add-on suggested from observed SFP usage.
    + +

    Maintained UniFi Public Reference Catalog

    +

+ The catalog below is kept in source control so migration calculations are repeatable
+ and auditable. It should be refreshed before client-facing procurement decisions.

    + {_reference_catalog_html} +

    Licensing & Support Model Comparison

@@ -2698,8 +3959,8 @@ def _top_rows(d: Dict[str, int], limit: int = 10) -> str:
-
+
@@ -2763,11 +4024,65 @@ def _top_rows(d: Dict[str, int], limit: int = 10) -> str:
     """
+    vlan_reference_rows = [
+        ("1", "Native / Management", "10.1.0.0/16", "Switch, MX, AP management; IT jump hosts", "IT-only management access; no user assignment"),
+        ("10", "Servers & Controllers", "10.10.0.0/16", "SIS, NVR, file shares, local controllers", "Allow approved staff/admin sources to specific services only"),
+        ("20", "Facilities / IoT", "10.20.0.0/16", "HVAC, PA, alarms, signage", "Outbound vendor/NTP/DNS only; block inbound and lateral movement"),
+        ("30", "Security Devices", "10.30.0.0/16", "Cameras and door access panels", "Permit NVR/control-plane flows; block general internet and user VLAN access"),
+        ("100", "Admin Staff", "10.100.0.0/16", "Admin SSID and office workstations", "Least-privilege LAN access; deny student and guest networks"),
+        ("110-180", "Teacher / Classroom Blocks", "10.110.0.0/16 - 10.180.0.0/16", "Teacher devices and classroom carts by building or role", "Local print/cast/mDNS where required; restrict server access to approved applications"),
+        ("200", "Voice / Collaboration", "10.200.0.0/16", "VoIP phones, PA speakers, room systems", "SIP/RTP to call control only; preserve EF/voice QoS"),
+        ("250", "Student / BYOD", "10.250.0.0/16", "Student SSID and unmanaged student devices", "Internet-only with content filtering; no internal LAN access"),
+        ("254", "Guest / Visitor", "10.254.0.0/16", "Guest SSID and captive portal users", "Internet-only; captive portal and rate limits"),
+        ("400", "Events / Special Use", "10.40.0.0/16", "Athletics, auditorium AV, temporary wireless", "Time-bound policy; block production VLANs except explicitly approved multicast"),
+    ]
+    vlan_reference_html = """
+
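Since the VLAN reference subnets are static data, a stdlib `ipaddress` check (an editorial sketch for review, not part of this patch) catches typos such as an out-of-range octet before they reach a report. The "start - end" convention used by the multi-building row is split and each side checked separately:

```python
import ipaddress

def invalid_subnets(rows):
    """Return the reference subnets that do not parse as IPv4 networks.

    Hypothetical review helper, not code from reporting/app.py. Rows follow
    the (vlan, name, subnet, devices, policy) tuple shape used above.
    """
    bad = []
    for _vlan, _name, subnet, _devices, _policy in rows:
        for part in str(subnet).split(" - "):
            try:
                ipaddress.ip_network(part.strip())  # strict: host bits must be zero
            except ValueError:
                bad.append(part.strip())
    return bad

rows = [
    ("1", "Native / Management", "10.1.0.0/16", "", ""),
    ("400", "Events / Special Use", "10.400.0.0/16", "", ""),  # 400 > 255: invalid octet
]
print(invalid_subnets(rows))  # ['10.400.0.0/16']
```

Running such a check in the test suite keeps the reference table trustworthy as rows are added.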
    +

    18. K-12 VLAN Segmentation Reference

    +

    This supplemental design is a reference blueprint for school network segmentation. It should be validated against the current Meraki Dashboard configuration, firewall policy, identity provider, print/casting needs, and building-by-building operational requirements before implementation.

    +
Licensing model Mandatory annual per-device license (co-term or Enterprise Agreement). Devices enter limited mode without active license.
- No per-device license fees. Hardware purchased once. Optional cloud
- management subscription (~$29/mo).
+ No per-device network-device license fees. Hardware purchased once.
+ Optional cloud services should be priced from current Ubiquiti terms.
    Management platform
    + + + + + """ + "".join( + "" + f"" + f"" + f"" + f"" + f"" + "" + for vlan, name, subnet, devices, policy in vlan_reference_rows + ) + """ + +
    VLANName / PurposeReference SubnetTypical DevicesPolicy Intent
    {_he(vlan)}{_he(name)}{_he(subnet)}{_he(devices)}{_he(policy)}
    +
    +
    Dashboard Implementation Notes
    +
    + Map SSIDs to tagged VLANs, keep management unassigned to users, apply deny-by-default inter-VLAN firewall rules, and use group policies for guest, student, IoT, and event exceptions. Treat this as target architecture, not evidence of current compliance. +
    +
    +
    + """ + + end_report_html = f""" +
    +
    +

    End of Report

    +

    TM Meraki Baseline

    +

    Release {REPORT_VERSION}  •  Generated {_report_ts}

    +

    {_he(org_name)}

    +
    +
    + """ + full_body = ( cover_html + _schema_banner + toc_html + exec_html + + report_guide_html + network_overview_html + topology_html + traffic_html @@ -2782,30 +4097,24 @@ def _top_rows(d: Dict[str, int], limit: int = 10) -> str: + wan_capacity_html + ap_interference_html + client_analysis_html - + switch_deep_dive_html + + switch_main_report_html + unifi_html + + vlan_reference_html + + end_report_html ) - exec_body = cover_html + _schema_banner + exec_html + exec_body = cover_html + _schema_banner + exec_html + report_guide_html + end_report_html backup_body = ( cover_html + _schema_banner + toc_backup_html + + backup_intro_html + + config_coverage_html + network_overview_html - + topology_html - + traffic_html - + issues_html - + poe_html + security_html - + recommendations_html - + cis8_html + licensing_html - + config_coverage_html - + budget_forecast_html - + wan_capacity_html - + ap_interference_html + client_analysis_html + switch_deep_dive_html - + unifi_html + + end_report_html ) if report_kind == "exec": @@ -2819,8 +4128,28 @@ def main(argv: list[str] | None = None) -> int: parser.add_argument("--source-dir", help="Generate reports from a single backup/fixture directory.") parser.add_argument("--org-name", help="Display name for --source-dir reports.") parser.add_argument("--output-dir", help="Directory for generated reports when using --source-dir.") + parser.add_argument( + "--reports-dir", + help=( + "Write multi-org report output under this reports directory instead of inside backups/. " + "Each org gets reports/// plus reports/latest// aliases." 
+ ), + ) + parser.add_argument( + "--pdf-only", + action="store_true", + help="Remove generated HTML artifacts after PDF rendering succeeds.", + ) + parser.add_argument( + "--fixed-now", + type=_validate_fixed_now, + help="Use a fixed ISO timestamp for deterministic report filenames and visible report dates.", + ) args = parser.parse_args(argv) + if args.fixed_now: + os.environ[FIXED_NOW_ENV] = args.fixed_now + if args.source_dir: source_dir = os.path.abspath(args.source_dir) if not os.path.isdir(source_dir): @@ -2828,7 +4157,20 @@ def main(argv: list[str] | None = None) -> int: return 1 org_name = args.org_name or _read_org_name(source_dir) output_dir = os.path.abspath(args.output_dir) if args.output_dir else None - generated = generate_org_reports(source_dir, org_name, output_dir=output_dir) + latest_dir = None + if args.reports_dir and not output_dir: + reports_dir = os.path.abspath(args.reports_dir) + run_ts = _current_run_ts() + output_dir = _report_run_output_dir(reports_dir, org_name, run_ts) + latest_dir = _report_latest_output_dir(reports_dir, org_name) + generated = generate_org_reports( + source_dir, + org_name, + output_dir=output_dir, + latest_dir=latest_dir, + keep_html=not args.pdf_only, + run_ts=run_ts if args.reports_dir and not args.output_dir else None, + ) log.info("Done — %d report(s) generated.", generated) return 0 @@ -2840,7 +4182,22 @@ def main(argv: list[str] | None = None) -> int: generated = 0 for org_dir in org_dirs: - generated += generate_org_reports(org_dir, _read_org_name(org_dir)) + org_name = _read_org_name(org_dir) + output_dir = None + latest_dir = None + if args.reports_dir: + reports_dir = os.path.abspath(args.reports_dir) + run_ts = _current_run_ts() + output_dir = _report_run_output_dir(reports_dir, org_name, run_ts) + latest_dir = _report_latest_output_dir(reports_dir, org_name) + generated += generate_org_reports( + org_dir, + org_name, + output_dir=output_dir, + latest_dir=latest_dir, + keep_html=not args.pdf_only, + 
run_ts=run_ts if args.reports_dir else None, + ) log.info("Done — %d report(s) generated.", generated) return 0 diff --git a/reporting/common.py b/reporting/common.py index 53bad7e..ea2c157 100644 --- a/reporting/common.py +++ b/reporting/common.py @@ -11,7 +11,7 @@ BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) BACKUPS_DIR = os.path.join(BASE_DIR, "backups") -REPORT_VERSION = "1.0" +REPORT_VERSION = "2026_5_3" # Must match BACKUP_SCHEMA_VERSION in meraki_backup.py. # Increment here when report_generator.py adds new required fields/files. @@ -150,14 +150,21 @@ def md_to_html(md_text: str) -> str: return "\n".join(html_lines) -def render_section(title: str, rows: List[List[str]]) -> str: +def render_section(title: str, rows: List[List[str]], headers: List[str] | None = None) -> str: if not rows: return "" header = f"

    {_he(title)}

    " + table_head = "" + if headers: + table_head = ( + "" + + "".join(f"{_he(str(c))}" for c in headers) + + "" + ) table_rows = "".join( "" + "".join(f"{_he(str(c))}" for c in r) + "" for r in rows ) - return f'{header}{table_rows}
    ' + return f'{header}{table_head}{table_rows}
    ' def render_kpi_row(items: List[Tuple[str, str]]) -> str: @@ -469,22 +476,24 @@ def _port_heat_label(score: float) -> str: def _speed_label(speed: str) -> str: - if speed.startswith("10 "): + speed_text = str(speed or "").strip().lower() + if speed_text.startswith("10 mb"): return "10M" - if speed.startswith("100 "): + if speed_text.startswith("100 mb"): return "100M" - if speed.startswith("2.5 "): + if speed_text.startswith("2.5 g"): return "2.5G" - if speed.startswith("5 "): + if speed_text.startswith("5 g"): return "5G" - if speed.startswith("10 G"): + if speed_text.startswith("10 g"): return "10G" - if speed.startswith("25 G"): + if speed_text.startswith("25 g"): return "25G" + if speed_text.startswith("100 g"): + return "100G" return "1G" def _is_sfp_like_port(port_id: str) -> bool: text = str(port_id or "").upper() return "_" in text or text.startswith("SFP") or "NM" in text or text.startswith("X") - diff --git a/reporting/html_shell.py b/reporting/html_shell.py index b9f0159..44fe84d 100644 --- a/reporting/html_shell.py +++ b/reporting/html_shell.py @@ -2,6 +2,7 @@ import os import shutil import subprocess +import sys from .common import REPORT_VERSION @@ -33,8 +34,55 @@ def build_html(doc_title: str, body: str) -> str: margin: 0; }} @page {{ - margin: 18mm 12mm; + margin: 22mm 12mm 20mm; background: var(--olive-100); + @top-left {{ + content: "TM Meraki Baseline"; + color: #575d3d; + font-family: "Inter", system-ui, -apple-system, "Segoe UI", Helvetica, Arial, sans-serif; + font-size: 8px; + font-weight: 700; + letter-spacing: 0.12em; + text-transform: uppercase; + }} + @top-right {{ + content: "Release {REPORT_VERSION}"; + color: #78716c; + font-family: "Inter", system-ui, -apple-system, "Segoe UI", Helvetica, Arial, sans-serif; + font-size: 8px; + }} + @bottom-center {{ + content: "Page " counter(page) " of " counter(pages); + color: #78716c; + font-family: "Inter", system-ui, -apple-system, "Segoe UI", Helvetica, Arial, sans-serif; + font-size: 8px; 
+ }} + }} + @page switch-detail {{ + size: A4 landscape; + margin: 10mm 8mm 8mm; + background: var(--olive-100); + @top-left {{ + content: "TM Meraki Baseline"; + color: #575d3d; + font-family: "Inter", system-ui, -apple-system, "Segoe UI", Helvetica, Arial, sans-serif; + font-size: 8px; + font-weight: 700; + letter-spacing: 0.12em; + text-transform: uppercase; + }} + @top-right {{ + content: "Release {REPORT_VERSION}"; + color: #78716c; + font-family: "Inter", system-ui, -apple-system, "Segoe UI", Helvetica, Arial, sans-serif; + font-size: 8px; + }} + @bottom-center {{ + content: "Page " counter(page) " of " counter(pages); + color: #78716c; + font-family: "Inter", system-ui, -apple-system, "Segoe UI", Helvetica, Arial, sans-serif; + font-size: 8px; + }} }} :root {{ --bg: #eef0e6; @@ -207,18 +255,18 @@ def build_html(doc_title: str, body: str) -> str: .toc-page {{ page-break-after: always; min-height: 241mm; - padding: 60px 72px; + padding: 44px 64px; display: flex; flex-direction: column; }} .toc-header {{ font-family: "Playfair Display", Georgia, "Times New Roman", serif; - font-size: 30px; + font-size: 28px; font-weight: 700; color: var(--olive-900); border-bottom: 2px solid var(--olive-400); - padding-bottom: 16px; - margin-bottom: 40px; + padding-bottom: 12px; + margin-bottom: 24px; }} .toc-list {{ list-style: none; @@ -227,22 +275,31 @@ def build_html(doc_title: str, body: str) -> str: counter-reset: none; }} .toc-list > li {{ - display: flex; - align-items: baseline; - gap: 14px; - padding: 11px 0; + display: block; + padding: 6px 0; border-bottom: 1px solid var(--line); - font-size: 13px; + font-size: 12px; }} .toc-list > li::before {{ display: none; }} + .toc-link {{ + display: flex; + align-items: baseline; + gap: 10px; + color: inherit; + text-decoration: none; + }} + .toc-link:hover .toc-entry {{ + color: var(--accent); + text-decoration: underline; + }} .toc-num {{ font-family: "Playfair Display", Georgia, "Times New Roman", serif; - font-size: 17px; 
+ font-size: 14px; font-weight: 700; color: var(--olive-400); - min-width: 28px; + min-width: 24px; }} .toc-entry {{ color: var(--ink); @@ -250,13 +307,13 @@ def build_html(doc_title: str, body: str) -> str: }} .toc-sub {{ list-style: none; - margin: 8px 0 0 48px; + margin: 4px 0 0 34px; padding: 0; }} .toc-sub-item {{ - font-size: 13px; + font-size: 11px; color: var(--muted); - padding: 4px 0; + padding: 2px 0; border: none; }} .toc-sub-item a {{ @@ -398,107 +455,118 @@ def build_html(doc_title: str, body: str) -> str: }} }} .switch-detail-page {{ + page: switch-detail; page-break-before: always; max-width: none; + margin-left: 0; + margin-right: 0; + }} + .switch-detail-page h3 {{ + margin: 0 0 2px; + font-size: 15px; + line-height: 1.1; }} .switch-detail-kicker {{ - margin-top: -8px; + margin: 0 0 6px; color: var(--muted); - font-size: 12px; + font-size: 8px; + line-height: 1.15; }} .switch-detail-stats {{ display: grid; - grid-template-columns: repeat(3, minmax(0, 1fr)); - gap: 10px; - margin: 18px 0; + grid-template-columns: repeat(9, minmax(0, 1fr)); + gap: 3px; + margin: 5px 0; }} .switch-detail-stat {{ border: 1px solid var(--line); background: var(--stone-50); - border-radius: 10px; - padding: 12px 14px; + border-radius: 4px; + padding: 3px 4px; }} .switch-detail-stat .label {{ display: block; - font-size: 9px; + font-size: 5.5px; text-transform: uppercase; - letter-spacing: 0.16em; + letter-spacing: 0; color: var(--muted); - margin-bottom: 6px; + margin-bottom: 1px; font-weight: 600; }} .switch-detail-stat .value {{ display: block; - font-size: 12px; + font-size: 6.5px; color: var(--ink); - line-height: 1.45; + line-height: 1.05; word-break: break-word; }} .switch-detail-card {{ border: 1px solid var(--line); background: white; - border-radius: 12px; - padding: 16px 18px; - margin: 16px 0 18px; + border-radius: 4px; + padding: 5px 6px; + margin: 5px 0 6px; }} .switch-detail-narrative {{ - margin-bottom: 10px; + margin-bottom: 3px; color: var(--ink); + 
font-size: 7px; + line-height: 1.12; }} .switch-port-summary {{ display: flex; flex-wrap: wrap; - gap: 14px; - font-size: 11px; + gap: 5px; + font-size: 6.5px; color: var(--muted); - margin: 2px 0 12px; + margin: 1px 0 4px; }} .switch-port-group {{ - margin-top: 12px; + margin-top: 4px; }} .switch-port-group-title {{ - font-size: 10px; - letter-spacing: 0.14em; + font-size: 6px; + letter-spacing: 0; text-transform: uppercase; color: var(--muted); - margin-bottom: 7px; + margin-bottom: 2px; font-weight: 700; }} .switch-port-group-kind {{ - margin-left: 8px; - letter-spacing: 0.08em; + margin-left: 3px; + letter-spacing: 0; font-weight: 600; opacity: 0.7; }} .switch-port-face {{ border: 1px solid var(--line); - border-radius: 10px; + border-radius: 4px; background: #f8fafc; - padding: 10px; + padding: 3px; }} .switch-port-row {{ display: grid; grid-auto-flow: column; grid-auto-columns: minmax(0, 1fr); - gap: 6px; - margin-top: 6px; + gap: 2px; + margin-top: 2px; }} .switch-port-row:first-child {{ margin-top: 0; }} .switch-port-cell {{ - border-radius: 6px; - min-height: 40px; + border-radius: 3px; + min-height: 19px; display: flex; align-items: center; justify-content: center; flex-direction: column; - gap: 2px; - font-size: 10px; + gap: 1px; + font-size: 6px; font-weight: 700; border: 1px solid transparent; color: #1f2937; - padding: 4px 2px; + padding: 1px; text-align: center; }} .switch-port-num {{ @@ -507,7 +575,7 @@ def build_html(doc_title: str, body: str) -> str: }} .switch-port-meta {{ display: block; - font-size: 8px; + font-size: 4.8px; font-weight: 600; opacity: 0.78; line-height: 1.05; @@ -530,25 +598,25 @@ def build_html(doc_title: str, body: str) -> str: }} .switch-detail-grid-empty {{ color: var(--muted); - font-size: 12px; - padding: 8px 0 2px; + font-size: 7px; + padding: 3px 0 1px; }} .switch-detail-legend {{ display: flex; flex-wrap: wrap; - gap: 12px; - margin-top: 12px; - font-size: 11px; + gap: 5px; + margin-top: 4px; + font-size: 6px; color: 
var(--muted); }} .switch-detail-legend span {{ display: inline-flex; align-items: center; - gap: 5px; + gap: 2px; }} .switch-detail-legend .swatch {{ - width: 10px; - height: 10px; + width: 6px; + height: 6px; border-radius: 2px; display: inline-block; border: 1px solid rgba(15, 23, 42, 0.08); @@ -562,9 +630,79 @@ def build_html(doc_title: str, body: str) -> str: .switch-detail-legend .swatch.speed-mgig {{ background: #dbeafe; box-shadow: inset 0 0 0 2px rgba(14, 165, 233, 0.22); }} .switch-detail-legend .swatch.speed-uplink {{ background: #fed7aa; box-shadow: inset 0 0 0 2px rgba(234, 88, 12, 0.24); }} .switch-detail-legend .swatch.sfp {{ background: white; border-style: dashed; border-width: 2px; }} - .switch-detail-table td {{ + table.data.switch-detail-table td {{ vertical-align: top; }} + table.data.switch-detail-table {{ + table-layout: fixed; + width: 100%; + font-size: 4.2px; + line-height: 0.95; + margin-top: 2px; + border-radius: 2px; + }} + table.data.switch-detail-table th, + table.data.switch-detail-table td {{ + padding: 0.2px 0.6px; + font-size: 4.2px; + word-break: keep-all; + overflow-wrap: normal; + hyphens: none; + white-space: nowrap; + }} + table.data.switch-detail-table th {{ + font-size: 4px; + letter-spacing: 0; + text-transform: none; + }} + table.data.switch-detail-table td:nth-child(2), + table.data.switch-detail-table td:nth-child(8), + table.data.switch-detail-table td:nth-child(13) {{ + white-space: normal; + overflow-wrap: anywhere; + }} + table.data.switch-detail-table .badge {{ + font-size: 3.9px; + padding: 0 1px; + border-radius: 1px; + line-height: 1; + }} + .switch-detail-table .c-port {{ width: 3.2%; }} + .switch-detail-table .c-label {{ width: 10%; }} + .switch-detail-table .c-heat {{ width: 4.2%; }} + .switch-detail-table .c-role {{ width: 4%; }} + .switch-detail-table .c-status {{ width: 5%; }} + .switch-detail-table .c-speed {{ width: 4.6%; }} + .switch-detail-table .c-duplex {{ width: 3.7%; }} + .switch-detail-table 
.c-vlan {{ width: 14%; }} + .switch-detail-table .c-total {{ width: 6.5%; }} + .switch-detail-table .c-rate {{ width: 6%; }} + .switch-detail-table .c-power {{ width: 6%; }} + .switch-detail-table .c-flags {{ width: 5%; }} + .switch-detail-table .c-neighbor {{ width: 28%; }} + .row-eos-critical td {{ + background: #fee2e2; + }} + .row-eos-announced td {{ + background: #fef3c7; + }} + .end-report {{ + min-height: 210mm; + display: flex; + align-items: center; + justify-content: center; + text-align: center; + color: var(--olive-900); + }} + .end-report h2 {{ + font-family: "Playfair Display", Georgia, "Times New Roman", serif; + font-size: 34px; + margin: 0 0 10px; + }} + .end-report p {{ + color: var(--muted); + margin: 4px 0; + }} .wan-capacity-chart {{ margin: 16px 0 22px; display: grid; @@ -699,6 +837,9 @@ def build_html(doc_title: str, body: str) -> str: gap: 10px; margin: 16px 0 24px; }} + .report-guide-grid {{ + grid-template-columns: repeat(4, 1fr); + }} .kpi {{ border: 1px solid var(--line); background: var(--stone-50); @@ -732,6 +873,18 @@ def build_html(doc_title: str, body: str) -> str: color: var(--ink); display: block; }} + .kpi-note {{ + margin-top: 5px; + color: var(--muted); + font-size: 8px; + line-height: 1.25; + word-break: break-word; + }} + .kpi-note a {{ + color: var(--olive-700); + text-decoration: none; + font-weight: 600; + }} /* ===================================================== SUMMARY CARDS @@ -1147,15 +1300,41 @@ def build_html(doc_title: str, body: str) -> str: def write_pdf(html_path: str, pdf_path: str) -> bool: - # Try weasyprint first + # Run WeasyPrint out-of-process. Native font/Pango/Cairo crashes can + # otherwise terminate the whole report generator before fallback handling. 
try: - import weasyprint # type: ignore + import weasyprint # type: ignore # noqa: F401 log.info("Using WeasyPrint for PDF generation: %s", pdf_path) - weasyprint.HTML(filename=html_path).write_pdf(pdf_path) - return True + result = subprocess.run( + [ + sys.executable, + "-c", + ( + "import sys, weasyprint; " + "weasyprint.HTML(filename=sys.argv[1]).write_pdf(sys.argv[2])" + ), + html_path, + pdf_path, + ], + capture_output=True, + text=True, + timeout=240, + ) + if result.returncode == 0 and os.path.exists(pdf_path): + return True + detail = (result.stderr or result.stdout or "").strip() + if result.returncode < 0: + log.warning("WeasyPrint crashed with signal %s while rendering %s", -result.returncode, html_path) + else: + log.warning("WeasyPrint exited %d while rendering %s", result.returncode, html_path) + if detail: + log.warning("WeasyPrint output: %s", detail[:1000]) + except subprocess.TimeoutExpired: + log.warning("WeasyPrint timed out while rendering %s", html_path) except Exception as e: log.warning("WeasyPrint failed: %s", e) + # Fallback to wkhtmltopdf wk = shutil.which("wkhtmltopdf") if wk: diff --git a/reporting/reference/meraki_hardware_catalog.json b/reporting/reference/meraki_hardware_catalog.json new file mode 100644 index 0000000..8a251be --- /dev/null +++ b/reporting/reference/meraki_hardware_catalog.json @@ -0,0 +1,179 @@ +{ + "meta": { + "name": "Meraki hardware reference", + "updated": "2026-05-03", + "notes": [ + "PoE budgets are static hardware capabilities from Cisco Meraki product documentation.", + "Observed draw still comes from collected Meraki Dashboard API telemetry.", + "Unknown models should render as unknown rather than estimated." 
+ ], + "sources": [ + { + "title": "MS225 Overview and Specifications", + "url": "https://documentation.meraki.com/MS/MS_Overview_and_Specifications/MS225_Overview_and_Specifications" + }, + { + "title": "MS120 Overview and Specifications", + "url": "https://documentation.meraki.com/MS/MS_Overview_and_Specifications/MS120_Overview_and_Specifications" + }, + { + "title": "PoE Support on MS Switches", + "url": "https://documentation.meraki.com/Switching/MS_-_Switches/Operate_and_Maintain/How-Tos/PoE_Support_on_MS_Switches" + }, + { + "title": "MS130 Datasheet", + "url": "https://documentation.meraki.com/Switching/MS_-_Switches/Product_Information/Overviews_and_Datasheets/MS130_Datasheet" + }, + { + "title": "MS210 Series Installation Guide", + "url": "https://documentation.meraki.com/MS/Install_and_Get_Started/Installation_Guides/MS210_Series_Installation_Guide" + }, + { + "title": "Catalyst 9300-M Datasheet", + "url": "https://documentation.meraki.com/Switching/Cloud_Management_with_IOS_XE/Product_Information/Overviews_and_Datasheets/Catalyst_9300-M_Datasheet" + } + ] + }, + "models": { + "MS120-8LP": { + "productType": "switch", + "poeBudgetWatts": 67, + "poePorts": 8, + "uplinkPorts": "2 SFP", + "source": "MS120 Overview and Specifications" + }, + "MS120-8FP": { + "productType": "switch", + "poeBudgetWatts": 124, + "poePorts": 8, + "uplinkPorts": "2 SFP", + "source": "MS120 Overview and Specifications" + }, + "MS120-24P": { + "productType": "switch", + "poeBudgetWatts": 370, + "poePorts": 24, + "uplinkPorts": "4 SFP", + "source": "MS120 Series Installation Guide" + }, + "MS120-48LP": { + "productType": "switch", + "poeBudgetWatts": 370, + "poePorts": 48, + "uplinkPorts": "4 SFP", + "source": "MS120 Series Installation Guide" + }, + "MS120-48FP": { + "productType": "switch", + "poeBudgetWatts": 740, + "poePorts": 48, + "uplinkPorts": "4 SFP", + "source": "MS120 Series Installation Guide" + }, + "MS225-24P": { + "productType": "switch", + "poeBudgetWatts": 370, + 
"poePorts": 24, + "uplinkPorts": "4 SFP+", + "source": "MS225 Overview and Specifications" + }, + "MS225-48LP": { + "productType": "switch", + "poeBudgetWatts": 370, + "poePorts": 48, + "uplinkPorts": "4 SFP+", + "source": "MS225 Series Installation Guide" + }, + "MS225-48FP": { + "productType": "switch", + "poeBudgetWatts": 740, + "poePorts": 48, + "uplinkPorts": "4 SFP+", + "source": "MS225 Overview and Specifications" + }, + "MS130-24P": { + "productType": "switch", + "poeBudgetWatts": 370, + "poePorts": 24, + "uplinkPorts": "4 SFP", + "source": "MS130 Datasheet" + }, + "MS130-24X": { + "productType": "switch", + "poeBudgetWatts": 370, + "poePorts": 24, + "uplinkPorts": "4 SFP+", + "source": "MS130 Datasheet" + }, + "MS130-48P": { + "productType": "switch", + "poeBudgetWatts": 740, + "poePorts": 48, + "uplinkPorts": "4 SFP", + "source": "MS130 Datasheet" + }, + "MS130-48X": { + "productType": "switch", + "poeBudgetWatts": 740, + "poePorts": 48, + "uplinkPorts": "4 SFP+", + "source": "MS130 Datasheet" + }, + "MS210-24P": { + "productType": "switch", + "poeBudgetWatts": 370, + "poePorts": 24, + "uplinkPorts": "4 SFP", + "source": "MS210 Series Installation Guide" + }, + "MS210-48LP": { + "productType": "switch", + "poeBudgetWatts": 370, + "poePorts": 48, + "uplinkPorts": "4 SFP", + "source": "MS210 Series Installation Guide" + }, + "MS210-48FP": { + "productType": "switch", + "poeBudgetWatts": 740, + "poePorts": 48, + "uplinkPorts": "4 SFP", + "source": "MS210 Series Installation Guide" + }, + "C9300-24P": { + "productType": "switch", + "poeBudgetWatts": 445, + "poePorts": 24, + "uplinkPorts": "Modular", + "source": "Catalyst 9300-M Datasheet" + }, + "C9300-24U": { + "productType": "switch", + "poeBudgetWatts": 830, + "poePorts": 24, + "uplinkPorts": "Modular", + "source": "Catalyst 9300-M Datasheet" + }, + "C9300-48P": { + "productType": "switch", + "poeBudgetWatts": 437, + "poePorts": 48, + "uplinkPorts": "Modular", + "source": "Catalyst 9300-M Datasheet" + }, + 
"C9300-48U": { + "productType": "switch", + "poeBudgetWatts": 822, + "poePorts": 48, + "uplinkPorts": "Modular", + "source": "Catalyst 9300-M Datasheet" + }, + "C9300-48UXM": { + "productType": "switch", + "poeBudgetWatts": 490, + "poePorts": 48, + "uplinkPorts": "Modular", + "source": "Catalyst 9300-M Datasheet" + } + } +} diff --git a/reporting/reference/pricing_reference.json b/reporting/reference/pricing_reference.json new file mode 100644 index 0000000..f213af0 --- /dev/null +++ b/reporting/reference/pricing_reference.json @@ -0,0 +1,340 @@ +{ + "meta": { + "name": "Report pricing and replacement reference", + "updated": "2026-05-03", + "currency": "USD", + "notes": [ + "Equivalent mappings are planning references only.", + "UniFi unit pricing is public Ubiquiti US Store pricing observed on the updated date unless a source is noted otherwise.", + "Cisco/Meraki unit pricing is included only when explicitly labeled as a planning reference. NetworkTigers entries are used-market hardware references, not Cisco Meraki list price, new hardware quote, licensing, tax, freight, or support pricing.", + "Provide org-local pricing.json to override any model mapping, unit cost, support cost, or equivalent selection." + ] + }, + "products": { + "NT-C9300-48UXM-E-USED": { + "vendor": "Cisco", + "category": "meraki_used", + "name": "Cisco C9300-48UXM-E Catalyst 9300 48-port mGig UPoE switch", + "sku": "C9300-48UXM-E", + "unit_cost": 1516.99, + "condition": "used", + "source_label": "NetworkTigers (used)", + "description": "Used-market reference for Catalyst 9300 48-port 2.5 GbE / mGig UPoE RJ45 switch. Planning reference only; verify licensing, DNA/Meraki persona support, warranty, optics, and support eligibility before quoting." 
+ }, + "NT-C9200L-48P-4X-E-USED": { + "vendor": "Cisco", + "category": "meraki_used", + "name": "Cisco C9200L-48P-4X-E Catalyst 9200L 48-port PoE+ switch", + "sku": "C9200L-48P-4X-E", + "unit_cost": 2645.99, + "condition": "used", + "source_label": "NetworkTigers (used)", + "description": "Used-market reference for Catalyst 9200L 48-port 1 GbE PoE+ RJ45 switch with 4 10 GbE SFP+ uplinks. Planning reference only; verify licensing, warranty, optics, and support eligibility before quoting." + }, + "NT-MR46-HW-USED": { + "vendor": "Cisco Meraki", + "category": "meraki_used", + "name": "Cisco Meraki MR46-HW Wi-Fi 6 access point with wall mount", + "sku": "MR46-HW", + "unit_cost": 239.99, + "condition": "used", + "source_label": "NetworkTigers (used)", + "description": "Used-market reference for Meraki MR46-HW quad-radio 4x4:4 802.11ax access point with wall mount. Planning reference only; does not include Meraki licensing, warranty, support, tax, or freight." + }, + "U7-Pro": { + "vendor": "Ubiquiti", + "category": "access_point", + "name": "U7 Pro", + "sku": "U7-Pro", + "unit_cost": 189, + "ui_care_5yr_unit_cost": 30, + "description": "Ceiling-mounted WiFi 7 AP with 6 spatial streams and 6 GHz support.", + "source_url": "https://store.ui.com/us/en/category/wifi-flagship/products/u7-pro" + }, + "U7-LR": { + "vendor": "Ubiquiti", + "category": "access_point", + "name": "U7 Long-Range", + "sku": "U7-LR", + "unit_cost": 159, + "ui_care_5yr_unit_cost": 39, + "description": "Compact ceiling-mount WiFi 7 AP with 5 spatial streams and extended signal range.", + "source_url": "https://store.ui.com/us/en/products/u7-lr" + }, + "USW-Pro-24-POE": { + "vendor": "Ubiquiti", + "category": "switch", + "name": "Pro 24 PoE", + "sku": "USW-Pro-24-POE", + "unit_cost": 699, + "ui_care_5yr_unit_cost": 125, + "poe_budget_watts": 400, + "description": "24-port Layer 3 switch capable of high-power PoE++ output.", + "source_url": 
"https://store.ui.com/us/en/category/all-switching/products/usw-pro-24-poe" + }, + "USW-Pro-48-POE": { + "vendor": "Ubiquiti", + "category": "switch", + "name": "Pro 48 PoE", + "sku": "USW-Pro-48-POE", + "unit_cost": 1099, + "ui_care_5yr_unit_cost": 199, + "poe_budget_watts": 600, + "description": "48-port Layer 3 switch capable of high-power PoE++ output.", + "source_url": "https://store.ui.com/us/en/category/all-switching/products/usw-pro-48-poe" + }, + "USW-Pro-XG-24-PoE": { + "vendor": "Ubiquiti", + "category": "switch", + "name": "Pro XG 24 PoE", + "sku": "USW-Pro-XG-24-PoE", + "unit_cost": 1799, + "ui_care_5yr_unit_cost": 359, + "poe_budget_watts": 720, + "description": "24-port Layer 3 Etherlighting PoE+++ switch with 16 10 GbE, 8 2.5 GbE, and 2 25G SFP28 ports.", + "source_url": "https://store.ui.com/us/en/category/all-switching/products/usw-pro-xg-24-poe" + }, + "USW-Pro-XG-48-PoE": { + "vendor": "Ubiquiti", + "category": "switch", + "name": "Pro XG 48 PoE", + "sku": "USW-Pro-XG-48-PoE", + "unit_cost": 2499, + "ui_care_5yr_unit_cost": 499, + "poe_budget_watts": 1080, + "description": "48-port Layer 3 Etherlighting PoE+++ switch with 32 10 GbE, 16 2.5 GbE PoE, and 4 25G SFP28 ports.", + "source_url": "https://store.ui.com/us/en/category/all-switching/products/usw-pro-xg-48-poe" + }, + "USW-Aggregation": { + "vendor": "Ubiquiti", + "category": "aggregation", + "name": "Aggregation", + "sku": "USW-Aggregation", + "unit_cost": 269, + "ui_care_5yr_unit_cost": 59, + "description": "8-port Layer 2 switch made for 10G SFP+ connections.", + "source_url": "https://store.ui.com/us/en/products/usw-aggregation" + }, + "USW-Pro-Aggregation": { + "vendor": "Ubiquiti", + "category": "aggregation", + "name": "Hi-Capacity Aggregation", + "sku": "USW-Pro-Aggregation", + "unit_cost": 899, + "ui_care_5yr_unit_cost": 179, + "description": "32-port Layer 3 switch made for high-capacity 10G SFP+ and 25G SFP28 connections.", + "source_url": 
"https://store.ui.com/us/en/category/all-switching/products/usw-pro-aggregation" + }, + "USW-Pro-XG-Aggregation": { + "vendor": "Ubiquiti", + "category": "aggregation", + "name": "Pro XG Aggregation", + "sku": "USW-Pro-XG-Aggregation", + "unit_cost": 2499, + "ui_care_5yr_unit_cost": 499, + "description": "32-port Layer 3 Etherlighting switch for high-capacity 25G SFP28 connections.", + "source_url": "https://store.ui.com/us/en/products/usw-pro-xg-aggregation" + }, + "UDM-Pro-Max": { + "vendor": "Ubiquiti", + "category": "gateway", + "name": "Dream Machine Pro Max", + "sku": "UDM-Pro-Max", + "unit_cost": 599, + "ui_care_5yr_unit_cost": 119, + "cybersecure_annual_unit_cost": 99, + "description": "10G Cloud Gateway with 200+ UniFi device / 2,000+ client support, 5 Gbps IPS routing, and redundant NVR storage.", + "source_url": "https://store.ui.com/us/en/category/all-cloud-gateways/products/udm-pro-max" + }, + "UDM-Beast": { + "vendor": "Ubiquiti", + "category": "gateway", + "name": "Dream Machine Beast", + "sku": "UDM-Beast", + "unit_cost": 1499, + "ui_care_5yr_unit_cost": 299, + "cybersecure_annual_unit_cost": 99, + "description": "Hyperscale-class Cloud Gateway delivering 25 Gbps IPS/IDS, 7,500+ client capacity, and the full UniFi application platform in one system. USD planning price came from user-provided Ubiquiti Store detail; verify before quoting.", + "source_url": "https://eu.store.ui.com/eu/en/category/cloud-gateways-large-scale/products/udm-beast" + } + }, + "unifi_equivalents": { + "MX68": { + "product_key": "UDM-Pro-Max", + "rationale": "Small MX class; UDM Pro Max provides materially more client and IPS headroom than older branch MX appliances." + }, + "MX75": { + "product_key": "UDM-Pro-Max", + "rationale": "Branch MX replacement class with 5 Gbps IPS routing headroom." + }, + "MX85": { + "product_key": "UDM-Pro-Max", + "rationale": "Branch MX replacement class with 5 Gbps IPS routing headroom." 
+ }, + "MX95": { + "product_key": "UDM-Pro-Max", + "rationale": "Branch/campus edge candidate when advanced Meraki SD-WAN features are not mandatory." + }, + "MX100": { + "product_key": "UDM-Pro-Max", + "rationale": "Older MX100 campus edge candidate; validate WAN throughput, VPN, and security feature parity before selecting." + }, + "MX105": { + "product_key": "UDM-Beast", + "rationale": "Higher-throughput campus edge candidate; validate HA, VPN, and security controls." + }, + "MX250": { + "product_key": "UDM-Beast", + "rationale": "Large MX class; use only as a planning placeholder until gateway throughput and feature requirements are validated." + }, + "MX450": { + "product_key": "UDM-Beast", + "rationale": "Large MX class; use only as a planning placeholder until gateway throughput and feature requirements are validated." + }, + "MS120-24": { + "product_key": "USW-Pro-24-POE", + "rationale": "24-port access switch replacement class with higher PoE budget than many legacy MS120 variants." + }, + "MS120-48": { + "product_key": "USW-Pro-48-POE", + "rationale": "48-port access switch replacement class with 600 W PoE budget." + }, + "MS125-24": { + "product_key": "USW-Pro-24-POE", + "rationale": "24-port access switch replacement class." + }, + "MS125-48": { + "product_key": "USW-Pro-48-POE", + "rationale": "48-port access switch replacement class." + }, + "MS130-24": { + "product_key": "USW-Pro-XG-24-PoE", + "rationale": "2.5/10 GbE access switch candidate for modern AP uplinks and higher PoE headroom." + }, + "MS130-48": { + "product_key": "USW-Pro-XG-48-PoE", + "rationale": "2.5/10 GbE access switch candidate for modern AP uplinks and higher PoE headroom." + }, + "MS210-24": { + "product_key": "USW-Pro-24-POE", + "rationale": "24-port Layer 3 access switch replacement class." + }, + "MS210-48": { + "product_key": "USW-Pro-48-POE", + "rationale": "48-port Layer 3 access switch replacement class." 
+ }, + "MS220-24": { + "product_key": "USW-Pro-24-POE", + "rationale": "24-port access switch replacement class." + }, + "MS220-48": { + "product_key": "USW-Pro-48-POE", + "rationale": "48-port access switch replacement class." + }, + "MS225-24": { + "product_key": "USW-Pro-24-POE", + "rationale": "24-port access switch replacement class." + }, + "MS225-48": { + "product_key": "USW-Pro-48-POE", + "rationale": "48-port access switch replacement class." + }, + "MS250-24": { + "product_key": "USW-Pro-24-POE", + "rationale": "24-port Layer 3 access switch replacement class." + }, + "MS250-48": { + "product_key": "USW-Pro-48-POE", + "rationale": "48-port Layer 3 access switch replacement class." + }, + "MS350-24": { + "product_key": "USW-Pro-XG-24-PoE", + "rationale": "Higher-end 24-port switch candidate; validate access/core role before selecting." + }, + "MS350-48": { + "product_key": "USW-Pro-XG-48-PoE", + "rationale": "Higher-end 48-port switch candidate; validate access/core role before selecting." + }, + "MS390-24": { + "product_key": "USW-Pro-XG-24-PoE", + "rationale": "Higher-end 24-port switch candidate; validate access/core role before selecting." + }, + "MS390-48": { + "product_key": "USW-Pro-XG-48-PoE", + "rationale": "Higher-end 48-port switch candidate; validate access/core role before selecting." + }, + "C9300-24": { + "product_key": "USW-Pro-XG-24-PoE", + "rationale": "Catalyst access switch candidate where multigig and high PoE headroom are desired." + }, + "C9300-48": { + "product_key": "USW-Pro-XG-48-PoE", + "rationale": "Catalyst access switch candidate where multigig and high PoE headroom are desired." + }, + "MR24": { + "product_key": "U7-LR", + "rationale": "Legacy indoor AP refresh candidate; validate density and 6 GHz design during survey." + }, + "MR33": { + "product_key": "U7-LR", + "rationale": "Legacy indoor AP refresh candidate; validate density and 6 GHz design during survey." 
+ }, + "MR34": { + "product_key": "U7-LR", + "rationale": "Legacy indoor AP refresh candidate; validate density and 6 GHz design during survey." + }, + "MR36": { + "product_key": "U7-LR", + "rationale": "Indoor AP refresh candidate; validate density and 6 GHz design during survey." + }, + "MR42": { + "product_key": "U7-LR", + "rationale": "Legacy indoor AP refresh candidate; validate density and 6 GHz design during survey." + }, + "MR44": { + "product_key": "U7-Pro", + "rationale": "Modern indoor AP replacement class with Wi-Fi 7 and 6 GHz support." + }, + "MR46": { + "product_key": "U7-Pro", + "rationale": "Modern indoor AP replacement class with Wi-Fi 7 and 6 GHz support." + }, + "MR52": { + "product_key": "U7-Pro", + "rationale": "High-performance indoor AP replacement class with Wi-Fi 7 and 6 GHz support." + }, + "MR53": { + "product_key": "U7-Pro", + "rationale": "High-performance indoor AP replacement class with Wi-Fi 7 and 6 GHz support." + }, + "MR56": { + "product_key": "U7-Pro", + "rationale": "High-performance indoor AP replacement class with Wi-Fi 7 and 6 GHz support." + }, + "CW9176I": { + "product_key": "U7-Pro", + "rationale": "Modern indoor AP comparison point; verify whether dual-persona Cisco Wi-Fi 7 hardware should remain Meraki-managed." + }, + "CW9178I": { + "product_key": "U7-Pro", + "rationale": "Modern indoor AP comparison point; verify whether dual-persona Cisco Wi-Fi 7 hardware should remain Meraki-managed." + } + }, + "models": { + "C9300-48UXM-E": { + "meraki_unit_cost": 1516.99, + "meraki_unit_source": "NetworkTigers (used)", + "pricing_note": "Used-market hardware reference only; excludes licensing, warranty, support, tax, freight, optics, and implementation." + }, + "C9200L-48P-4X-E": { + "meraki_unit_cost": 2645.99, + "meraki_unit_source": "NetworkTigers (used)", + "pricing_note": "Used-market hardware reference only; excludes licensing, warranty, support, tax, freight, optics, and implementation." 
+    },
+    "MR46": {
+      "meraki_unit_cost": 239.99,
+      "meraki_unit_source": "NetworkTigers (used)",
+      "pricing_note": "Used-market MR46-HW reference only; excludes Meraki licensing, warranty, support, tax, freight, and implementation."
+    }
+  }
+}
diff --git a/reporting/sections.py b/reporting/sections.py
index abcbbf1..19adef0 100644
--- a/reporting/sections.py
+++ b/reporting/sections.py
@@ -25,6 +25,71 @@
 )
 from .topology import _build_topology_facts
+
+def _is_low_speed_link(speed: Any) -> bool:
+    text = str(speed or "").strip().lower()
+    return text.startswith("10 mb") or text.startswith("100 mb")
+
+
+def _meaningful_port_messages(messages: Any) -> List[str]:
+    if isinstance(messages, str):
+        messages = [messages]
+    if not isinstance(messages, list):
+        return []
+    # Substring matching stays for the multi-word benign phrases, but a bare
+    # "down" is matched exactly: a broad "down" fragment would also hide
+    # meaningful messages such as STP/BPDU-guard shutdown notices.
+    benign_fragments = (
+        "disconnected",
+        "not connected",
+        "no link",
+        "link down",
+    )
+    result = []
+    for message in messages:
+        text = str(message or "").strip()
+        if not text:
+            continue
+        lowered = text.lower()
+        if lowered == "down" or any(fragment in lowered for fragment in benign_fragments):
+            continue
+        result.append(text)
+    return result
+
+
+def _model_cell(model: Any) -> str:
+    text = str(model or "").strip()
+    return f"{_he(text)}" if text else "Unknown model"
+
+
+def _compact_text(value: Any, max_len: int = 18) -> str:
+    text = str(value or "").strip()
+    if not text:
+        return ""
+    if len(text) <= max_len:
+        return text
+    return text[: max(1, max_len - 1)].rstrip() + "…"
+
+
+def _compact_vlan_text(value: Any) -> str:
+    text = str(value or "—").strip()
+    replacements = {
+        "Trunk": "T",
+        "Access": "A",
+        "native": "n",
+        "allowed": "allow",
+        "VLAN": "V",
+    }
+    for old, new in replacements.items():
+        text = text.replace(old, new)
+    return _compact_text(text, 24)
+
+
+def _compact_neighbor_text(port: Dict[str, Any], serial_to_dev: Dict[str, Dict[str, Any]]) -> str:
+    text = _describe_port_neighbor(port, serial_to_dev)
+    text = text.replace("downstream client(s)", "clients")
+    text = text.replace("No 
neighbor data", "—") + return _compact_text(text, 26) or "—" + + def _render_switch_port_grid( ports: List[Dict[str, Any]], port_configs: Optional[Dict[str, Dict[str, Any]]] = None, @@ -40,9 +105,7 @@ def _render_switch_port_grid( port_id = str(port.get("portId") or "?") status = str(port.get("status") or "").lower() speed = str(port.get("speed") or "") - errors = port.get("errors") or [] - if isinstance(errors, str): - errors = [errors] + errors = _meaningful_port_messages(port.get("errors") or []) role = _port_role_label(port, port_configs.get(port_id), serial_to_dev) if errors: cls = "issue" @@ -50,7 +113,7 @@ def _render_switch_port_grid( cls = "uplink" elif "disconnected" in status or "not connected" in status or not status: cls = "down" - elif speed.startswith("100 ") or speed.startswith("10 "): + elif _is_low_speed_link(speed): cls = "warn" elif (port.get("poe") or {}).get("isAllocated"): cls = "poe" @@ -109,7 +172,9 @@ def _build_switch_detail_section( switch_port_configs_by_switch: Dict[str, Any], poe_by_serial: Dict[str, Dict[str, Any]], port_issues_by_switch: Dict[str, List[Dict[str, Any]]], + hardware_catalog: Optional[Dict[str, Any]] = None, ) -> Tuple[str, List[Tuple[str, str]]]: + catalog_models = (hardware_catalog or {}).get("models") or {} switch_entries: List[Tuple[str, str, str, str]] = [] for net_data in sorted(devices_by_network.values(), key=lambda item: item["name"]): for dev in sorted( @@ -146,12 +211,66 @@ def _build_switch_detail_section( serial_to_dev, status_by_switch, parent_of, children_of, edge_counts = _build_topology_facts( all_devices, lldp_cdp, switch_port_statuses_by_switch ) + switches_with_port_status = sum( + 1 for _, serial, _, _ in switch_entries + if status_by_switch.get(serial) + ) + switches_with_lldp = sum( + 1 for _, serial, _, _ in switch_entries + if isinstance(lldp_cdp, dict) and lldp_cdp.get(serial) + ) + identity_rows = [] + for site_name, serial, switch_name, model in switch_entries: + switch = 
serial_to_dev.get(serial, {}) + ports = status_by_switch.get(serial, {}) + port_count = len(ports) + connected_ports = sum( + 1 for port in ports.values() + if str(port.get("status") or "").lower() == "connected" + ) + poe_data = poe_by_serial.get(serial, {}) + observed_watts = float(poe_data.get("avgWatts", 0) or 0) + reference = catalog_models.get(model) or {} + budget = reference.get("poeBudgetWatts") + headroom = "Unknown" + if isinstance(budget, (int, float)): + headroom = f"{max(0.0, float(budget) - observed_watts):.1f} W" + identity_rows.append( + "" + f"{_he(site_name)}" + f"{_he(switch_name)}
    {_he(serial)}" + f"{_he(model or '—')}" + f"{_he(str(switch.get('status') or 'unknown'))}" + f"{connected_ports} / {port_count if port_count else '—'}" + f"{_he(f'{budget} W' if isinstance(budget, (int, float)) else 'Unknown')}" + f"{observed_watts:.1f} W" + f"{_he(headroom)}" + f"{_he(str(reference.get('source') or 'Not in local catalog'))}" + "" + ) section_parts = [ - """ + f"""

    16. Switch Deep Dive

    Port-level views for each MS switch, including link status, negotiated speed, traffic, PoE draw, inferred connected device, and upstream/downstream placement in the switching tree.

    +
    +
    Source Data Coverage
    +
    + Switches discovered: {len(switch_entries)} · + Port telemetry available: {switches_with_port_status} · + LLDP/CDP neighbor data available: {switches_with_lldp}. + If this section appears sparse, regenerate backups with a full API collection and confirm the + Dashboard API key can read switch port statuses, switch port configs, and LLDP/CDP data. +
    +
    +

    Switch Identity & PoE Budget Reference

    + + + + + {''.join(identity_rows)} +
    SiteSwitchModelStatusPorts UpKnown PoE BudgetObserved PoE AvgBudget HeadroomReference
    """ ] @@ -183,6 +302,12 @@ def _build_switch_detail_section( issue_count = len(port_issues_by_switch.get(serial, [])) poe_data = poe_by_serial.get(serial, {}) poe_watts = float(poe_data.get("avgWatts", 0) or 0) + hardware_reference = catalog_models.get(model) or {} + poe_budget = hardware_reference.get("poeBudgetWatts") + poe_budget_text = f"{poe_budget} W" if isinstance(poe_budget, (int, float)) else "Unknown" + poe_headroom_text = "Unknown" + if isinstance(poe_budget, (int, float)): + poe_headroom_text = f"{max(0.0, float(poe_budget) - poe_watts):.1f} W" active_ports = sum(1 for port in ports if str(port.get("status") or "").lower() == "connected") uplink_ports = [port for port in ports if port.get("isUplink")] ranked_ports = sorted( @@ -208,26 +333,22 @@ def _build_switch_detail_section( port_config = port_configs.get(port_id) usage = port.get("usageInKb") or {} traffic = port.get("trafficInKbps") or {} - errors = port.get("errors") or [] - if isinstance(errors, str): - errors = [errors] - warnings = port.get("warnings") or [] - if isinstance(warnings, str): - warnings = [warnings] + errors = _meaningful_port_messages(port.get("errors") or []) + warnings = _meaningful_port_messages(port.get("warnings") or []) poe = port.get("poe") or {} power_wh = port.get("powerUsageInWh") indicators = [] if port.get("isUplink"): - indicators.append('Uplink') + indicators.append('U') if poe.get("isAllocated") or (isinstance(power_wh, (int, float)) and power_wh > 0): - indicators.append('PoE') + indicators.append('P') if errors: - indicators.append(f'{len(errors)} error(s)') + indicators.append(f'E{len(errors)}') elif warnings: - indicators.append(f'{len(warnings)} warning(s)') + indicators.append(f'W{len(warnings)}') speed = str(port.get("speed") or "—") - if speed.startswith("100 ") or speed.startswith("10 "): - indicators.append(f'{_he(speed)}') + if _is_low_speed_link(speed): + indicators.append(f'{_he(_speed_label(speed))}') role = _port_role_label(port, port_config, 
serial_to_dev) vlan_text = _describe_vlan_mode(port_config) port_name = "—" @@ -244,18 +365,18 @@ def _build_switch_detail_section( table_rows.append( "" f"{_he(port_id or '—')}" - f"{_he(port_name)}" - f"{_he(heat_label)} {heat_score:.0f}" - f"{_he(role)}" - f"{_he(str(port.get('status') or 'Unknown'))}" - f"{_he(speed)}" - f"{_he(str(port.get('duplex') or '—'))}" - f"{_he(vlan_text)}" + f"{_he(_compact_text(port_name, 16) or '—')}" + f"{_he(heat_label[:1])}{heat_score:.0f}" + f"{_he(_port_role_short(role))}" + f"{_he(_compact_text(str(port.get('status') or 'Unknown'), 9))}" + f"{_he(_speed_label(speed))}" + f"{_he(_compact_text(str(port.get('duplex') or '—'), 4))}" + f"{_he(_compact_vlan_text(vlan_text))}" f"{_format_usage_kb((usage or {}).get('total'))}" f"{_he(str((traffic or {}).get('total') or '—'))} Kbps" f"{_he(f'{float(power_wh):.1f} Wh' if isinstance(power_wh, (int, float)) else ('Allocated' if poe.get('isAllocated') else '—'))}" f"{''.join(indicators) or '—'}" - f"{_inline_md(_describe_port_neighbor(port, serial_to_dev))}" + f"{_inline_md(_compact_neighbor_text(port, serial_to_dev))}" "" ) @@ -271,6 +392,8 @@ def _build_switch_detail_section(
    Ports Up{active_ports} / {len(ports) or 0}
    Uplinks{_he(', '.join(str(port.get('portId')) for port in uplink_ports) if uplink_ports else 'None flagged')}
    PoE Avg{poe_watts:.1f} W
    +
    PoE Budget{_he(poe_budget_text)}
    +
    PoE Headroom{_he(poe_headroom_text)}
    Port Issues{issue_count}
    @@ -291,10 +414,15 @@ def _build_switch_detail_section(
    + + + + + - - + + {''.join(table_rows) if table_rows else ''} @@ -757,7 +885,11 @@ def _build_config_coverage_section( ("Wireless Settings", "wireless_settings.json"), ("Wireless SSIDs", "wireless_ssids.json"), ("Wireless RF Profiles", "wireless_rf_profiles.json"), + ("Network Clients", "network_clients.json"), ("Appliance Uplink Usage", "appliance_uplinks_usage.json"), + ("Appliance VLANs", "appliance_vlans.json"), + ("Appliance DHCP Subnets", "appliance_dhcp_subnets.json"), + ("Appliance Policy Backup", "appliance_policy_backup.json"), ("Security Baseline Summary", "security_baseline.json"), ("Licensing", "licensing.json"), ("Firmware Upgrades", "firmware_upgrades.json"), @@ -776,6 +908,21 @@ def _build_config_coverage_section( base = os.path.join(org_dir, "networks", net_id) def _has(name: str) -> str: return "Present" if os.path.exists(os.path.join(base, name)) else "Missing" + is_appliance_network = "appliance" in (net.get("productTypes") or []) + if not is_appliance_network: + network_rows.append( + [ + net_name, + "N/A", + "N/A", + "N/A", + "N/A", + "N/A", + "N/A", + _has("network_clients.json"), + ] + ) + continue network_rows.append( [ net_name, @@ -783,6 +930,9 @@ def _has(name: str) -> str: _has("appliance_port_forwarding_rules.json"), _has("appliance_intrusion.json"), _has("appliance_malware.json"), + _has("appliance_vlans.json"), + _has("appliance_policy_backup.json"), + _has("network_clients.json"), ] ) @@ -791,11 +941,361 @@ def _has(name: str) -> str:

    11. Configuration Backup Coverage

This section documents which configuration artifacts are present in the current backup set. Missing items indicate API collection gaps or inaccessible product scopes that should be resolved before final audit sign-off.

    {render_section("Org-Wide Configuration Artifacts", [["Artifact", "Status"]] + org_rows if org_rows else [])}
-    {render_section("Per-Network Appliance Configuration", [["Network", "Firewall Settings", "Port Forwarding", "IDS/IPS", "AMP/Malware"]] + network_rows if network_rows else [])}
+    {render_section("Per-Network Appliance Configuration", [["Network", "Firewall Settings", "Port Forwarding", "IDS/IPS", "AMP/Malware", "VLANs/DHCP", "Policy Backup", "Client Detail"]] + network_rows if network_rows else [])}
    """
+def _policy_rules(payload: Any) -> List[Dict[str, Any]]:
+    if isinstance(payload, dict) and isinstance(payload.get("rules"), list):
+        return [rule for rule in payload.get("rules", []) if isinstance(rule, dict)]
+    if isinstance(payload, list):
+        return [rule for rule in payload if isinstance(rule, dict)]
+    return []
+
+
+def _policy_error(payload: Any) -> str:
+    if isinstance(payload, dict) and payload.get("error"):
+        return str(payload.get("error"))
+    return ""
+
+
+def _content_filter_summary(payload: Any) -> Tuple[int, int, int, str]:
+    if not isinstance(payload, dict) or payload.get("error"):
+        return (0, 0, 0, "")
+    # Normalize to lists up front so the slice and len() calls below are safe
+    # for any payload shape.
+    blocked = payload.get("blockedUrlCategories")
+    blocked = blocked if isinstance(blocked, list) else []
+    allowed = payload.get("allowedUrlPatterns")
+    allowed = allowed if isinstance(allowed, list) else []
+    blocked_patterns = payload.get("blockedUrlPatterns")
+    blocked_patterns = blocked_patterns if isinstance(blocked_patterns, list) else []
+    # Categories may be {"id", "name"} dicts or bare ID strings; calling
+    # .get("name") directly on a string entry would raise AttributeError.
+    url_categories = ", ".join(
+        str(item.get("name") or item.get("id") or "?") if isinstance(item, dict) else str(item)
+        for item in blocked[:8]
+    )
+    if len(blocked) > 8:
+        url_categories += f" +{len(blocked) - 8} more"
+    return (len(blocked), len(allowed), len(blocked_patterns), url_categories)
+
+
+def _build_appliance_policy_section(
+    networks: List[Dict[str, Any]],
+    appliance_policy_backup: Dict[str, Any],
+) -> str:
+    network_names = {
+        n.get("id"): n.get("name") or n.get("id")
+        for n in networks
+        if isinstance(n, dict) and n.get("id")
+    }
+    if not appliance_policy_backup:
+        return 
""" +

    MX Firewall, Filtering & Policy Backup

    +
    +
+ No MX firewall/content-filtering policy export was found in this backup set. Re-run collection + with appliance_policy_backup.json enabled so the report can render L3/L7 firewall rules, + inbound rules, NAT, content filtering, traffic shaping, VPN, group policies, and syslog settings.
    +
    + """ + + summary_rows = [] + rule_rows = [] + error_rows = [] + l3_total = l7_total = inbound_total = nat_total = forwarding_total = 0 + content_category_total = 0 + content_allow_total = 0 + content_block_total = 0 + rule_limit = 80 + displayed = 0 + + def _add_rule_row(net_name: str, family: str, rule: Dict[str, Any]) -> None: + nonlocal displayed + if displayed >= rule_limit: + return + displayed += 1 + source = rule.get("srcCidr") or rule.get("srcPort") or rule.get("allowedIps") or "Any" + destination = ( + rule.get("destCidr") + or rule.get("destPort") + or rule.get("lanIp") + or rule.get("value") + or rule.get("publicIp") + or "Any" + ) + ports = rule.get("destPort") or rule.get("publicPort") or rule.get("localPort") or rule.get("port") or "Any" + rule_rows.append( + "" + f"" + f"" + f"" + f"" + f"" + f"" + f"" + "" + ) + + for net_id, payload in sorted(appliance_policy_backup.items(), key=lambda item: network_names.get(item[0], item[0])): + net_name = network_names.get(net_id, net_id) + if isinstance(payload, dict) and payload.get("error"): + error_rows.append( + f"" + ) + continue + if not isinstance(payload, dict): + continue + + l3_rules = _policy_rules(payload.get("l3FirewallRules")) + l7_rules = _policy_rules(payload.get("l7FirewallRules")) + inbound_rules = _policy_rules(payload.get("inboundFirewallRules")) + port_forwarding = _policy_rules(payload.get("portForwardingRules")) + nat_1_1 = _policy_rules(payload.get("oneToOneNatRules")) + nat_1_many = _policy_rules(payload.get("oneToManyNatRules")) + group_policies = payload.get("groupPolicies") if isinstance(payload.get("groupPolicies"), list) else [] + syslog_servers = payload.get("syslogServers", {}) + syslog_count = len(syslog_servers.get("servers") or []) if isinstance(syslog_servers, dict) else 0 + vpn = payload.get("siteToSiteVpn") if isinstance(payload.get("siteToSiteVpn"), dict) else {} + vpn_mode = vpn.get("mode") or "not captured" + cats, allows, blocks, cat_names = 
_content_filter_summary(payload.get("contentFiltering")) + + l3_total += len(l3_rules) + l7_total += len(l7_rules) + inbound_total += len(inbound_rules) + forwarding_total += len(port_forwarding) + nat_total += len(nat_1_1) + len(nat_1_many) + content_category_total += cats + content_allow_total += allows + content_block_total += blocks + + summary_rows.append( + "" + f"" + f"" + f"" + f"" + f"" + f"" + f"" + f"" + f"" + f"" + "" + ) + + for family, rules in ( + ("L3", l3_rules), + ("L7", l7_rules), + ("Inbound", inbound_rules), + ("Port Forward", port_forwarding), + ("1:1 NAT", nat_1_1), + ("1:Many NAT", nat_1_many), + ): + for rule in rules: + _add_rule_row(net_name, family, rule) + if cat_names: + rule_rows.append( + "" + f"" + f"" + "" + ) + + for key, item in payload.items(): + err = _policy_error(item) + if err: + error_rows.append( + "" + f"" + f"" + "" + ) + + omitted_note = "" + if displayed >= rule_limit: + omitted_note = ( + f"

    Rule table capped at {rule_limit} rows for report readability. " + "The JSON backup contains the full policy export.

    " + ) + + return f""" +

    MX Firewall, Filtering & Policy Backup

    +
    +
    Policy Collection Summary
    +
    + L3 rules: {l3_total}. + L7 rules: {l7_total}. + Inbound rules: {inbound_total}. + Port forwards: {forwarding_total}. + NAT mappings: {nat_total}. + Content filter customizations: {content_category_total} blocked categories, + {content_allow_total} allowed URL patterns, + {content_block_total} blocked URL patterns. +
    +
    +

    Policy Backup by Network

    +
    PortPort LabelHeatRoleStatusSpeedDuplexVLAN / ModeTotal DataCurrent ThroughputPowerIndicatorsConnected DevicePortLabelHeatRoleStatSpdDupVLANDataKbpsPwrFlgNeighbor
    No switch port status data available.
    {_he(net_name)}{_he(family)}{_he(rule.get('policy') or rule.get('protocol') or rule.get('type') or 'Rule')}{_he(source)}{_he(destination)}{_he(ports)}{_he(rule.get('comment') or rule.get('name') or '—')}
    {_he(net_name)}{_he(str(payload.get('error'))[:180])}
    {_he(net_name)}{len(l3_rules)}{len(l7_rules)}{len(inbound_rules)}{len(port_forwarding)}{len(nat_1_1) + len(nat_1_many)}{cats} cat / {allows} allow / {blocks} block{len(group_policies)}{_he(str(vpn_mode))}{syslog_count}
    {_he(net_name)}Content FilterBlocked Categories{_he(cat_names)}
    {_he(net_name)}{_he(key)}{_he(err[:180])}
    + + + + {''.join(summary_rows) if summary_rows else ''} +
    NetworkL3L7InboundFwdNATContent FilteringGroupsVPNSyslog
    No MX policy records were present.
    +

    Printable Firewall & NAT Rule Snapshot

    + + + + + {''.join(rule_rows + error_rows) if (rule_rows or error_rows) else ''} +
    NetworkPolicyAction / TypeSourceDestinationPortsComment / Name
    No firewall, NAT, or content-filtering rows were present.
+    {omitted_note}
+    """
+
+
+def _build_addressing_dhcp_section(
+    networks: List[Dict[str, Any]],
+    appliance_vlans_by_network: Dict[str, Any],
+    appliance_dhcp_subnets_by_serial: Dict[str, Any],
+    client_records: List[Dict[str, Any]],
+    devices: List[Dict[str, Any]],
+) -> str:
+    network_names = {
+        n.get("id"): n.get("name") or n.get("id")
+        for n in networks
+        if isinstance(n, dict) and n.get("id")
+    }
+    appliance_names = {
+        d.get("serial"): d.get("name") or d.get("model") or d.get("serial")
+        for d in devices
+        if isinstance(d, dict) and d.get("serial") and d.get("productType") == "appliance"
+    }
+
+    client_counts: Dict[Tuple[str, str], int] = {}
+    for client in client_records:
+        if not isinstance(client, dict):
+            continue
+        net_id = client.get("networkId") or (client.get("network") or {}).get("id")
+        vlan = str(client.get("vlan") or client.get("vlanId") or client.get("namedVlan") or "—")
+        if net_id:
+            client_counts[(str(net_id), vlan)] = client_counts.get((str(net_id), vlan), 0) + 1
+
+    vlan_rows = []
+    vlan_total = 0
+    dhcp_enabled = 0
+    relay_count = 0
+    for net_id, vlans in appliance_vlans_by_network.items() if isinstance(appliance_vlans_by_network, dict) else []:
+        if isinstance(vlans, dict) and vlans.get("error"):
+            vlan_rows.append(
+                "<tr>"
+                f"<td>{_he(network_names.get(net_id, net_id))}</td>"
+                "<td colspan='8'>"
+                f"{_he(str(vlans.get('error'))[:180])}"
+                "</td></tr>"
+            )
+            continue
+        if not isinstance(vlans, list):
+            continue
+        for vlan in sorted(vlans, key=lambda item: str(item.get("id") or item.get("name") or "")):
+            if not isinstance(vlan, dict):
+                continue
+            vlan_total += 1
+            handling = str(vlan.get("dhcpHandling") or "Unknown")
+            if "run a dhcp server" in handling.lower():
+                dhcp_enabled += 1
+            if "relay" in handling.lower():
+                relay_count += 1
+            vlan_id = str(vlan.get("id") or "—")
+            clients = client_counts.get((str(net_id), vlan_id), 0)
+            vlan_rows.append(
+                "<tr>"
+                f"<td>{_he(network_names.get(net_id, net_id))}</td>"
+                f"<td>{_he(vlan_id)}</td>"
+                f"<td>{_he(vlan.get('name') or '—')}</td>"
+                f"<td>{_he(vlan.get('subnet') or vlan.get('cidr') or '—')}</td>"
+                f"<td>{_he(vlan.get('applianceIp') or '—')}</td>"
+                f"<td>{_he(handling)}</td>"
+                f"<td>{_he(vlan.get('dhcpLeaseTime') or '—')}</td>"
+                f"<td>{_he(', '.join(str(ip) for ip in vlan.get('dhcpRelayServerIps') or []) or '—')}</td>"
+                f"<td>{clients}</td>"
+                "</tr>"
+            )
+
+    dhcp_rows = []
+    constrained = 0
+    for serial, subnets in appliance_dhcp_subnets_by_serial.items() if isinstance(appliance_dhcp_subnets_by_serial, dict) else []:
+        if isinstance(subnets, dict) and subnets.get("error"):
+            dhcp_rows.append(
+                "<tr>"
+                f"<td>{_he(appliance_names.get(serial, serial))}<br>{_he(serial)}</td>"
+                "<td colspan='5'>"
+                f"{_he(str(subnets.get('error'))[:180])}"
+                "</td></tr>"
+            )
+            continue
+        if not isinstance(subnets, list):
+            continue
+        for subnet in sorted(subnets, key=lambda item: str(item.get("subnet") or "")):
+            if not isinstance(subnet, dict):
+                continue
+            used = int(subnet.get("usedCount") or 0)
+            free = int(subnet.get("freeCount") or 0)
+            total = used + free
+            pct = (used / total * 100) if total else 0.0
+            if total and pct >= 80:
+                constrained += 1
+            cls = "badge-fail" if pct >= 90 else "badge-warn" if pct >= 80 else "badge-ok"
+            dhcp_rows.append(
+                "<tr>"
+                f"<td>{_he(appliance_names.get(serial, serial))}<br>{_he(serial)}</td>"
+                f"<td>{_he(str(subnet.get('vlanId') or '—'))}</td>"
+                f"<td>{_he(subnet.get('subnet') or '—')}</td>"
+                f"<td>{used}</td>"
+                f"<td>{free}</td>"
+                f"<td><span class='{cls}'>{pct:.1f}% used</span></td>"
+                "</tr>"
+            )
+
+    if not vlan_rows and not dhcp_rows:
+        return """
+    <h2>Addressing & DHCP Scope Audit</h2>
+    <p>
+      No MX VLAN or DHCP subnet telemetry was present in this backup. Re-run collection with
+      appliance_vlans.json and appliance_dhcp_subnets.json enabled
+      to populate subnet, gateway, DHCP handling, relay, lease-time, and pool-utilization data.
+    </p>
+    """
+
+    return f"""
+    <h2>Addressing & DHCP Scope Audit</h2>
+
+    <h3>Addressing Collection Summary</h3>
+    <p>
+      MX VLAN definitions observed: {vlan_total}.
+      DHCP server VLANs: {dhcp_enabled}.
+      DHCP relay VLANs: {relay_count}.
+      DHCP scopes at or above 80% utilization: {constrained}.
+    </p>
+
+    <h3>MX VLAN Interfaces</h3>
+    <table class="data">
+      <thead>
+        <tr><th>Network</th><th>VLAN</th><th>Name</th><th>Subnet</th><th>Gateway</th><th>DHCP Mode</th><th>Lease</th><th>Relay Servers</th><th>Clients Seen</th></tr>
+      </thead>
+      <tbody>
+        {''.join(vlan_rows) if vlan_rows else '<tr><td colspan="9">No MX VLAN interface definitions were present.</td></tr>'}
+      </tbody>
+    </table>
+
+    <h3>DHCP Pool Utilization</h3>
+    <table class="data">
+      <thead>
+        <tr><th>Appliance</th><th>VLAN</th><th>Subnet</th><th>Used</th><th>Free</th><th>Utilization</th></tr>
+      </thead>
+      <tbody>
+        {''.join(dhcp_rows) if dhcp_rows else '<tr><td colspan="6">No DHCP pool utilization records were present.</td></tr>'}
+      </tbody>
+    </table>
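The DHCP pool severity thresholds used above (warn at 80% used, fail at 90%) can be sanity-checked in isolation; a minimal sketch, where the helper name `dhcp_pool_badge` is illustrative and not part of this patch:

```python
def dhcp_pool_badge(used, free):
    """Mirror the report's DHCP-scope severity: >=90% used fails, >=80% warns.

    Returns (percent_used, badge_class); an empty pool counts as 0% used.
    """
    total = used + free
    pct = (used / total * 100) if total else 0.0
    cls = "badge-fail" if pct >= 90 else "badge-warn" if pct >= 80 else "badge-ok"
    return pct, cls
```

Keeping the thresholds in one place like this also makes the "scopes at or above 80% utilization" executive count trivially consistent with the per-row badges.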
    + """ + + def _build_budget_forecast_section( inventory_summary: Dict[str, Any], pricing_payload: Dict[str, Any], diff --git a/reporting/topology.py b/reporting/topology.py index e9c412b..2e4f252 100644 --- a/reporting/topology.py +++ b/reporting/topology.py @@ -281,25 +281,38 @@ def _assign(serial: str, val: int) -> None: sw_serials = {s for s, d in s2d.items() if d.get("productType") == "switch"} tier2 = [s for s in sw_serials if parent_of.get(s) not in sw_serials] - pages: List[Dict[str, str]] = [] - - # Overview: MX + all tier-2 switches only - overview_devs = [d for d in devices - if d.get("productType") == "appliance" - or d["serial"] in tier2] - pages.append({"title": "Overview — Core / Distribution Layer", - "svg": _topo_svg( - overview_devs, lldp_cdp, ap_util, port_issues, - switch_port_statuses_by_switch, enrichment=enrichment, - )}) - - # Per-branch detail: tier-2 switch + its full subtree + parent MX stub type_order = {"appliance": 0, "switch": 1, "wireless": 2, "camera": 3, "sensor": 4} tier2_sorted = sorted(tier2, key=lambda s: ( type_order.get(s2d[s].get("productType", ""), 9), s2d[s].get("name") or s, )) + pages: List[Dict[str, str]] = [] + + # Overview: MX + readable chunks of tier-2 switches only. WPC-style + # campuses can have enough distribution switches that a single row becomes + # illegible after PDF scaling. 
+ overview_chunk_size = 6 + overview_chunks = [ + tier2_sorted[i:i + overview_chunk_size] + for i in range(0, len(tier2_sorted), overview_chunk_size) + ] or [[]] + for idx, chunk in enumerate(overview_chunks, start=1): + overview_devs = [ + d for d in devices + if d.get("productType") == "appliance" or d.get("serial") in chunk + ] + suffix = f" ({idx}/{len(overview_chunks)})" if len(overview_chunks) > 1 else "" + pages.append({ + "title": f"Overview — Core / Distribution Layer{suffix}", + "svg": _topo_svg( + overview_devs, lldp_cdp, ap_util, port_issues, + switch_port_statuses_by_switch, enrichment=enrichment, + infer_root_parent=True, + ), + }) + + # Per-branch detail: tier-2 switch + its full subtree. for t2 in tier2_sorted: subtree: set = set() q: deque = deque([t2]) @@ -333,6 +346,7 @@ def _topo_svg( switch_port_statuses_by_switch: Dict[str, Any], enrichment: Optional[Dict] = None, show_internet: bool = True, + infer_root_parent: bool = False, ) -> str: """Return an inline SVG topology using parent/child relationships from ports.""" if not devices: @@ -483,6 +497,26 @@ def _upstream_rank(serial: str) -> int: parent_link_of[child] = link children.setdefault(parent, []).append(child) + appliances = [ + serial for serial, dev in serial_to_dev.items() + if dev.get("productType") == "appliance" + ] + if infer_root_parent and len(appliances) == 1: + appliance = appliances[0] + for serial, dev in serial_to_dev.items(): + if serial == appliance or dev.get("productType") != "switch" or serial in parent_of: + continue + parent_of[serial] = appliance + parent_link_of[serial] = { + "local": serial, + "remote": appliance, + "local_port": "", + "remote_port": "", + "local_speed": "", + "confirmed": False, + } + children.setdefault(appliance, []).append(serial) + roots = [ serial for serial, dev in serial_to_dev.items() if dev.get("productType") == "appliance" and serial not in parent_of @@ -524,11 +558,23 @@ def _assign_depth(serial: str, value: int) -> None: if serial not 
in display_serials: continue by_depth.setdefault(value, []).append(serial) - for serials in by_depth.values(): - serials.sort(key=lambda serial: ( - type_order.get(serial_to_dev[serial].get("productType", ""), 9), - serial_to_dev[serial].get("name") or serial, - )) + previous_positions: Dict[str, int] = {} + for value in sorted(by_depth): + serials = by_depth[value] + if previous_positions: + serials.sort( + key=lambda serial: ( + previous_positions.get(parent_of.get(serial, ""), 9999), + type_order.get(serial_to_dev[serial].get("productType", ""), 9), + serial_to_dev[serial].get("name") or serial, + ) + ) + else: + serials.sort(key=lambda serial: ( + type_order.get(serial_to_dev[serial].get("productType", ""), 9), + serial_to_dev[serial].get("name") or serial, + )) + previous_positions = {serial: idx for idx, serial in enumerate(serials)} layers: List[List[str]] = ([["__internet__"]] if show_internet else []) for value in sorted(by_depth): @@ -592,9 +638,11 @@ def _ntype(serial: str) -> str: speed = _svg_esc(link.get("local_speed") or "") local_port = _svg_esc(link.get("local_port") or "") remote_port = _svg_esc(link.get("remote_port") or "") + dash = "" if link.get("confirmed", True) else ' stroke-dasharray="5 4"' + opacity = "0.95" if link.get("confirmed", True) else "0.65" parts.append( f'' + f'stroke="#8a9269" stroke-width="1.8" opacity="{opacity}"{dash}/>' ) if speed or local_port or remote_port: mid_y = (p_bot + cy) / 2 @@ -882,4 +930,3 @@ def _norm(value: Any) -> str: edge_counts[local_serial] = edge_counts.get(local_serial, 0) + 1 return serial_to_dev, status_by_switch, parent_of, children_of, edge_counts - diff --git a/run.sh b/run.sh index 18fb84c..974d998 100755 --- a/run.sh +++ b/run.sh @@ -12,6 +12,10 @@ usage() { echo " --report-only Skip all data collection; build reports from existing backups/" echo " --no-query Skip API query + backup stages; use data already in backups/" echo " --demo-report Build a report from sanitized test fixtures" + echo " 
--reports-dir " + echo " Write generated reports outside backups/ (default: reports)" + echo " --keep-html Keep generated HTML alongside PDFs in reports/" + echo " --fixed-now Use a fixed ISO timestamp for deterministic reports" echo " --no-ai-review Skip the Ollama review stage" echo " --health-check Validate local environment and exit" echo " --no-open Do not open generated reports after a successful run" @@ -22,7 +26,9 @@ usage() { echo " ./run.sh --model gemma4:e2b" echo " ./run.sh --no-query # re-generate reports from last backup" echo " ./run.sh --report-only --no-ai-review" + echo " ./run.sh --report-only --reports-dir reports --no-ai-review" echo " ./run.sh --demo-report --no-open" + echo " ./run.sh --demo-report --fixed-now 2026-05-02T21:30:00 --no-open" } validate_environment() { @@ -146,10 +152,19 @@ NO_QUERY=0 NO_OPEN=0 HEALTH_CHECK=0 DEMO_REPORT=0 +FIXED_NOW="" +REPORTS_DIR="reports" +KEEP_HTML=0 while [[ $# -gt 0 ]]; do case "$1" in --model|-m) CUSTOM_MODEL="${2:-}" + if [[ -z "$CUSTOM_MODEL" || "$CUSTOM_MODEL" == --* ]]; then + echo "Missing value for $1" >&2 + echo "" >&2 + usage >&2 + exit 2 + fi shift 2 ;; --report-only) @@ -172,6 +187,30 @@ while [[ $# -gt 0 ]]; do DEMO_REPORT=1 shift ;; + --reports-dir) + REPORTS_DIR="${2:-}" + if [[ -z "$REPORTS_DIR" || "$REPORTS_DIR" == --* ]]; then + echo "Missing value for $1" >&2 + echo "" >&2 + usage >&2 + exit 2 + fi + shift 2 + ;; + --keep-html) + KEEP_HTML=1 + shift + ;; + --fixed-now) + FIXED_NOW="${2:-}" + if [[ -z "$FIXED_NOW" || "$FIXED_NOW" == --* ]]; then + echo "Missing value for $1" >&2 + echo "" >&2 + usage >&2 + exit 2 + fi + shift 2 + ;; --health-check) HEALTH_CHECK=1 shift @@ -203,6 +242,19 @@ if [[ -z "${PYTHON_BIN:-}" ]]; then fi fi +if [[ -n "$FIXED_NOW" ]]; then + if ! 
"$PYTHON_BIN" - "$FIXED_NOW" <<'PY' >/dev/null 2>&1 +from datetime import datetime +import sys +datetime.fromisoformat(sys.argv[1].replace("Z", "+00:00")) +PY + then + echo "Invalid value for --fixed-now: must be an ISO timestamp, e.g. 2026-05-02T21:30:00" >&2 + exit 2 + fi + export MERAKI_REPORT_FIXED_NOW="$FIXED_NOW" +fi + if (( HEALTH_CHECK == 1 )); then HEALTH_ARGS=() if (( REPORT_ONLY == 1 || NO_QUERY == 1 )); then @@ -214,10 +266,15 @@ fi if (( DEMO_REPORT == 1 )); then DEMO_OUTPUT="backups/.demo/Fixture_Demo_Org" + DEMO_ARGS=() + if [[ -n "$FIXED_NOW" ]]; then + DEMO_ARGS+=("--fixed-now" "$FIXED_NOW") + fi "$PYTHON_BIN" -m reporting \ --source-dir tests/fixtures \ --org-name "Fixture Demo Org" \ - --output-dir "$DEMO_OUTPUT" + --output-dir "$DEMO_OUTPUT" \ + "${DEMO_ARGS[@]+"${DEMO_ARGS[@]}"}" demo_status=$? if (( NO_OPEN == 0 )); then demo_report=$(find "$DEMO_OUTPUT" -maxdepth 1 -type f -name '*_Complete_Report_*.pdf' | sort | tail -n 1) @@ -268,7 +325,11 @@ _hr() { print_header() { local now model_line mode_line now=$(date '+%A, %-d %B %Y %H:%M') - model_line="AI model: ${OLLAMA_MODEL:-gemma4:e2b (default)}" + if (( NO_AI_REVIEW == 1 )); then + model_line="AI review: disabled" + else + model_line="AI model: ${OLLAMA_MODEL:-gemma4:e2b (default)}" + fi if (( REPORT_ONLY == 1 )); then mode_line="Mode: report-only (skipping data collection)" elif (( NO_QUERY == 1 )); then mode_line="Mode: no-query (using existing backup data)" else mode_line="Mode: full run (fetching fresh API data)" @@ -427,6 +488,12 @@ run_stage() { if [[ "$script" == "meraki_backup.py" ]]; then extra_args+=("--force-refresh") # always fetch fresh — use --no-query to skip entirely fi + if [[ "$script" == "report_generator.py" ]]; then + extra_args+=("--reports-dir" "$REPORTS_DIR") + if (( KEEP_HTML == 0 )); then + extra_args+=("--pdf-only") + fi + fi "$PYTHON_BIN" "$script" "${extra_args[@]+"${extra_args[@]}"}" > "$tmp" 2>&1 local exit_code=$? 
@@ -575,15 +642,20 @@
 echo -e "${BLU}╰$(_hr 62 ─ | tr -d '\n')╯${R}"
 echo ""
 
 if (( FAIL_COUNT == 0 )); then
-  echo -e "  ${GRN}${BOLD}All stages passed.${R} Reports written to backups/."
+  echo -e "  ${GRN}${BOLD}All stages passed.${R} Reports written to ${REPORTS_DIR}/."
   echo ""
   # ── Auto-open generated reports ───────────────────────────────────────────
+  REPORT_OUTPUT_DIR="$(pwd)/$REPORTS_DIR/latest"
   BACKUPS_DIR="$(pwd)/backups"
   if (( NO_OPEN == 1 )); then
     echo -e "  ${DIM2}Auto-open skipped by --no-open.${R}"
-  elif [[ -d "$BACKUPS_DIR" ]]; then
+  elif [[ -d "$REPORT_OUTPUT_DIR" || -d "$BACKUPS_DIR" ]]; then
     REPORT_FILES=()
+    SEARCH_DIR="$REPORT_OUTPUT_DIR"
+    if [[ ! -d "$SEARCH_DIR" ]]; then
+      SEARCH_DIR="$BACKUPS_DIR"
+    fi
     while IFS= read -r org_dir; do
       named_report=$(find "$org_dir" -maxdepth 1 -type f -name '*_Complete_Report_*.pdf' | sort | tail -n 1)
       named_html=$(find "$org_dir" -maxdepth 1 -type f -name '*_Complete_Report_*.html' | sort | tail -n 1)
@@ -596,7 +668,7 @@
       elif [[ -f "$org_dir/report.html" ]]; then
         REPORT_FILES+=("$org_dir/report.html")
       fi
-    done < <(find "$BACKUPS_DIR" -mindepth 1 -maxdepth 1 -type d | sort)
+    done < <(find "$SEARCH_DIR" -mindepth 1 -maxdepth 1 -type d | sort)
 
     if (( ${#REPORT_FILES[@]} > 0 )); then
       echo -e "  ${OLV}Opening ${#REPORT_FILES[@]} report(s)…${R}"
@@ -618,7 +690,7 @@
       _open_file "$f"
     done
   else
-    echo -e "  ${DIM2}No report files found in backups/ — run the pipeline first.${R}"
+    echo -e "  ${DIM2}No report files found in ${REPORTS_DIR}/latest or backups/ — run the pipeline first.${R}"
   fi
 fi
 else
diff --git a/tests/test_backup.py b/tests/test_backup.py
index 1cc3a20..78b758f 100644
--- a/tests/test_backup.py
+++ b/tests/test_backup.py
@@ -97,6 +97,21 @@ def test_pipeline_version_is_string(self):
         assert isinstance(mb.PIPELINE_VERSION, str)
 
 
+class TestClientSummaries:
+    def test_ap_client_summary_ignores_wired_clients(self):
+        summary = mb.summarize_ap_clients(
+            {
+                "N_1": [
+                    {"recentDeviceConnection": "Wireless", "recentDeviceSerial": "AP1"},
+                    {"recentDeviceConnection": "Wireless", "recentDeviceSerial": "AP1"},
+                    {"recentDeviceConnection": "Wired", "recentDeviceSerial": "SW1"},
+                ]
+            }
+        )
+
+        assert summary["ap_client_counts"] == [("AP1", 2)]
+
+
 class TestPagedGetRateLimit:
     def test_retry_after_header_is_honored(self, monkeypatch):
         sleeps = []
@@ -217,3 +232,33 @@ def test_top_models_present(self):
         result = mb.summarize_inventory(inventory)
         models = [m[0] for m in result.get("top_models", [])]
         assert "MS225" in models
+
+
+class TestRecommendSwitchPorts:
+    def test_disconnected_access_port_messages_are_not_findings(self):
+        result = mb.recommend_switch_ports(
+            {
+                "SW1": [
+                    {
+                        "portId": "1",
+                        "status": "Disconnected",
+                        "isUplink": False,
+                        "errors": ["Port disconnected", "No link detected"],
+                        "warnings": ["Link down"],
+                    },
+                    {
+                        "portId": "2",
+                        "status": "Connected",
+                        "isUplink": False,
+                        "errors": ["CRC errors detected"],
+                        "warnings": [],
+                    },
+                ]
+            },
+            {"SW1": [{"portId": "1", "enabled": True}, {"portId": "2", "enabled": True}]},
+        )
+
+        findings = result["switch_port_findings"]
+        assert len(findings) == 1
+        assert findings[0]["portId"] == "2"
+        assert findings[0]["detail"] == "CRC errors detected"
diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py
index 3a109a5..8634baa 100644
--- a/tests/test_pipeline.py
+++ b/tests/test_pipeline.py
@@ -111,6 +111,29 @@ def test_main_writes_ai_review_output(self, monkeypatch, tmp_path):
         assert "# AI-Enhanced Network Recommendations" in out
         assert "## Review" in out
 
+    def test_main_accepts_model_and_chunk_size_args(self, monkeypatch, tmp_path):
+        master = tmp_path / "master_recommendations.md"
+        master.write_text("recommendation body", encoding="utf-8")
+        monkeypatch.setattr(orv, "BACKUPS_DIR", str(tmp_path))
+        monkeypatch.setattr(orv, "MODEL", "original-model")
+        monkeypatch.setattr(orv, "MAX_INPUT_CHARS", 50_000)
+        monkeypatch.setattr(orv, "ollama_available", lambda: True)
+        monkeypatch.setattr(orv, "review_content", lambda content: f"{orv.MODEL}:{orv.MAX_INPUT_CHARS}")
+
+        assert orv.main(["--model", "test-model:1b", "--max-input-chars", "1234"]) == 0
+        out = (tmp_path / "recommendations_ai_enhanced.md").read_text(encoding="utf-8")
+        assert "_Model: test-model:1b" in out
+        assert "test-model:1b:1234" in out
+
+    def test_main_rejects_invalid_chunk_size_arg(self, capsys):
+        assert orv.main(["--max-input-chars", "0"]) == 2
+        assert "max_input_chars must be greater than zero" in capsys.readouterr().err
+
+    def test_main_reports_invalid_env_chunk_size_without_import_failure(self, monkeypatch, capsys):
+        monkeypatch.setattr(orv, "CONFIG_ERRORS", ["OLLAMA_MAX_INPUT_CHARS must be an integer"])
+        assert orv.main([]) == 2
+        assert "OLLAMA_MAX_INPUT_CHARS must be an integer" in capsys.readouterr().err
+
 
 class TestRunShSmoke:
     def test_help_exits_zero(self):
@@ -124,6 +147,9 @@ def test_help_exits_zero(self):
         assert result.returncode == 0
         assert "Usage: ./run.sh [options]" in result.stdout
         assert "--demo-report" in result.stdout
+        assert "--fixed-now" in result.stdout
+        assert "--reports-dir" in result.stdout
+        assert "--keep-html" in result.stdout
 
     def test_unknown_flag_exits_two(self):
         result = subprocess.run(
@@ -136,6 +162,67 @@
         assert result.returncode == 2
         assert "Unknown option" in result.stderr
 
+    def test_model_flag_requires_value(self):
+        result = subprocess.run(
+            ["bash", str(PROJECT_ROOT / "run.sh"), "--model"],
+            cwd=PROJECT_ROOT,
+            capture_output=True,
+            text=True,
+            timeout=10,
+        )
+        assert result.returncode == 2
+        assert "Missing value for --model" in result.stderr
+
+    def test_fixed_now_flag_requires_value(self):
+        result = subprocess.run(
+            ["bash", str(PROJECT_ROOT / "run.sh"), "--fixed-now"],
+            cwd=PROJECT_ROOT,
+            capture_output=True,
+            text=True,
+            timeout=10,
+        )
+        assert result.returncode == 2
+        assert "Missing value for --fixed-now" in result.stderr
+
+    def test_reports_dir_flag_requires_value(self):
+        result = subprocess.run(
+            ["bash", str(PROJECT_ROOT / "run.sh"), "--reports-dir"],
+            cwd=PROJECT_ROOT,
+            capture_output=True,
+            text=True,
+            timeout=10,
+        )
+        assert result.returncode == 2
+        assert "Missing value for --reports-dir" in result.stderr
+
+    def test_fixed_now_rejects_invalid_value(self):
+        result = subprocess.run(
+            ["bash", str(PROJECT_ROOT / "run.sh"), "--fixed-now", "not-a-date", "--health-check"],
+            cwd=PROJECT_ROOT,
+            capture_output=True,
+            text=True,
+            timeout=10,
+        )
+        assert result.returncode == 2
+        assert "Invalid value for --fixed-now" in result.stderr
+
+    def test_demo_report_accepts_fixed_now(self):
+        demo_output = PROJECT_ROOT / "backups" / ".demo" / "Fixture_Demo_Org"
+        result = subprocess.run(
+            [
+                "bash", str(PROJECT_ROOT / "run.sh"),
+                "--demo-report",
+                "--fixed-now", "2026-05-02T21:30:00",
+                "--no-open",
+            ],
+            cwd=PROJECT_ROOT,
+            capture_output=True,
+            text=True,
+            timeout=20,
+        )
+        assert result.returncode == 0
+        assert (demo_output / "Fixture_Demo_Org_Complete_Report_2026-05-02.html").exists()
+
     def test_health_check_flag_exits_zero_for_report_only(self):
         result = subprocess.run(
             ["bash", str(PROJECT_ROOT / "run.sh"), "--report-only", "--health-check"],
@@ -181,6 +268,51 @@ def fake_write_pdf(html_path, pdf_path):
         monkeypatch.setattr(app, "write_pdf", fake_write_pdf)
 
-        assert app.main(["--source-dir", str(source), "--output-dir", str(output)]) == 0
-        assert any(output.glob("Demo_Org_Complete_Report_*.pdf"))
+        assert app.main([
+            "--source-dir", str(source),
+            "--output-dir", str(output),
+            "--fixed-now", "2026-05-02T21:30:00",
+        ]) == 0
+        assert (output / "Demo_Org_Complete_Report_2026-05-02.pdf").exists()
+        assert (output / "Demo_Org_2026-05-02_2130_report.pdf").exists()
         assert (output / "report.pdf").exists()
+
+    def test_reports_dir_writes_run_and_latest_without_html_when_pdf_only(self, monkeypatch, tmp_path):
+        from reporting import app
+
+        source = tmp_path / "source"
+        reports = tmp_path / "reports"
+        source.mkdir()
+        (source / "recommendations.md").write_text("# Meraki Recommendations: Demo Org\n", encoding="utf-8")
+
+        monkeypatch.setattr(
+            app,
+            "build_org_report",
+            lambda org_dir, org_name, report_kind="full": f"<html>{report_kind}</html>",
+        )
+
+        def fake_write_pdf(html_path, pdf_path):
+            Path(pdf_path).write_text("pdf", encoding="utf-8")
+            return True
+
+        monkeypatch.setattr(app, "write_pdf", fake_write_pdf)
+
+        assert app.main([
+            "--source-dir", str(source),
+            "--reports-dir", str(reports),
+            "--pdf-only",
+            "--fixed-now", "2026-05-02T21:30:00",
+        ]) == 0
+
+        run_dir = reports / "Demo_Org" / "2026-05-02_2130"
+        latest_dir = reports / "latest" / "Demo_Org"
+        assert (run_dir / "Demo_Org_Complete_Report_2026-05-02.pdf").exists()
+        assert (latest_dir / "Demo_Org_Complete_Report_2026-05-02.pdf").exists()
+        assert (latest_dir / "report.pdf").exists()
+        assert not (run_dir / "report.pdf").exists()
+        assert not (run_dir / "Demo_Org_2026-05-02_2130_report.pdf").exists()
+        assert not list(run_dir.glob("*.html"))
+        assert not list(latest_dir.glob("*.html"))
+
+    def test_fixed_now_rejects_invalid_timestamp(self):
+        from reporting import app
+
+        with pytest.raises(SystemExit) as exc:
+            app.main(["--fixed-now", "not-a-date"])
+        assert exc.value.code == 2
diff --git a/tests/test_report.py b/tests/test_report.py
index 81bfa20..d3fb74b 100644
--- a/tests/test_report.py
+++ b/tests/test_report.py
@@ -1,5 +1,7 @@
 """Tests for reporting/app.py — build_org_report() with fixture data."""
+import json
 import os
+import shutil
 import sys
 from datetime import datetime
 import pytest
@@ -29,6 +31,33 @@ def test_no_unclosed_section_tags(self, report_html):
     def test_exec_summary_present(self, report_html):
         assert "executive-summary" in report_html
 
+    def test_toc_entries_link_to_report_sections(self, report_html):
+        assert 'class="toc-link" href="#executive-summary"' in report_html
+        assert 'class="toc-link" href="#network-overview"' in report_html
+        assert 'class="toc-link" href="#config-coverage"' in report_html
+        assert 'class="toc-link" href="#switch-deep-dive"' in report_html
+        assert 'class="toc-link" href="#unifi-comparison"' in report_html
+        assert 'class="toc-link" href="#vlan-reference"' in report_html
+
+    def test_toc_css_uses_denser_spacing(self, report_html):
+        from reporting.html_shell import build_html
+        html = build_html("Fixture", report_html)
+        assert "padding: 44px 64px;" in html
+        assert "padding: 6px 0;" in html
+        assert ".toc-link" in html
+
+    def test_report_shell_has_header_footer_and_page_numbers(self, report_html):
+        from reporting.html_shell import build_html
+        html = build_html("Fixture", report_html)
+        assert 'content: "TM Meraki Baseline";' in html
+        assert 'content: "Release 2026_5_3";' in html
+        assert 'content: "Page " counter(page) " of " counter(pages);' in html
+
+    def test_release_and_end_report_page_rendered(self, report_html):
+        assert "v2026_5_3" in report_html
+        assert "End of Report" in report_html
+        assert "TM Meraki Baseline" in report_html
+
     def test_current_state_assessment_present(self, report_html):
         assert "Current State Assessment" in report_html
 
@@ -41,6 +70,64 @@ def test_recommended_priorities_present(self, report_html):
     def test_health_grid_present(self, report_html):
         assert "health-grid" in report_html
 
+    def test_exec_health_counts_security_status_case_insensitively(self, tmp_path):
+        from reporting.app import build_org_report
+
+        for fn in os.listdir(FIXTURES):
+            src = os.path.join(FIXTURES, fn)
+            dst = tmp_path / fn
+            if os.path.isfile(src):
+                shutil.copy(src, dst)
+
+        (tmp_path / "security_baseline.json").write_text(
+            json.dumps(
+                {
+                    "checks": [
+                        {"check": "AMP", "status": "Pass"},
+                        {"check": "IPS", "status": "Warning"},
+                        {"check": "Port Forwarding", "status": "Pass"},
+                    ]
+                }
+            ),
+            encoding="utf-8",
+        )
+
+        html = build_org_report(str(tmp_path), "Security Count Test", report_kind="exec")
+        assert "1 warn" in html
+        assert "2 checks passed" in html
+        assert "0 checks passed" not in html
+
+    def test_exec_health_counts_nested_mx_uplinks(self, tmp_path):
+        from reporting.app import build_org_report
+
+        for fn in os.listdir(FIXTURES):
+            src = os.path.join(FIXTURES, fn)
+            dst = tmp_path / fn
+            if os.path.isfile(src):
+                shutil.copy(src, dst)
+
+        (tmp_path / "uplink_statuses.json").write_text(
+            json.dumps(
+                [
+                    {
+                        "serial": "Q2XX-TEST-0001",
+                        "model": "MX95",
+                        "networkId": "N_test_001",
+                        "uplinks": [
+                            {"interface": "wan1", "status": "active"},
+                            {"interface": "wan2", "status": "ready"},
+                        ],
+                    }
+                ]
+            ),
+            encoding="utf-8",
+        )
+
+        html = build_org_report(str(tmp_path), "WAN Count Test", report_kind="exec")
+        assert "1 active" in html
+        assert "1 standby-ready" in html
+        assert "No WAN data" not in html
+
     def test_security_section_present(self, report_html):
         assert "security-baseline" in report_html
         assert "Security Posture Summary" in report_html
@@ -78,6 +165,546 @@ def test_wpc_topology_excluded(self, report_html):
         """Topology section should still exist even with empty LLDP fixture."""
         assert "topology" in report_html
 
+    def test_sparse_data_sections_explain_missing_inputs(self, report_html):
+        assert "network_clients.json" in report_html
+        assert "Port telemetry available:" in report_html
+        assert "LLDP/CDP neighbor data available:" in report_html
+
+    def test_switch_identity_and_poe_budget_reference_render(self, report_html):
+        assert "Switch Identity & PoE Budget Reference" in report_html
+        assert "Core-SW-1" in report_html
+        assert "Q2SW-TEST-0001" in report_html
+        assert "MS225-24P" in report_html
+        assert "370 W" in report_html
+        assert "MS225 Overview and Specifications" in report_html
+
+    def test_poe_analysis_uses_catalog_budget_and_switch_labels(self, tmp_path):
+        from reporting.app import build_org_report
+
+        for fn in os.listdir(FIXTURES):
+            src = os.path.join(FIXTURES, fn)
+            dst = tmp_path / fn
+            if os.path.isfile(src):
+                shutil.copy(src, dst)
+
+        (tmp_path / "poe_power_summary.json").write_text(
+            json.dumps(
+                {
+                    "switch_poe_totals": [
+                        {
+                            "serial": "Q2SW-TEST-0001",
+                            "avgWatts": 42.5,
+                            "powerUsageInWh": 1020,
+                        }
+                    ],
+                    "port_poe_totals": [],
+                }
+            ),
+            encoding="utf-8",
+        )
+
+        html = build_org_report(str(tmp_path), "PoE Test")
+        assert "PoE Budget Reference Coverage" in html
+        assert "they do not yet include authoritative switch maximum PoE budget values" not in html
+        assert "Known Budget" in html
+        assert "Headroom" in html
+        assert "Core-SW-1 (Q2SW-TEST-0001)" in html
+        assert "327.5 W" in html
+
+    def test_expanded_hardware_catalog_renders_catalyst_poe_budget(self, tmp_path):
+        from reporting.app import build_org_report
+
+        for fn in os.listdir(FIXTURES):
+            src = os.path.join(FIXTURES, fn)
+            dst = tmp_path / fn
+            if os.path.isfile(src):
+                shutil.copy(src, dst)
+
+        (tmp_path / "devices_availabilities.json").write_text(
+            json.dumps(
+                [
+                    {
+                        "serial": "CAT-1",
+                        "name": "Catalyst Core",
+                        "model": "C9300-48UXM",
+                        "productType": "switch",
+                        "status": "online",
+                        "network": {"id": "N_test_001", "name": "Main"},
+                    }
+                ]
+            ),
+            encoding="utf-8",
+        )
+
+        html = build_org_report(str(tmp_path), "Catalyst Test")
+        assert "Catalyst Core" in html
+        assert "490 W" in html
+        assert "Catalyst 9300-M Datasheet" in html
+
+    def test_device_availability_records_are_enriched_from_inventory(self, tmp_path):
+        from reporting.app import build_org_report
+
+        for fn in os.listdir(FIXTURES):
+            src = os.path.join(FIXTURES, fn)
+            dst = tmp_path / fn
+            if os.path.isfile(src):
+                shutil.copy(src, dst)
+
+        (tmp_path / "devices_availabilities.json").write_text(
+            json.dumps(
+                [
+                    {
+                        "serial": "Q2AP-TEST-0001",
+                        "status": "online",
+                        "productType": "wireless",
+                        "network": {"id": "N_test_001", "name": "Main"},
+                    }
+                ]
+            ),
+            encoding="utf-8",
+        )
+        (tmp_path / "inventory_devices.json").write_text(
+            json.dumps(
+                [
+                    {
+                        "serial": "Q2AP-TEST-0001",
+                        "name": "Library AP",
+                        "model": "MR46",
+                        "productType": "wireless",
+                        "networkId": "N_test_001",
+                    }
+                ]
+            ),
+            encoding="utf-8",
+        )
+
+        html = build_org_report(str(tmp_path), "Inventory Merge Test")
+        assert "Library AP" in html
+        assert "MR46" in html
+        assert "Q2AP-TEST-0001Unknown model" not in html
+
+    def test_k12_vlan_reference_renders_as_supplemental_guidance(self, report_html):
+        assert "K-12 VLAN Segmentation Reference" in report_html
+        assert "Teacher / Classroom Blocks" in report_html
+        assert "10.250.0.0/16" in report_html
+        assert "target architecture, not evidence of current compliance" in report_html
+
+    def test_unifi_comparison_requires_pricing_reference(self, report_html):
+        assert "Pricing needed" in report_html
+        assert "$X" not in report_html
+        assert "$29/mo" not in report_html
+
+    def test_unifi_comparison_labels_used_meraki_reference_pricing(self, report_html):
+        assert "Cisco/Meraki Used-Market Reference" in report_html
+        assert "NetworkTigers (used)" in report_html
+        assert "$239.99" in report_html
+        assert "Used-market" in report_html
+
+    def test_unifi_comparison_breaks_out_migration_cost_confidence(self, report_html):
+        assert "Migration Cost Breakdown" in report_html
+        assert "Wireless AP hardware" in report_html
+        assert "Optics/transceivers" in report_html
+        assert "Public MSRP" in report_html
+
+    def test_executive_summary_includes_data_confidence_snapshot(self, report_html):
+        assert "Data Confidence Snapshot" in report_html
+        assert "Client attachment detail" in report_html
+        assert "Firewall and filtering backup" in report_html
+
+    def test_unifi_comparison_uses_org_local_pricing_json(self, tmp_path):
+        from reporting.app import build_org_report
+
+        for fn in os.listdir(FIXTURES):
+            src = os.path.join(FIXTURES, fn)
+            dst = tmp_path / fn
+            if os.path.isfile(src):
+                shutil.copy(src, dst)
+
+        (tmp_path / "inventory_summary.json").write_text(
+            json.dumps({"top_models": [["MS225-24P", 2]], "by_type": {"switch": 2}}),
+            encoding="utf-8",
+        )
+        (tmp_path / "pricing.json").write_text(
+            json.dumps(
+                {
+                    "unifi_equivalents": {"MS225": "USW Pro 24 PoE"},
+                    "models": {
+                        "MS225": {
+                            "meraki_unit_cost": 1000,
+                            "unifi_unit_cost": 600,
+                        }
+                    },
+                }
+            ),
+            encoding="utf-8",
+        )
+
+        html = build_org_report(str(tmp_path), "Pricing Test")
+        assert "USW Pro 24 PoE" in html
+        assert "$2,000" in html
+        assert "$1,200" in html
+        assert "$800 (40% lower)" in html
+
+    def test_disabled_unconfigured_ssids_are_collapsed(self, tmp_path):
+        from reporting.app import build_org_report
+
+        for fn in os.listdir(FIXTURES):
+            src = os.path.join(FIXTURES, fn)
+            dst = tmp_path / fn
+            if os.path.isfile(src):
+                shutil.copy(src, dst)
+
+        (tmp_path / "wireless_ssids.json").write_text(
+            json.dumps(
+                {
+                    "N_test_001": [
+                        {"name": "Staff", "enabled": True, "authMode": "psk"},
+                        {"name": "Unconfigured SSID 2", "number": 1, "enabled": False, "authMode": "open"},
+                        {"name": "Unconfigured SSID 3", "number": 2, "enabled": False, "authMode": "open"},
+                    ]
+                }
+            ),
+            encoding="utf-8",
+        )
+
+        html = build_org_report(str(tmp_path), "SSID Test")
+        assert "Staff" in html
+        assert "Unconfigured SSID 2" not in html
+        assert "2 disabled default/unconfigured SSID slot(s) hidden." in html
+
+    def test_switch_deep_dive_table_css_is_extra_dense(self, report_html):
+        from reporting.html_shell import build_html
+        html = build_html("Fixture", report_html)
+        assert "@page switch-detail" in html
+        assert "size: A4 landscape;" in html
+        assert "page: switch-detail;" in html
+        assert "table.data.switch-detail-table {" in html
+        assert "font-size: 4.2px;" in html
+        assert "padding: 0.2px 0.6px;" in html
+        assert "line-height: 0.95;" in html
+        assert "white-space: nowrap;" in html
+        assert "margin-left: 0;" in html
+        assert ".switch-detail-table .c-neighbor { width: 28%; }" in html
+
+    def test_switch_deep_dive_uses_compact_column_labels(self, report_html):
+        assert "<th>Label</th><th>Heat</th><th>Role</th><th>Stat</th><th>Spd</th><th>Dup</th><th>VLAN</th>" in report_html
+        assert "<th>Data</th><th>Kbps</th><th>Pwr</th><th>Flg</th><th>Neighbor</th>" in report_html
+        assert "Current Throughput" not in report_html
+        assert "Connected Device" not in report_html
+
+    def test_client_overview_renders_when_wireless_clients_missing(self, tmp_path):
+        from reporting.app import build_org_report
+
+        for fn in os.listdir(FIXTURES):
+            src = os.path.join(FIXTURES, fn)
+            dst = tmp_path / fn
+            if os.path.isfile(src):
+                shutil.copy(src,
dst) + + (tmp_path / "wireless_clients.json").write_text("{}", encoding="utf-8") + (tmp_path / "clients_overview.json").write_text( + json.dumps( + { + "N_test_001": { + "counts": {"total": 42, "withHeavyUsage": 3}, + "usages": {"average": 2048, "withHeavyUsageAverage": 5345}, + } + } + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "Client Test") + assert "Client Overview Summary" in html + assert "Total clients" in html + assert ">42<" in html + assert "Heavy-usage clients" in html + + def test_network_clients_render_wired_and_wireless_detail(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "network_clients.json").write_text( + json.dumps( + { + "N_test_001": [ + { + "description": "Teacher Laptop", + "mac": "00:11:22:33:44:55", + "recentDeviceConnection": "Wireless", + "recentDeviceName": "Library AP", + "ssid": "Faculty", + "vlan": "110", + "os": "macOS", + "usage": {"sent": 2000, "recv": 3000}, + }, + { + "description": "Office Printer", + "mac": "00:11:22:33:44:66", + "recentDeviceConnection": "Wired", + "recentDeviceName": "Core-SW-1", + "switchport": "5", + "vlan": "100", + "deviceTypePrediction": "Printer", + "usage": {"sent": 100, "recv": 200}, + }, + ] + } + ), + encoding="utf-8", + ) + (tmp_path / "wireless_clients.json").write_text("{}", encoding="utf-8") + + html = build_org_report(str(tmp_path), "Client Detail Test") + assert "Client detail source: network_clients.json" in html + assert "Clients by Connection Type" in html + assert "Wireless" in html + assert "Wired" in html + assert "Teacher Laptop" in html + assert "Office Printer" in html + assert "Core-SW-1" in html + + def test_addressing_and_dhcp_audit_renders_vlans_and_scope_utilization(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = 
os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "appliance_vlans.json").write_text( + json.dumps( + { + "N_test_001": [ + { + "id": "110", + "name": "Faculty", + "subnet": "10.110.0.0/16", + "applianceIp": "10.110.0.1", + "dhcpHandling": "Run a DHCP server", + "dhcpLeaseTime": "1 day", + "dhcpRelayServerIps": [], + }, + { + "id": "20", + "name": "Facilities", + "subnet": "10.20.0.0/16", + "applianceIp": "10.20.0.1", + "dhcpHandling": "Relay DHCP to another server", + "dhcpRelayServerIps": ["10.10.0.5"], + }, + ] + } + ), + encoding="utf-8", + ) + (tmp_path / "appliance_dhcp_subnets.json").write_text( + json.dumps( + { + "Q2MX-TEST-0001": [ + { + "vlanId": 110, + "subnet": "10.110.0.0/16", + "usedCount": 900, + "freeCount": 100, + } + ] + } + ), + encoding="utf-8", + ) + (tmp_path / "network_clients.json").write_text( + json.dumps( + { + "N_test_001": [ + {"description": "Teacher Laptop", "vlan": "110", "ip": "10.110.4.15"}, + {"description": "Teacher iPad", "vlan": "110", "ip": "10.110.4.16"}, + ] + } + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "Addressing Test") + assert "Addressing & DHCP Scope Audit" in html + assert "10.110.0.0/16" in html + assert "Run a DHCP server" in html + assert "Relay DHCP to another server" in html + assert "10.10.0.5" in html + assert "90.0% used" in html + assert "2" in html + + def test_appliance_policy_backup_renders_firewall_and_content_filtering(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "appliance_policy_backup.json").write_text( + json.dumps( + { + "N_test_001": { + "l3FirewallRules": { + "rules": [ + { + "comment": "Deny students to servers", + "policy": "deny", + "protocol": "tcp", + "srcCidr": "10.250.0.0/16", + "destCidr": "10.10.0.0/16", + "destPort": 
"445", + } + ], + "syslogDefaultRule": True, + }, + "l7FirewallRules": { + "rules": [ + {"policy": "deny", "type": "host", "value": "example.com"} + ] + }, + "inboundFirewallRules": {"rules": []}, + "portForwardingRules": { + "rules": [ + { + "name": "Camera NVR", + "protocol": "tcp", + "publicPort": "8443", + "localPort": "443", + "lanIp": "10.10.0.20", + } + ] + }, + "contentFiltering": { + "blockedUrlCategories": [{"id": "meraki:contentFiltering/category/1", "name": "Adult"}], + "allowedUrlPatterns": ["school.edu"], + "blockedUrlPatterns": ["bad.example"], + }, + "groupPolicies": [{"groupPolicyId": "101", "name": "Students"}], + "siteToSiteVpn": {"mode": "spoke"}, + "syslogServers": {"servers": [{"host": "10.10.0.50", "port": 514}]}, + } + } + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "Policy Backup Test") + assert "MX Firewall, Filtering & Policy Backup" in html + assert "Deny students to servers" in html + assert "10.250.0.0/16" in html + assert "Camera NVR" in html + assert "Adult" in html + assert "1 cat / 1 allow / 1 block" in html + assert "spoke" in html + + def test_firmware_status_renders_current_and_available_versions(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "firmware_upgrades.json").write_text( + json.dumps( + [ + { + "network": {"id": "N_test_001", "name": "Main"}, + "products": {"wireless": True}, + "currentVersion": {"shortName": "MR 30.6"}, + "availableVersions": [ + {"shortName": "MR 31.1", "releaseType": "stable"}, + ], + "isUpgradeAvailable": True, + "upgradeStrategy": "minimizeUpgradeTime", + } + ] + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "Firmware Test") + assert "Firmware Status & Available Versions" in html + assert "MR 30.6" in html + assert "MR 31.1" in html + assert "Upgrade Available" in html + + def 
test_firmware_status_skips_history_only_rows(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "firmware_upgrades.json").write_text( + json.dumps( + [ + { + "network": {"id": "N_test_001", "name": "Main"}, + "fromVersion": {"shortName": "MR 30.6"}, + "toVersion": {"shortName": "MR 31.1"}, + "productTypes": ["wireless"], + "status": "Completed", + "completedAt": "2026-03-21 01:00:00 UTC", + } + ] + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "Firmware History Test") + assert "Firmware Status & Available Versions" not in html + assert "Recent Firmware Upgrades" in html + assert "MR 31.1" in html + + def test_eos_inventory_highlights_announced_and_two_year_dates(self, monkeypatch, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + monkeypatch.setenv("MERAKI_REPORT_FIXED_NOW", "2026-05-03T12:00:00") + (tmp_path / "inventory_devices.json").write_text( + json.dumps( + [ + { + "serial": "SW-RED", + "name": "EOS Soon", + "model": "MS220", + "networkId": "N_test_001", + "eox": {"status": "announced", "endOfSupportAt": "2027-05-03T00:00:00Z"}, + }, + { + "serial": "SW-YELLOW", + "name": "EOS Later", + "model": "MS225", + "networkId": "N_test_001", + "eox": {"status": "announced", "endOfSupportAt": "2029-05-03T00:00:00Z"}, + }, + ] + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "EOS Test") + assert 'class="row-eos-critical"' in html + assert 'class="row-eos-announced"' in html + def test_exec_summary_report_variant(self): from reporting.app import build_org_report html = build_org_report(FIXTURES, "Test Org", report_kind="exec") @@ -100,6 +727,160 @@ def test_dated_complete_report_filename(self): ) assert 
filename == "William_Penn_Charter_School_Complete_Report_2026-05-02.pdf" + def test_fixed_run_timestamp_makes_report_html_repeatable(self, monkeypatch): + from reporting.app import build_org_report + monkeypatch.setenv("MERAKI_REPORT_FIXED_NOW", "2026-05-02T21:30:00") + first = build_org_report(FIXTURES, "Test Org") + second = build_org_report(FIXTURES, "Test Org") + assert first == second + assert "Generated May 2, 2026 at 9:30 PM" in first + + def test_disconnected_ports_are_not_reported_as_issues(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "switch_port_statuses.json").write_text( + json.dumps( + { + "Q2SW-TEST-0001": [ + { + "portId": "1", + "status": "disconnected", + "errors": ["Port disconnected"], + "warnings": [], + "speed": "", + "duplex": "", + "poeMode": "auto", + "isUplink": False, + }, + { + "portId": "2", + "status": "connected", + "errors": ["CRC errors detected"], + "warnings": [], + "speed": "1 Gbps", + "duplex": "full", + "poeMode": "auto", + "isUplink": False, + }, + ] + }, + indent=2, + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "Port Test") + assert "CRC errors detected" in html + assert "Core-SW-1" in html + assert "Port disconnected" not in html + + def test_disconnected_ports_do_not_get_deep_dive_error_badges(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "switch_port_statuses.json").write_text( + json.dumps( + { + "Q2SW-TEST-0001": [ + { + "portId": "1", + "status": "disconnected", + "errors": ["Port disconnected"], + "warnings": [], + "speed": "", + "duplex": "", + "poeMode": "auto", + "isUplink": False, + } + ] + } + ), + encoding="utf-8", + ) + + html = 
build_org_report(str(tmp_path), "Disconnected Deep Dive Test") + assert "Port disconnected" not in html + assert ">1 error(s)<" not in html + + def test_100_gbps_ports_are_not_low_speed_warnings(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "switch_port_statuses.json").write_text( + json.dumps( + { + "Q2SW-TEST-0001": [ + { + "portId": "49", + "status": "connected", + "errors": [], + "warnings": [], + "speed": "100 Gbps", + "duplex": "full", + "isUplink": False, + } + ] + } + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "Speed Test") + assert "100G" in html + assert 'badge-warn">100 Gbps' not in html + + def test_mesh_no_repeater_404_is_suppressed(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "wireless_mesh_statuses.json").write_text( + json.dumps( + { + "N_test_001": { + "error": 'HTTP 404 for https://api.meraki.com/api/v1/networks/N_test/wireless/meshStatuses: {"errors":["No MR repeaters found on this network"]}' + } + } + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "Mesh Test") + assert "No MR repeaters found" not in html + assert "Mesh Status Notes" not in html + + def test_ap_rows_do_not_render_empty_model_code_tags(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + html = build_org_report(str(tmp_path), "AP Model Test") + assert "" not in html + class TestHealthCardRatings: """Unit test the health domain scoring logic independently.""" diff --git a/tests/test_topology.py b/tests/test_topology.py new file mode 
100644 index 0000000..557afeb --- /dev/null +++ b/tests/test_topology.py @@ -0,0 +1,84 @@ +"""Focused tests for topology SVG layout behavior.""" +import re + +from reporting.topology import _topo_pages, _topo_svg + + +def _switch(serial: str, name: str) -> dict: + return { + "serial": serial, + "name": name, + "model": "MS225-48FP", + "productType": "switch", + "status": "online", + } + + +def _uplink(child: str, parent: str, port: str = "49") -> tuple: + return ( + child, + { + "ports": { + port: { + "lldp": {"chassisId": parent, "portId": "1"}, + "cdp": {}, + } + } + }, + ) + + +def _label_x(svg: str, label: str) -> float: + match = re.search( + rf']*>{re.escape(label)}', + svg, + ) + assert match, f"Missing label {label!r}" + return float(match.group(1)) + + +def test_topology_orders_layers_by_parent_position_to_reduce_crossings(): + devices = [ + _switch("ROOT", "Root"), + _switch("B", "Z-B"), + _switch("C", "A-C"), + _switch("D", "A-D"), + _switch("E", "Z-E"), + ] + lldp = dict( + [ + _uplink("B", "ROOT"), + _uplink("C", "ROOT"), + _uplink("D", "B"), + _uplink("E", "C"), + ] + ) + ports = { + serial: [{"portId": "49", "isUplink": True, "speed": "1 Gbps"}] + for serial in ("B", "C", "D", "E") + } + + svg = _topo_svg(devices, lldp, {}, {}, ports, show_internet=False) + + assert _label_x(svg, "A-C") < _label_x(svg, "Z-B") + assert _label_x(svg, "Z-E") < _label_x(svg, "A-D") + + +def test_large_topology_overview_chunks_and_infers_distribution_parent_links(): + devices = [ + { + "serial": "MX1", + "name": "Firewall", + "model": "MX95", + "productType": "appliance", + "status": "online", + } + ] + [_switch(f"SW{i:02d}", f"Dist-{i:02d}") for i in range(13)] + + pages = _topo_pages(devices, {}, {}, {}, {}, enrichment={}) + + assert pages[0]["title"] == "Overview — Core / Distribution Layer (1/3)" + assert pages[1]["title"] == "Overview — Core / Distribution Layer (2/3)" + assert pages[2]["title"] == "Overview — Core / Distribution Layer (3/3)" + assert 
'stroke-dasharray="5 4"' in pages[0]["svg"] + assert re.search(r']*width="1332"', pages[0]["svg"]) From d00ff45ee9cb49a263b176b902eb5f7985629825 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 12:30:40 -0400 Subject: [PATCH 02/47] Add UPS runtime planning to reports --- reporting/app.py | 286 ++++++++++++++++++ .../reference/ups_runtime_reference.json | 268 ++++++++++++++++ tests/test_report.py | 36 +++ 3 files changed, 590 insertions(+) create mode 100644 reporting/reference/ups_runtime_reference.json diff --git a/reporting/app.py b/reporting/app.py index d7ae5a9..3c5ac0f 100644 --- a/reporting/app.py +++ b/reporting/app.py @@ -53,6 +53,11 @@ "reference", "pricing_reference.json", ) +UPS_REFERENCE_PATH = os.path.join( + os.path.dirname(os.path.abspath(__file__)), + "reference", + "ups_runtime_reference.json", +) def _report_slug(name: str) -> str: @@ -100,6 +105,68 @@ def _load_pricing_payload(org_dir: str) -> Dict[str, Any]: ) +def _load_ups_payload(org_dir: str) -> Dict[str, Any]: + return ( + load_json(os.path.join(org_dir, "ups_runtime_reference.json")) + or load_json(os.path.join(BASE_DIR, "ups_runtime_reference.json")) + or load_json(UPS_REFERENCE_PATH) + or {} + ) + + +def _format_money(value: int | float | None) -> str: + if not isinstance(value, (int, float)) or isinstance(value, bool): + return "Pricing needed" + return f"${value:,.0f}" if float(value).is_integer() else f"${value:,.2f}" + + +def _format_runtime_minutes(minutes: float | None) -> str: + if not isinstance(minutes, (int, float)) or isinstance(minutes, bool): + return "Over UPS rating" + if minutes < 60: + return f"{minutes:.0f} min" if minutes >= 10 else f"{minutes:.1f} min" + hours = int(minutes // 60) + mins = int(round(minutes % 60)) + if mins == 60: + hours += 1 + mins = 0 + return f"{hours}h {mins:02d}m" + + +def _interpolate_runtime_minutes(points: Any, watts: float) -> float | None: + if not isinstance(points, list) or not isinstance(watts, (int, float)) or watts <= 
0: + return None + cleaned = sorted( + ( + (float(p.get("watts")), float(p.get("minutes"))) + for p in points + if isinstance(p, dict) + and isinstance(p.get("watts"), (int, float)) + and isinstance(p.get("minutes"), (int, float)) + and p.get("watts") > 0 + and p.get("minutes") > 0 + ), + key=lambda pair: pair[0], + ) + if not cleaned: + return None + if watts <= cleaned[0][0]: + return cleaned[0][1] + if watts > cleaned[-1][0]: + return None + for (w1, m1), (w2, m2) in zip(cleaned, cleaned[1:]): + if w1 <= watts <= w2: + if watts == w1: + return m1 + if watts == w2: + return m2 + # Runtime curves are nonlinear. Log interpolation tracks UPS runtime charts + # better than a straight-line fit between sparse vendor chart points. + ratio = (math.log(watts) - math.log(w1)) / (math.log(w2) - math.log(w1)) + return math.exp(math.log(m1) + ratio * (math.log(m2) - math.log(m1))) + return None + + def _read_org_name(org_dir: str) -> str: name_file = os.path.join(org_dir, "org_name.txt") if os.path.exists(name_file): @@ -360,6 +427,7 @@ def _flatten_client_records(raw: Any) -> List[Dict[str, Any]]: appliance_policy_backup = load_json(os.path.join(org_dir, "appliance_policy_backup.json")) or {} pricing_payload = _load_pricing_payload(org_dir) hardware_catalog = _load_hardware_catalog(org_dir) + ups_payload = _load_ups_payload(org_dir) # switch_port_configs / statuses are {serial: [port, …]} dicts — flatten, # injecting switchSerial so downstream code can reference the parent switch. 
@@ -1080,6 +1148,7 @@ def _build_switch_summary_for_main_report() -> str: (4, "Traffic Flows & Bottleneck Analysis", "traffic-flows", ""), (5, "Device Health & Issues", "device-health", ""), (6, "PoE Power Analysis", "poe-analysis", ""), + ("6A", "Battery Backup Runtime Planning", "ups-runtime", ""), (7, "Security Baseline", "security-baseline", ""), (8, "Recommendations & Implementation Plan", "recommendations", ""), (9, "CIS 8 Controls Assessment", "cis8", ""), @@ -2754,6 +2823,222 @@ def _infer_product(*versions: Any) -> str: ) poe_html += "" + # ========================================================= + # SECTION 6A: UPS RUNTIME PLANNING + # ========================================================= + ups_meta = ups_payload.get("meta") if isinstance(ups_payload, dict) else {} + ups_products = ups_payload.get("products") if isinstance(ups_payload, dict) else {} + ups_assumptions = ( + ups_payload.get("switch_load_assumptions") if isinstance(ups_payload, dict) else {} + ) + ups_target_hours = ( + float(ups_meta.get("target_runtime_hours")) + if isinstance(ups_meta, dict) and isinstance(ups_meta.get("target_runtime_hours"), (int, float)) + else 10.0 + ) + ups_target_minutes = ups_target_hours * 60 + bx_ref = ups_products.get("BX1500M") if isinstance(ups_products, dict) else {} + smx_ref = ups_products.get("SMX2200RMLV2U") if isinstance(ups_products, dict) else {} + + def _estimated_switch_base_watts(model: str) -> float: + prefixes = ( + ups_assumptions.get("model_prefixes") + if isinstance(ups_assumptions, dict) and isinstance(ups_assumptions.get("model_prefixes"), dict) + else {} + ) + for prefix, watts in sorted(prefixes.items(), key=lambda item: len(str(item[0])), reverse=True): + if model.startswith(str(prefix)) and isinstance(watts, (int, float)): + return float(watts) + + fallback = ( + ups_assumptions.get("fallback_by_port_count") + if isinstance(ups_assumptions, dict) and isinstance(ups_assumptions.get("fallback_by_port_count"), dict) + else {} + ) + for 
port_count in ("48", "24", "8"): + if re.search(rf"(?:^|[-_]){port_count}(?:[A-Z-]|$)", model): + value = fallback.get(port_count) + if isinstance(value, (int, float)): + return float(value) + value = fallback.get("default") + return float(value) if isinstance(value, (int, float)) else 75.0 + + def _ups_runtime(product: Dict[str, Any], watts: float) -> float | None: + max_watts = product.get("max_watts") if isinstance(product, dict) else None + if isinstance(max_watts, (int, float)) and watts > float(max_watts): + return None + return _interpolate_runtime_minutes(product.get("runtime_points_minutes"), watts) + + def _smx_runtime_config(config: Dict[str, Any], watts: float) -> float | None: + max_watts = smx_ref.get("max_watts") if isinstance(smx_ref, dict) else None + if isinstance(max_watts, (int, float)) and watts > float(max_watts): + return None + return _interpolate_runtime_minutes(config.get("runtime_points_minutes"), watts) + + def _smx_stack_cost(external_count: int) -> float | None: + unit = smx_ref.get("unit_cost") if isinstance(smx_ref, dict) else None + ext = smx_ref.get("external_battery_unit_cost") if isinstance(smx_ref, dict) else None + if not isinstance(unit, (int, float)) or not isinstance(ext, (int, float)): + return None + return float(unit) + (external_count * float(ext)) + + ups_rows: List[List[str]] = [] + ups_loads: List[float] = [] + runtime_configs = ( + smx_ref.get("runtime_configurations") + if isinstance(smx_ref, dict) and isinstance(smx_ref.get("runtime_configurations"), list) + else [] + ) + for sw in sorted( + switch_devices, + key=lambda d: ( + str((d.get("network") or {}).get("name") or ""), + str(d.get("name") or d.get("model") or d.get("serial") or ""), + ), + ): + serial = str(sw.get("serial") or "") + model = str(sw.get("model") or "") + label = str(sw.get("name") or model or serial or "Unknown switch") + poe_data = poe_by_serial.get(serial, {}) if isinstance(poe_by_serial, dict) else {} + observed_poe = 
float(poe_data.get("avgWatts", 0) or 0) + base_watts = _estimated_switch_base_watts(model) + modeled_load = observed_poe + base_watts + ups_loads.append(modeled_load) + + bx_runtime = _ups_runtime(bx_ref, modeled_load) if isinstance(bx_ref, dict) else None + base_config = next( + ( + config for config in runtime_configs + if isinstance(config, dict) and int(config.get("external_battery_count") or 0) == 0 + ), + {}, + ) + smx_base_runtime = _smx_runtime_config(base_config, modeled_load) if base_config else None + target_config: Dict[str, Any] | None = None + target_runtime: float | None = None + for config in sorted( + [c for c in runtime_configs if isinstance(c, dict)], + key=lambda c: int(c.get("external_battery_count") or 0), + ): + runtime = _smx_runtime_config(config, modeled_load) + if runtime is not None and runtime >= ups_target_minutes: + target_config = config + target_runtime = runtime + break + target_external_count = ( + int(target_config.get("external_battery_count") or 0) + if target_config is not None + else 0 + ) + target_label = ( + str(target_config.get("label") or f"1 UPS + {target_external_count} external battery module(s)") + if target_config is not None + else "No listed stack reaches target" + ) + if target_config is not None and target_runtime is not None: + target_label = f"{target_label} ({_format_runtime_minutes(target_runtime)})" + + ups_rows.append( + [ + f"{label} ({serial})" if serial and label != serial else label, + model or "Unknown", + f"{observed_poe:.1f} W", + f"{base_watts:.1f} W", + f"{modeled_load:.1f} W", + _format_runtime_minutes(bx_runtime), + _format_runtime_minutes(smx_base_runtime), + target_label, + _format_money(_smx_stack_cost(target_external_count) if target_config is not None else None), + ] + ) + + avg_ups_load = sum(ups_loads) / len(ups_loads) if ups_loads else 0 + max_ups_load = max(ups_loads) if ups_loads else 0 + bx_max = bx_ref.get("max_watts") if isinstance(bx_ref, dict) else None + smx_max = 
smx_ref.get("max_watts") if isinstance(smx_ref, dict) else None
+    smx_unit = smx_ref.get("unit_cost") if isinstance(smx_ref, dict) else None
+    smx_ext = smx_ref.get("external_battery_unit_cost") if isinstance(smx_ref, dict) else None
+    ups_source_links = ""
+    if isinstance(ups_meta, dict) and isinstance(ups_meta.get("sources"), list):
+        links = []
+        for source in ups_meta.get("sources", [])[:4]:
+            if not isinstance(source, dict) or not source.get("url"):
+                continue
+            links.append(
+                f'<a href="{_he(str(source.get("url")))}">'
+                f'{_he(str(source.get("title") or source.get("url")))}</a>'
+            )
+        if links:
+            ups_source_links = (
+                "<p>Runtime reference sources: " + "; ".join(links) + ".</p>"
+            )
+
+    ups_html = f"""
+    <section id="ups-runtime">
+      <h2>6A. Battery Backup Runtime Planning</h2>
+      <div class="stat-grid">
+        <div class="stat">
+          <div class="stat-label">Switches Sized</div>
+          <div class="stat-value">{len(ups_rows)}</div>
+          <div class="stat-note">One UPS stack per listed switch load</div>
+        </div>
+        <div class="stat">
+          <div class="stat-label">Average Modeled Load</div>
+          <div class="stat-value">{avg_ups_load:.1f} W</div>
+          <div class="stat-note">Observed PoE + chassis estimate</div>
+        </div>
+        <div class="stat">
+          <div class="stat-label">Largest Modeled Load</div>
+          <div class="stat-value">{max_ups_load:.1f} W</div>
+          <div class="stat-note">Used for closet-level sizing checks</div>
+        </div>
+        <div class="stat">
+          <div class="stat-label">Planning Target</div>
+          <div class="stat-value">{ups_target_hours:g} hours</div>
+          <div class="stat-note">Smart-UPS external battery stack</div>
+        </div>
+      </div>
+      <h3>Sizing Method</h3>
+      <p>
+        The estimate models each switch as Meraki-observed average PoE draw plus a conservative chassis/base load by switch family. BX1500M is treated as a small single-switch option
+        {f"rated to {float(bx_max):g} W" if isinstance(bx_max, (int, float)) else ""}; SMX2200RMLV2U is treated as the rack/tower option
+        {f"rated to {float(smx_max):g} W" if isinstance(smx_max, (int, float)) else ""} with {smx_ref.get("external_battery_sku", "external battery modules") if isinstance(smx_ref, dict) else "external battery modules"} for extended runtime.
+        Runtime varies with battery age, temperature, load mix, and calibration, so these are planning estimates rather than procurement guarantees.
+      </p>
+    """
+    if ups_rows:
+        ups_html += render_section(
+            "UPS Runtime Estimate by Switch",
+            ups_rows,
+            headers=[
+                "Switch",
+                "Model",
+                "Observed PoE Avg",
+                "Chassis Est.",
+                "Modeled Load",
+                "BX1500M ETA",
+                "SMX Base ETA",
+                f"SMX Stack for {ups_target_hours:g}h",
+                "Stack Cost",
+            ],
+        )
+        ups_html += (
+            '<div class="note">'
+            "<h4>Pricing Reference</h4>"
+            "<p>"
+            f"Smart-UPS X controller: {_format_money(smx_unit)} each. "
+            f"External battery module: {_format_money(smx_ext)} each. "
+            "The BX1500M option is included for runtime planning, but no unit price was provided in the current reference data."
+            "</p>"
+            f"{ups_source_links}"
+            "</div>"
+        )
+    else:
+        ups_html += (
+            '<div class="note">'
+            "<p>No switch inventory was available for UPS runtime planning.</p>"
+            "</div>"
+        )
+    ups_html += "</section>"
+
     # =========================================================
     # SECTION 6: SECURITY & COMPLIANCE
     # =========================================================
@@ -4088,6 +4373,7 @@ def _phase_amount(*categories: str, field: str = "hardware") -> int:
         + traffic_html
         + issues_html
         + poe_html
+        + ups_html
         + security_html
         + recommendations_html
         + cis8_html
diff --git a/reporting/reference/ups_runtime_reference.json b/reporting/reference/ups_runtime_reference.json
new file mode 100644
index 0000000..05d5ace
--- /dev/null
+++ b/reporting/reference/ups_runtime_reference.json
@@ -0,0 +1,268 @@
+{
+  "meta": {
+    "name": "UPS runtime and cost planning reference",
+    "updated": "2026-05-05",
+    "currency": "USD",
+    "target_runtime_hours": 10,
+    "notes": [
+      "Runtime estimates interpolate vendor/reseller runtime chart points by modeled watt load.",
+      "Modeled switch load equals observed Meraki PoE average plus a conservative chassis/base load estimate by model family.",
+      "UPS runtime varies with battery age, temperature, power factor, connected non-switch loads, and battery calibration. Validate with a field wattmeter and UPS runtime test before procurement."
+ ], + "sources": [ + { + "title": "APC BX1500M product page", + "url": "https://www.apc.com/us/en/product/BX1500M/apc-backups-pro-1500va-tower-120v-10-nema-515r-outlets-avr-lcd/" + }, + { + "title": "APC Smart-UPS X SMX2200RMLV2U product page", + "url": "https://www.apc.com/ca/en/product/SMX2200RMLV2U/apc-smartups-x-line-interactive-2200va-rack-tower-convertible-2u-120v-6x-515r%2B2x-520r-nema-smartslot-extended-runtime/" + }, + { + "title": "Runtime Chart for Smart-UPS X", + "url": "https://www.apcguard.com/runtime-chart-for-smart-ups-x.asp" + }, + { + "title": "APCGuard BX1500M runtime chart reference", + "url": "https://www.apcguard.com/BR1500MS.asp" + } + ] + }, + "switch_load_assumptions": { + "model_prefixes": { + "C9300": 120, + "MS390": 120, + "MS350": 105, + "MS250": 95, + "MS225-48": 85, + "MS225-24": 55, + "MS210-48": 85, + "MS210-24": 55, + "MS130-48": 85, + "MS130-24": 55, + "MS120-48": 75, + "MS120-24": 45, + "MS120-8": 20 + }, + "fallback_by_port_count": { + "8": 25, + "24": 55, + "48": 85, + "default": 75 + } + }, + "products": { + "BX1500M": { + "vendor": "APC", + "name": "APC Back-UPS Pro 1500VA", + "sku": "BX1500M", + "max_watts": 900, + "max_va": 1500, + "unit_cost": null, + "cost_note": "Unit cost not provided in the current planning prompt.", + "configuration_label": "1 tower UPS", + "runtime_points_minutes": [ + {"watts": 50, "minutes": 134}, + {"watts": 100, "minutes": 68}, + {"watts": 200, "minutes": 31.5}, + {"watts": 300, "minutes": 19.1}, + {"watts": 400, "minutes": 12.9}, + {"watts": 450, "minutes": 10.9}, + {"watts": 500, "minutes": 9.2}, + {"watts": 600, "minutes": 6.8}, + {"watts": 700, "minutes": 5.1}, + {"watts": 800, "minutes": 3.8}, + {"watts": 900, "minutes": 2.8} + ] + }, + "SMX2200RMLV2U": { + "vendor": "APC", + "name": "APC Smart-UPS X 2200VA Rack/Tower", + "sku": "SMX2200RMLV2U", + "max_watts": 1980, + "max_va": 2200, + "unit_cost": 2220.55, + "external_battery_sku": "SMX120RMBP2U", + "external_battery_name": "APC External 
Battery Pack for Smart-UPS Extended Run SMX-Series", + "external_battery_unit_cost": 1266.49, + "runtime_configurations": [ + { + "external_battery_count": 0, + "label": "1 UPS", + "runtime_points_minutes": [ + {"watts": 50, "minutes": 444}, + {"watts": 100, "minutes": 266}, + {"watts": 200, "minutes": 143}, + {"watts": 300, "minutes": 96}, + {"watts": 400, "minutes": 71}, + {"watts": 500, "minutes": 56}, + {"watts": 600, "minutes": 46}, + {"watts": 700, "minutes": 38}, + {"watts": 800, "minutes": 33}, + {"watts": 900, "minutes": 28}, + {"watts": 1000, "minutes": 25}, + {"watts": 1200, "minutes": 20}, + {"watts": 1400, "minutes": 16}, + {"watts": 1600, "minutes": 13}, + {"watts": 1800, "minutes": 11}, + {"watts": 1980, "minutes": 10} + ] + }, + { + "external_battery_count": 1, + "label": "1 UPS + 1 external battery module", + "runtime_points_minutes": [ + {"watts": 50, "minutes": 1483}, + {"watts": 100, "minutes": 894}, + {"watts": 200, "minutes": 486}, + {"watts": 300, "minutes": 329}, + {"watts": 400, "minutes": 246}, + {"watts": 500, "minutes": 195}, + {"watts": 600, "minutes": 161}, + {"watts": 700, "minutes": 137}, + {"watts": 800, "minutes": 118}, + {"watts": 900, "minutes": 104}, + {"watts": 1000, "minutes": 93}, + {"watts": 1200, "minutes": 76}, + {"watts": 1400, "minutes": 63}, + {"watts": 1600, "minutes": 55}, + {"watts": 1800, "minutes": 48}, + {"watts": 1980, "minutes": 43} + ] + }, + { + "external_battery_count": 2, + "label": "1 UPS + 2 external battery modules", + "runtime_points_minutes": [ + {"watts": 50, "minutes": 2592}, + {"watts": 100, "minutes": 1564}, + {"watts": 200, "minutes": 852}, + {"watts": 300, "minutes": 578}, + {"watts": 400, "minutes": 433}, + {"watts": 500, "minutes": 345}, + {"watts": 600, "minutes": 285}, + {"watts": 700, "minutes": 242}, + {"watts": 800, "minutes": 210}, + {"watts": 900, "minutes": 185}, + {"watts": 1000, "minutes": 165}, + {"watts": 1200, "minutes": 136}, + {"watts": 1400, "minutes": 114}, + {"watts": 1600, 
"minutes": 98}, + {"watts": 1800, "minutes": 86}, + {"watts": 1980, "minutes": 77} + ] + }, + { + "external_battery_count": 3, + "label": "1 UPS + 3 external battery modules", + "runtime_points_minutes": [ + {"watts": 50, "minutes": 3743}, + {"watts": 100, "minutes": 2259}, + {"watts": 200, "minutes": 1232}, + {"watts": 300, "minutes": 836}, + {"watts": 400, "minutes": 627}, + {"watts": 500, "minutes": 500}, + {"watts": 600, "minutes": 414}, + {"watts": 700, "minutes": 352}, + {"watts": 800, "minutes": 306}, + {"watts": 900, "minutes": 270}, + {"watts": 1000, "minutes": 241}, + {"watts": 1200, "minutes": 198}, + {"watts": 1400, "minutes": 167}, + {"watts": 1600, "minutes": 144}, + {"watts": 1800, "minutes": 127}, + {"watts": 1980, "minutes": 114} + ] + }, + { + "external_battery_count": 4, + "label": "1 UPS + 4 external battery modules", + "runtime_points_minutes": [ + {"watts": 50, "minutes": 4924}, + {"watts": 100, "minutes": 2972}, + {"watts": 200, "minutes": 1622}, + {"watts": 300, "minutes": 1100}, + {"watts": 400, "minutes": 827}, + {"watts": 500, "minutes": 659}, + {"watts": 600, "minutes": 546}, + {"watts": 700, "minutes": 464}, + {"watts": 800, "minutes": 404}, + {"watts": 900, "minutes": 356}, + {"watts": 1000, "minutes": 318}, + {"watts": 1200, "minutes": 262}, + {"watts": 1400, "minutes": 221}, + {"watts": 1600, "minutes": 191}, + {"watts": 1800, "minutes": 168}, + {"watts": 1980, "minutes": 151} + ] + }, + { + "external_battery_count": 6, + "label": "1 UPS + 6 external battery modules", + "runtime_points_minutes": [ + {"watts": 50, "minutes": 7355}, + {"watts": 100, "minutes": 4440}, + {"watts": 200, "minutes": 2424}, + {"watts": 300, "minutes": 1646}, + {"watts": 400, "minutes": 1237}, + {"watts": 500, "minutes": 986}, + {"watts": 600, "minutes": 817}, + {"watts": 700, "minutes": 696}, + {"watts": 800, "minutes": 605}, + {"watts": 900, "minutes": 534}, + {"watts": 1000, "minutes": 478}, + {"watts": 1200, "minutes": 393}, + {"watts": 1400, "minutes": 
333}, + {"watts": 1600, "minutes": 288}, + {"watts": 1800, "minutes": 254}, + {"watts": 1980, "minutes": 228} + ] + }, + { + "external_battery_count": 8, + "label": "1 UPS + 8 external battery modules", + "runtime_points_minutes": [ + {"watts": 50, "minutes": 9854}, + {"watts": 100, "minutes": 5950}, + {"watts": 200, "minutes": 3249}, + {"watts": 300, "minutes": 2206}, + {"watts": 400, "minutes": 1658}, + {"watts": 500, "minutes": 1322}, + {"watts": 600, "minutes": 1096}, + {"watts": 700, "minutes": 934}, + {"watts": 800, "minutes": 812}, + {"watts": 900, "minutes": 717}, + {"watts": 1000, "minutes": 642}, + {"watts": 1200, "minutes": 528}, + {"watts": 1400, "minutes": 448}, + {"watts": 1600, "minutes": 388}, + {"watts": 1800, "minutes": 341}, + {"watts": 1980, "minutes": 308} + ] + }, + { + "external_battery_count": 10, + "label": "1 UPS + 10 external battery modules", + "runtime_points_minutes": [ + {"watts": 50, "minutes": 12408}, + {"watts": 100, "minutes": 7492}, + {"watts": 200, "minutes": 4092}, + {"watts": 300, "minutes": 2779}, + {"watts": 400, "minutes": 2089}, + {"watts": 500, "minutes": 1666}, + {"watts": 600, "minutes": 1381}, + {"watts": 700, "minutes": 1177}, + {"watts": 800, "minutes": 1024}, + {"watts": 900, "minutes": 904}, + {"watts": 1000, "minutes": 809}, + {"watts": 1200, "minutes": 666}, + {"watts": 1400, "minutes": 565}, + {"watts": 1600, "minutes": 490}, + {"watts": 1800, "minutes": 431}, + {"watts": 1980, "minutes": 389} + ] + } + ] + } + } +} diff --git a/tests/test_report.py b/tests/test_report.py index d3fb74b..edb0ea1 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -35,6 +35,7 @@ def test_toc_entries_link_to_report_sections(self, report_html): assert 'class="toc-link" href="#executive-summary"' in report_html assert 'class="toc-link" href="#network-overview"' in report_html assert 'class="toc-link" href="#config-coverage"' in report_html + assert 'class="toc-link" href="#ups-runtime"' in report_html assert 
'class="toc-link" href="#switch-deep-dive"' in report_html assert 'class="toc-link" href="#unifi-comparison"' in report_html assert 'class="toc-link" href="#vlan-reference"' in report_html @@ -211,6 +212,41 @@ def test_poe_analysis_uses_catalog_budget_and_switch_labels(self, tmp_path): assert "Core-SW-1 (Q2SW-TEST-0001)" in html assert "327.5 W" in html + def test_ups_runtime_planning_uses_poe_and_apc_reference(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "poe_power_summary.json").write_text( + json.dumps( + { + "switch_poe_totals": [ + { + "serial": "Q2SW-TEST-0001", + "avgWatts": 42.5, + "powerUsageInWh": 1020, + } + ], + "port_poe_totals": [], + } + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "UPS Test") + assert "Battery Backup Runtime Planning" in html + assert "UPS Runtime Estimate by Switch" in html + assert "BX1500M ETA" in html + assert "SMX2200RMLV2U" in html + assert "Core-SW-1 (Q2SW-TEST-0001)" in html + assert "97.5 W" in html + assert "1 UPS + 1 external battery module" in html + assert "$3,487.04" in html + def test_expanded_hardware_catalog_renders_catalyst_poe_budget(self, tmp_path): from reporting.app import build_org_report From 5b2962aaa19cd5bfc8e5a10887cb9a7e8ba5c019 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 12:35:26 -0400 Subject: [PATCH 03/47] Add dedicated AP spectrum report --- reporting/app.py | 51 ++++++ reporting/html_shell.py | 21 +++ reporting/sections.py | 341 ++++++++++++++++++++++++++++++++++++++++ tests/test_pipeline.py | 5 + tests/test_report.py | 63 ++++++++ 5 files changed, 481 insertions(+) diff --git a/reporting/app.py b/reporting/app.py index 3c5ac0f..f1fa249 100644 --- a/reporting/app.py +++ b/reporting/app.py @@ -32,6 +32,7 @@ _build_appliance_policy_section, _build_budget_forecast_section, 
_build_config_coverage_section, + _build_ap_spectrum_report, _build_switch_detail_section, _build_wan_capacity_section, _is_low_speed_link, @@ -344,6 +345,40 @@ def generate_org_reports( html_targets.extend([latest_backup_html_alias, latest_backup_html_compat]) _cleanup_paths(tuple(path for path in html_targets if path)) + ap_spectrum_body = build_org_report(source_dir, org_name, report_kind="ap_spectrum") + ap_spectrum_html = build_html(f"{org_name} — AP Spectrum & Interference Report", ap_spectrum_body) + ap_spectrum_html_path = os.path.join(output_dir, f"{_slug}_{_stamp}_ap_spectrum_report.html") + ap_spectrum_pdf_path = os.path.join(output_dir, f"{_slug}_{_stamp}_ap_spectrum_report.pdf") + ap_spectrum_named_html_alias = os.path.join(output_dir, _dated_report_name(org_name, "AP_Spectrum", _run_ts, "html")) + ap_spectrum_named_pdf_alias = os.path.join(output_dir, _dated_report_name(org_name, "AP_Spectrum", _run_ts, "pdf")) + ap_spectrum_html_alias = os.path.join(output_dir, "report_ap_spectrum.html") + ap_spectrum_pdf_alias = os.path.join(output_dir, "report_ap_spectrum.pdf") + if latest_dir: + ap_spectrum_html_path = ap_spectrum_named_html_alias + ap_spectrum_pdf_path = ap_spectrum_named_pdf_alias + ap_spectrum_html_alias = None + ap_spectrum_pdf_alias = None + latest_ap_spectrum_html_alias = os.path.join(latest_dir, _dated_report_name(org_name, "AP_Spectrum", _run_ts, "html")) if latest_dir else None + latest_ap_spectrum_pdf_alias = os.path.join(latest_dir, _dated_report_name(org_name, "AP_Spectrum", _run_ts, "pdf")) if latest_dir else None + latest_ap_spectrum_html_compat = os.path.join(latest_dir, "report_ap_spectrum.html") if latest_dir else None + latest_ap_spectrum_pdf_compat = os.path.join(latest_dir, "report_ap_spectrum.pdf") if latest_dir else None + _write_text_aliases(ap_spectrum_html, (ap_spectrum_html_path, ap_spectrum_named_html_alias, ap_spectrum_html_alias)) + if latest_dir: + _write_text_aliases(ap_spectrum_html, (latest_ap_spectrum_html_alias, 
latest_ap_spectrum_html_compat)) + ap_spectrum_pdf_ok = write_pdf(ap_spectrum_html_path, ap_spectrum_pdf_path) + if ap_spectrum_pdf_ok: + _copy_existing(ap_spectrum_pdf_path, (ap_spectrum_named_pdf_alias, ap_spectrum_pdf_alias)) + if latest_dir: + _copy_existing(ap_spectrum_pdf_path, (latest_ap_spectrum_pdf_alias, latest_ap_spectrum_pdf_compat)) + log.info("AP Spectrum PDF → %s", ap_spectrum_named_pdf_alias) + else: + log.info("AP Spectrum HTML → %s (no PDF tool found)", ap_spectrum_html_path) + if not keep_html and ap_spectrum_pdf_ok: + html_targets = [ap_spectrum_html_path, ap_spectrum_named_html_alias, ap_spectrum_html_alias] + if latest_dir: + html_targets.extend([latest_ap_spectrum_html_alias, latest_ap_spectrum_html_compat]) + _cleanup_paths(tuple(path for path in html_targets if path)) + return 1 def build_org_report( @@ -1121,6 +1156,12 @@ def _build_switch_summary_for_main_report() -> str: wireless_stats, switch_port_statuses_by_switch, ) + ap_spectrum_html = _build_ap_spectrum_report( + devices_by_network, + channel_util, + wireless_stats, + rf_profiles, + ) config_coverage_html = _build_config_coverage_section(org_dir, networks) budget_forecast_html = _build_budget_forecast_section(inventory_summary, pricing_payload) wan_capacity_html = _build_wan_capacity_section( @@ -1199,6 +1240,7 @@ def _build_switch_summary_for_main_report() -> str: complete_report_name = _dated_report_name(org_name, "Complete", _now, "pdf") executive_report_name = _dated_report_name(org_name, "Executive_Summary", _now, "pdf") backup_report_name = _dated_report_name(org_name, "Backup_Settings", _now, "pdf") + ap_spectrum_report_name = _dated_report_name(org_name, "AP_Spectrum", _now, "pdf") report_guide_html = f"""
    @@ -1220,6 +1262,11 @@ def _build_switch_summary_for_main_report() -> str:
    Backup Settings
    +
    +
    Wireless RF
    +
    AP Spectrum
    + +
    Full Context
    Complete Report
    @@ -1231,6 +1278,7 @@ def _build_switch_summary_for_main_report() -> str: Leadership / FinanceExecutive Summary, Recommendations, Hardware Cost & Refresh PlanShows the largest risks, renewal/refresh pressure, and recommended timing without port-level detail. IT OperationsInventory, topology, client analysis, and switch summaryConnects device inventory, site layout, clients, and operational symptoms. + Wireless / Refresh PlanningAP Spectrum ReportProvides one AP page per unit with RF bubble, overlap candidates, transmit-power context, and replacement planning notes. Security / ComplianceSecurity Baseline, MX Firewall/Filtering Policy Backup, CIS 8 Controls, Configuration CoverageShows control posture and the exact backup evidence available for audit review. Implementation TeamBackup Settings ReportContains the detailed port/configuration appendix that supports remediation work. @@ -4389,6 +4437,7 @@ def _phase_amount(*categories: str, field: str = "hardware") -> int: + end_report_html ) exec_body = cover_html + _schema_banner + exec_html + report_guide_html + end_report_html + ap_spectrum_body = cover_html + _schema_banner + ap_spectrum_html + end_report_html backup_body = ( cover_html + _schema_banner @@ -4405,6 +4454,8 @@ def _phase_amount(*categories: str, field: str = "hardware") -> int: if report_kind == "exec": return exec_body + if report_kind in {"ap_spectrum", "ap-spectrum", "ap_interference"}: + return ap_spectrum_body if report_kind == "backup": return backup_body return full_body diff --git a/reporting/html_shell.py b/reporting/html_shell.py index 44fe84d..576c098 100644 --- a/reporting/html_shell.py +++ b/reporting/html_shell.py @@ -1229,6 +1229,27 @@ def build_html(doc_title: str, body: str) -> str: padding: 3px 6px; }} }} + .ap-unit-page {{ + page-break-before: always; + break-before: page; + }} + .ap-unit-page h2 {{ + font-size: 22px; + margin-bottom: 8px; + }} + .ap-unit-page .kpi-value {{ + font-size: 15px; + line-height: 1.25; + }} + @media 
print {{ + .ap-unit-page {{ + min-height: 92vh; + }} + .ap-unit-page table.data.dense th, + .ap-unit-page table.data.dense td {{ + padding: 3px 6px; + }} + }} /* ===================================================== NETWORK TOPOLOGY ===================================================== */ diff --git a/reporting/sections.py b/reporting/sections.py index 19adef0..835c269 100644 --- a/reporting/sections.py +++ b/reporting/sections.py @@ -722,6 +722,347 @@ def _build_ap_interference_section( """ +def _build_ap_spectrum_report( + devices_by_network: Dict[str, Dict[str, Any]], + channel_util: Any, + wireless_stats: Dict[str, Any], + rf_profiles: Any, +) -> str: + def _band_stats(row: Dict[str, Any]) -> Dict[str, Dict[str, float]]: + bands: Dict[str, Dict[str, float]] = {} + for band in row.get("byBand") or []: + if not isinstance(band, dict): + continue + band_key = str(band.get("band") or "?") + bands[band_key] = { + "wifi": float(((band.get("wifi") or {}).get("percentage")) or 0), + "non_wifi": float(((band.get("nonWifi") or {}).get("percentage")) or 0), + "total": float(((band.get("total") or {}).get("percentage")) or 0), + } + return bands + + def _bubble(stats: Dict[str, float] | None) -> Tuple[str, str]: + if not stats: + return ("No telemetry", "check-warning") + wifi = stats.get("wifi", 0.0) + total = stats.get("total", 0.0) + non_wifi = stats.get("non_wifi", 0.0) + if wifi >= 55 or total >= 75: + return ("WAY TOO CLOSE / saturated RF bubble", "check-fail") + if wifi >= 40 or total >= 60: + return ("Too close / co-channel pressure", "check-fail") + if wifi >= 25 or total >= 45 or non_wifi >= 15: + return ("Tight bubble / tune placement", "check-warning") + if wifi >= 10 or total >= 25: + return ("Within range / acceptable overlap", "check-pass") + return ("Clean bubble / no overlap symptom", "check-pass") + + def _power_context(net_id: str, band: str) -> str: + band_map = { + "2.4": "twoFourGhzSettings", + "5": "fiveGhzSettings", + "6": "sixGhzSettings", + } + 
+        profiles = rf_profiles.get(net_id) if isinstance(rf_profiles, dict) else None
+        if not isinstance(profiles, list) or not profiles:
+            return "RF profile power not available"
+        field = band_map.get(str(band))
+        values = []
+        names = []
+        for profile in profiles:
+            if not isinstance(profile, dict):
+                continue
+            settings = profile.get(field) if field else None
+            if isinstance(settings, dict):
+                min_power = settings.get("minPower")
+                max_power = settings.get("maxPower")
+                if isinstance(min_power, (int, float)) or isinstance(max_power, (int, float)):
+                    values.append((min_power, max_power))
+                    if profile.get("name"):
+                        names.append(str(profile.get("name")))
+        if not values:
+            return "RF profile power not available"
+        min_values = [float(v[0]) for v in values if isinstance(v[0], (int, float))]
+        max_values = [float(v[1]) for v in values if isinstance(v[1], (int, float))]
+        min_text = f"{min(min_values):.0f}-{max(min_values):.0f} dBm min" if min_values else "min n/a"
+        max_text = f"{min(max_values):.0f}-{max(max_values):.0f} dBm max" if max_values else "max n/a"
+        cap_note = ""
+        if max_values and max(max_values) <= 17:
+            cap_note = "; low power ceiling"
+        elif max_values and max(max_values) <= 22:
+            cap_note = "; moderate power ceiling"
+        elif max_values:
+            cap_note = "; high power ceiling"
+        profile_note = f" across {len(values)} RF profile(s)"
+        if names:
+            profile_note += f": {', '.join(names[:2])}{'…' if len(names) > 2 else ''}"
+        return f"{min_text}; {max_text}{cap_note}{profile_note}"
+
+    def _client_stats(serial: str, net_id: str) -> Dict[str, int]:
+        for item in (wireless_stats.get(net_id, []) if isinstance(wireless_stats, dict) else []):
+            if isinstance(item, dict) and item.get("serial") == serial:
+                conn = item.get("connectionStats") or {}
+                return {
+                    "assoc": int(conn.get("assoc") or 0),
+                    "auth": int(conn.get("auth") or 0),
+                    "success": int(conn.get("success") or 0),
+                }
+        return {"assoc": 0, "auth": 0, "success": 0}
+
+    # Guard the iterable itself: an isinstance(channel_util, list) check inside the
+    # comprehension condition would still crash iteration on a None/non-list payload.
+    util_by_serial = {
+        row.get("serial"): row
+        for row in (channel_util if isinstance(channel_util, list) else [])
+        if isinstance(row, dict) and row.get("serial")
+    }
+    ap_records: List[Dict[str, Any]] = []
+    seen: set[str] = set()
+    for net_id, net_data in devices_by_network.items():
+        for dev in net_data.get("devices", []):
+            if not isinstance(dev, dict) or dev.get("productType") != "wireless":
+                continue
+            serial = str(dev.get("serial") or "")
+            if not serial:
+                continue
+            seen.add(serial)
+            util = util_by_serial.get(serial) or {}
+            bands = _band_stats(util) if util else {}
+            worst_band = ""
+            worst_stats: Dict[str, float] | None = None
+            if bands:
+                worst_band, worst_stats = max(
+                    bands.items(),
+                    key=lambda item: (
+                        item[1].get("wifi", 0.0),
+                        item[1].get("total", 0.0),
+                        item[1].get("non_wifi", 0.0),
+                    ),
+                )
+            bubble_label, bubble_cls = _bubble(worst_stats)
+            clients = _client_stats(serial, net_id)
+            ap_records.append(
+                {
+                    "site": net_data.get("name") or "Unassigned",
+                    "network_id": net_id,
+                    "name": dev.get("name") or serial,
+                    "serial": serial,
+                    "model": dev.get("model") or "",
+                    "status": dev.get("status") or "unknown",
+                    "bands": bands,
+                    "worst_band": worst_band,
+                    "worst_stats": worst_stats,
+                    "bubble": bubble_label,
+                    "bubble_cls": bubble_cls,
+                    "clients": clients,
+                }
+            )
+
+    for serial, util in util_by_serial.items():
+        if serial in seen:
+            continue
+        net_id = (util.get("network") or {}).get("id") or "unassigned"
+        net_data = devices_by_network.get(net_id, {"name": "Unassigned"})
+        bands = _band_stats(util)
+        worst_band, worst_stats = ("", None)
+        if bands:
+            worst_band, worst_stats = max(
+                bands.items(),
+                key=lambda item: (
+                    item[1].get("wifi", 0.0),
+                    item[1].get("total", 0.0),
+                    item[1].get("non_wifi", 0.0),
+                ),
+            )
+        bubble_label, bubble_cls = _bubble(worst_stats)
+        ap_records.append(
+            {
+                "site": net_data.get("name") or "Unassigned",
+                "network_id": net_id,
+                "name": serial,
+                "serial": serial,
+                "model": "",
+                "status": "unknown",
+                "bands": bands,
+                "worst_band": worst_band,
+                "worst_stats": worst_stats,
+                "bubble": bubble_label,
+                "bubble_cls": bubble_cls,
+                "clients": _client_stats(serial, net_id),
+            }
+        )
+
+    if not ap_records:
+        return """
+
    +

    AP Spectrum Availability & Interference Report

    +
    No wireless AP inventory was available for a dedicated RF report.
    +
    + """ + + with_telemetry = [ap for ap in ap_records if ap["bands"]] + high_pressure = [ap for ap in with_telemetry if "Too close" in ap["bubble"] or "WAY TOO CLOSE" in ap["bubble"]] + tight_pressure = [ap for ap in with_telemetry if "Tight" in ap["bubble"]] + no_telemetry = [ap for ap in ap_records if not ap["bands"]] + site_counts: Dict[str, Dict[str, int]] = {} + for ap in ap_records: + site = site_counts.setdefault(ap["site"], {"aps": 0, "high": 0, "tight": 0, "missing": 0}) + site["aps"] += 1 + if ap in high_pressure: + site["high"] += 1 + if ap in tight_pressure: + site["tight"] += 1 + if not ap["bands"]: + site["missing"] += 1 + + site_rows = "".join( + "" + f"{_he(site)}" + f"{counts['aps']}" + f"{counts['high']}" + f"{counts['tight']}" + f"{counts['missing']}" + "" + for site, counts in sorted(site_counts.items()) + ) + + def _candidate_rows(ap: Dict[str, Any]) -> str: + band = ap["worst_band"] + candidates = [] + for other in ap_records: + if other["serial"] == ap["serial"] or other["network_id"] != ap["network_id"] or not band: + continue + stats = other["bands"].get(band) + if not stats: + continue + bubble, cls = _bubble(stats) + candidates.append((stats.get("wifi", 0.0) + stats.get("total", 0.0), other, stats, bubble, cls)) + candidates.sort(key=lambda item: (-item[0], item[1]["name"])) + if not candidates: + return 'No same-site AP telemetry candidates were available for this affected band.' + rows = [] + for _, other, stats, bubble, cls in candidates[:6]: + rows.append( + "" + f"{_he(other['name'])}
    {_he(other['serial'])}" + f"{_he(other['model'] or 'Unknown')}" + f"{_he(band)} GHz" + f"{stats['wifi']:.1f}% Wi-Fi / {stats['total']:.1f}% total" + f"{_he(bubble)}" + f"{_he('Likely overlap candidate' if 'Too close' in bubble or 'WAY TOO CLOSE' in bubble else 'Within same RF domain; verify on floor plan')}" + "" + ) + return "".join(rows) + + def _band_rows(ap: Dict[str, Any]) -> str: + if not ap["bands"]: + return 'No per-band channel utilization was returned for this AP.' + rows = [] + for band, stats in sorted(ap["bands"].items(), key=lambda item: item[0]): + bubble, cls = _bubble(stats) + rows.append( + "" + f"{_he(band)} GHz" + f"{stats['wifi']:.1f}%" + f"{stats['non_wifi']:.1f}%" + f"{stats['total']:.1f}%" + f"{_he(bubble)}" + f"{_he(_power_context(ap['network_id'], band))}" + "" + ) + return "".join(rows) + + def _recommendation(ap: Dict[str, Any]) -> str: + stats = ap["worst_stats"] or {} + power = _power_context(ap["network_id"], ap["worst_band"]) + if "WAY TOO CLOSE" in ap["bubble"]: + return ( + "Treat this as a high-priority RF density problem. If the floor plan confirms " + "another AP is physically close, remove, disable, or relocate one AP before " + "adding replacement Wi-Fi 6/7 hardware. " + + power + ) + if "Too close" in ap["bubble"]: + return ( + "Review nearby AP placement, channel reuse, and transmit power. If this AP is already " + "running under a reduced power profile, removal or relocation is more likely to help " + "than increasing power. " + + power + ) + if stats.get("non_wifi", 0.0) >= 15: + return "Inspect for non-Wi-Fi noise sources near this AP before replacing hardware. New APs will still share the same noisy spectrum." + if not ap["bands"]: + return "Re-run the backup after the AP is online and reporting channel utilization; no RF decision should be made from missing telemetry alone." + return "No immediate removal recommendation from current telemetry. 
Keep this AP in the upgrade plan unless the floor plan shows unnecessary overlap." + + ap_pages = [] + for ap in sorted( + ap_records, + key=lambda item: ( + {"check-fail": 0, "check-warning": 1, "check-pass": 2}.get(item["bubble_cls"], 3), + item["site"], + item["name"], + ), + ): + stats = ap["worst_stats"] or {} + clients = ap["clients"] + ap_pages.append( + f""" +
    +

    {_he(ap['name'])}

    +

    {_he(ap['site'])}  |  {_he(ap['serial'])}  |  {_he(ap['model'] or 'Unknown model')}  |  status: {_he(ap['status'])}

    +
    +
    RF Bubble
    {_he(ap['bubble'])}
    Inferred from airtime telemetry
    +
    Worst Band
    {_he((ap['worst_band'] + ' GHz') if ap['worst_band'] else 'No data')}
    {stats.get('total', 0.0):.1f}% total utilization
    +
    Wi-Fi Airtime
    {stats.get('wifi', 0.0):.1f}%
    Co-channel / neighbor pressure signal
    +
    Client Events
    {clients['assoc']} assoc
    {clients['auth']} auth / {clients['success']} success
    +
    +
    +
    Assessment
    +
    + This page estimates AP-to-AP overlap from Meraki channel-utilization data. It does not claim measured physical distance; "too close" means the AP's RF bubble is showing airtime contention that commonly occurs when nearby APs, channels, or power levels overlap too aggressively. +
    +
    + + + {_band_rows(ap)} +
    BandWi-FiNon-Wi-FiTotalBubbleTransmit Power Context
    +

    Suspected Overlap Candidates

    + + + {_candidate_rows(ap)} +
    Nearby AP CandidateModelBandCandidate AirtimeBubbleContext
    +
    +
    Recommendation
    +
    {_he(_recommendation(ap))}
    +
    +
    + """ + ) + + return f""" +
    +

    AP Spectrum Availability & Interference Report

    +

    This dedicated RF report is designed for wireless refresh planning. It identifies APs whose spectrum is clean, APs that are merely within useful range of other radios, and APs whose airtime suggests tight or excessive overlap. Excessive overlap can reduce throughput, increase retries, slow roaming, and make a Wi-Fi 6/7 replacement look worse than it should if density and power are not corrected first.

    +
    +
    AP Pages
    {len(ap_records)}
    One page per AP unit
    +
    RF Telemetry
    {len(with_telemetry)}
    APs with channel utilization
    +
    Too Close
    {len(high_pressure)}
    High co-channel pressure
    +
    Missing Data
    {len(no_telemetry)}
    Offline/dormant/no channel data
    +
    + + + {site_rows} +
    SiteAPsToo CloseTight BubbleNo Telemetry
    +
    +
    How To Read The Bubble Scale
    +
    + Clean bubble means no current overlap symptom. Within range means normal overlap for roaming. Tight bubble means tune channel/power/placement. Too close and WAY TOO CLOSE mean the RF domain should be reviewed before adding or replacing APs; removal, relocation, lower power, or channel-width changes may be better than a one-for-one replacement. +
    +
    +
    + {''.join(ap_pages)} + """ + + def _build_wan_capacity_section( uplink_statuses: Any, appliance_uplinks_usage: Any, diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py index 8634baa..de79593 100644 --- a/tests/test_pipeline.py +++ b/tests/test_pipeline.py @@ -274,8 +274,10 @@ def fake_write_pdf(html_path, pdf_path): "--fixed-now", "2026-05-02T21:30:00", ]) == 0 assert (output / "Demo_Org_Complete_Report_2026-05-02.pdf").exists() + assert (output / "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf").exists() assert (output / "Demo_Org_2026-05-02_2130_report.pdf").exists() assert (output / "report.pdf").exists() + assert (output / "report_ap_spectrum.pdf").exists() def test_reports_dir_writes_run_and_latest_without_html_when_pdf_only(self, monkeypatch, tmp_path): from reporting import app @@ -303,8 +305,11 @@ def fake_write_pdf(html_path, pdf_path): run_dir = reports / "Demo_Org" / "2026-05-02_2130" latest_dir = reports / "latest" / "Demo_Org" assert (run_dir / "Demo_Org_Complete_Report_2026-05-02.pdf").exists() + assert (run_dir / "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf").exists() assert (latest_dir / "Demo_Org_Complete_Report_2026-05-02.pdf").exists() + assert (latest_dir / "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf").exists() assert (latest_dir / "report.pdf").exists() + assert (latest_dir / "report_ap_spectrum.pdf").exists() assert not (run_dir / "report.pdf").exists() assert not (run_dir / "Demo_Org_2026-05-02_2130_report.pdf").exists() assert not list(run_dir.glob("*.html")) diff --git a/tests/test_report.py b/tests/test_report.py index edb0ea1..00b4e1d 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -753,6 +753,69 @@ def test_backup_settings_report_variant(self): assert "Network Overview" in html assert "Executive Summary" not in html + def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst 
= tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "channel_utilization_by_device.json").write_text( + json.dumps( + [ + { + "serial": "Q2AP-TEST-0001", + "network": {"id": "N_test_001"}, + "byBand": [ + { + "band": "5", + "wifi": {"percentage": 62}, + "nonWifi": {"percentage": 2}, + "total": {"percentage": 78}, + } + ], + }, + { + "serial": "Q2AP-TEST-0002", + "network": {"id": "N_test_001"}, + "byBand": [ + { + "band": "5", + "wifi": {"percentage": 44}, + "nonWifi": {"percentage": 1}, + "total": {"percentage": 61}, + } + ], + }, + ] + ), + encoding="utf-8", + ) + (tmp_path / "wireless_rf_profiles.json").write_text( + json.dumps( + { + "N_test_001": [ + { + "name": "Classroom Low Power", + "fiveGhzSettings": {"minPower": 8, "maxPower": 17}, + } + ] + } + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "AP Spectrum Test", report_kind="ap_spectrum") + assert "AP Spectrum Availability & Interference Report" in html + assert html.count("ap-unit-page") >= 2 + assert "WAY TOO CLOSE / saturated RF bubble" in html + assert "Suspected Overlap Candidates" in html + assert "Classroom Low Power" in html + assert "remove, disable, or relocate one AP" in html + assert "Executive Summary" not in html + def test_dated_complete_report_filename(self): from reporting.app import _dated_report_name filename = _dated_report_name( From 503df592ad6ea08698dd868f2598c5d3e69c0386 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 12:43:09 -0400 Subject: [PATCH 04/47] Add buffered UPS power planning JSON --- reporting/app.py | 497 +++++++++++++++++++++++++++++++---------- tests/test_pipeline.py | 6 + tests/test_report.py | 41 ++++ 3 files changed, 424 insertions(+), 120 deletions(-) diff --git a/reporting/app.py b/reporting/app.py index f1fa249..4fc9b48 100644 --- a/reporting/app.py +++ b/reporting/app.py @@ -1,5 +1,6 @@ #!/usr/bin/env python3 import argparse +import json import logging import math import os @@ -59,6 +60,7 
@@
     "reference",
     "ups_runtime_reference.json",
 )
+UPS_LOAD_BUFFER_RATIO = 0.10


 def _report_slug(name: str) -> str:
@@ -168,6 +170,319 @@ def _interpolate_runtime_minutes(points: Any, watts: float) -> float | None:
     return None
+
+def _catalog_poe_budget(hardware_catalog: Dict[str, Any], model: str) -> int | float | None:
+    models = (
+        hardware_catalog.get("models")
+        if isinstance(hardware_catalog, dict) and isinstance(hardware_catalog.get("models"), dict)
+        else {}
+    )
+    ref = models.get(model) or {}
+    budget = ref.get("poeBudgetWatts") if isinstance(ref, dict) else None
+    return budget if isinstance(budget, (int, float)) else None
+
+
+def _estimated_switch_base_watts(model: str, ups_assumptions: Dict[str, Any]) -> tuple[float, str]:
+    prefixes = (
+        ups_assumptions.get("model_prefixes")
+        if isinstance(ups_assumptions, dict) and isinstance(ups_assumptions.get("model_prefixes"), dict)
+        else {}
+    )
+    for prefix, watts in sorted(prefixes.items(), key=lambda item: len(str(item[0])), reverse=True):
+        if model.startswith(str(prefix)) and isinstance(watts, (int, float)):
+            return float(watts), f"model prefix {prefix}"
+
+    fallback = (
+        ups_assumptions.get("fallback_by_port_count")
+        if isinstance(ups_assumptions, dict) and isinstance(ups_assumptions.get("fallback_by_port_count"), dict)
+        else {}
+    )
+    for port_count in ("48", "24", "8"):
+        if re.search(rf"(?:^|[-_]){port_count}(?:[A-Z-]|$)", model):
+            value = fallback.get(port_count)
+            if isinstance(value, (int, float)):
+                return float(value), f"port-count fallback {port_count}"
+    value = fallback.get("default")
+    if isinstance(value, (int, float)):
+        return float(value), "default fallback"
+    return 75.0, "built-in default fallback"
+
+
+def _ups_runtime(product: Dict[str, Any], watts: float) -> float | None:
+    max_watts = product.get("max_watts") if isinstance(product, dict) else None
+    if isinstance(max_watts, (int, float)) and watts > float(max_watts):
+        return None
+    return _interpolate_runtime_minutes(product.get("runtime_points_minutes"), watts)
+
+
+def _smx_runtime_config(smx_ref: Dict[str, Any], config: Dict[str, Any], watts: float) -> float | None:
+    max_watts = smx_ref.get("max_watts") if isinstance(smx_ref, dict) else None
+    if isinstance(max_watts, (int, float)) and watts > float(max_watts):
+        return None
+    return _interpolate_runtime_minutes(config.get("runtime_points_minutes"), watts)
+
+
+def _smx_stack_cost(smx_ref: Dict[str, Any], external_count: int) -> float | None:
+    unit = smx_ref.get("unit_cost") if isinstance(smx_ref, dict) else None
+    ext = smx_ref.get("external_battery_unit_cost") if isinstance(smx_ref, dict) else None
+    if not isinstance(unit, (int, float)) or not isinstance(ext, (int, float)):
+        return None
+    return float(unit) + (external_count * float(ext))
+
+
+def _round_watts(value: float) -> float:
+    return math.ceil(float(value) * 10) / 10
+
+
+def _build_ups_power_plan(
+    org_name: str,
+    switch_devices: List[Dict[str, Any]],
+    poe_by_serial: Dict[str, Dict[str, Any]],
+    ups_payload: Dict[str, Any],
+    hardware_catalog: Dict[str, Any],
+    run_ts: datetime,
+) -> Dict[str, Any]:
+    ups_meta = ups_payload.get("meta") if isinstance(ups_payload, dict) else {}
+    ups_products = ups_payload.get("products") if isinstance(ups_payload, dict) else {}
+    ups_assumptions = (
+        ups_payload.get("switch_load_assumptions") if isinstance(ups_payload, dict) else {}
+    )
+    target_hours = (
+        float(ups_meta.get("target_runtime_hours"))
+        if isinstance(ups_meta, dict) and isinstance(ups_meta.get("target_runtime_hours"), (int, float))
+        else 10.0
+    )
+    target_minutes = target_hours * 60
+    bx_ref = ups_products.get("BX1500M") if isinstance(ups_products, dict) else {}
+    smx_ref = ups_products.get("SMX2200RMLV2U") if isinstance(ups_products, dict) else {}
+    runtime_configs = (
+        smx_ref.get("runtime_configurations")
+        if isinstance(smx_ref, dict) and isinstance(smx_ref.get("runtime_configurations"), list)
+        else []
+    )
+    base_config
= next( + ( + config for config in runtime_configs + if isinstance(config, dict) and int(config.get("external_battery_count") or 0) == 0 + ), + {}, + ) + + switches: List[Dict[str, Any]] = [] + for sw in sorted( + switch_devices, + key=lambda d: ( + str((d.get("network") or {}).get("name") or ""), + str(d.get("name") or d.get("model") or d.get("serial") or ""), + ), + ): + serial = str(sw.get("serial") or "") + model = str(sw.get("model") or "") + label = str(sw.get("name") or model or serial or "Unknown switch") + network = sw.get("network") if isinstance(sw.get("network"), dict) else {} + poe_data = poe_by_serial.get(serial, {}) if isinstance(poe_by_serial, dict) else {} + observed_poe = float(poe_data.get("avgWatts", 0) or 0) + chassis_watts, chassis_source = _estimated_switch_base_watts(model, ups_assumptions) + modeled_load = observed_poe + chassis_watts + buffer_watts = modeled_load * UPS_LOAD_BUFFER_RATIO + sizing_load = modeled_load + buffer_watts + + bx_runtime = _ups_runtime(bx_ref, sizing_load) if isinstance(bx_ref, dict) else None + smx_base_runtime = ( + _smx_runtime_config(smx_ref, base_config, sizing_load) + if isinstance(smx_ref, dict) and base_config + else None + ) + target_config: Dict[str, Any] | None = None + target_runtime: float | None = None + for config in sorted( + [c for c in runtime_configs if isinstance(c, dict)], + key=lambda c: int(c.get("external_battery_count") or 0), + ): + runtime = _smx_runtime_config(smx_ref, config, sizing_load) + if runtime is not None and runtime >= target_minutes: + target_config = config + target_runtime = runtime + break + target_external_count = ( + int(target_config.get("external_battery_count") or 0) + if target_config is not None + else None + ) + target_label = ( + str(target_config.get("label") or f"1 UPS + {target_external_count} external battery module(s)") + if target_config is not None + else "No listed stack reaches target" + ) + target_cost = ( + _smx_stack_cost(smx_ref, target_external_count) 
+ if isinstance(target_external_count, int) + else None + ) + switches.append( + { + "siteName": network.get("name") or "Unassigned", + "networkId": network.get("id") or sw.get("networkId"), + "switchName": label, + "serial": serial, + "model": model or "Unknown", + "status": sw.get("status") or "unknown", + "observedPoeAvgWatts": _round_watts(observed_poe), + "observedPoeSource": "poe_power_summary.json avgWatts" if poe_data else "not observed; treated as 0 W", + "chassisEstimateWatts": _round_watts(chassis_watts), + "chassisEstimateSource": chassis_source, + "knownPoeBudgetWatts": _catalog_poe_budget(hardware_catalog, model), + "baseModeledLoadWatts": _round_watts(modeled_load), + "bufferRatio": UPS_LOAD_BUFFER_RATIO, + "bufferWatts": _round_watts(buffer_watts), + "sizingLoadWatts": _round_watts(sizing_load), + "runtimeEstimates": { + "BX1500M": { + "runtimeMinutes": round(bx_runtime, 1) if bx_runtime is not None else None, + "runtimeLabel": _format_runtime_minutes(bx_runtime), + }, + "SMX2200RMLV2UBase": { + "runtimeMinutes": round(smx_base_runtime, 1) if smx_base_runtime is not None else None, + "runtimeLabel": _format_runtime_minutes(smx_base_runtime), + }, + "SMX2200RMLV2UTargetStack": { + "targetRuntimeHours": target_hours, + "label": target_label, + "externalBatteryCount": target_external_count, + "runtimeMinutes": round(target_runtime, 1) if target_runtime is not None else None, + "runtimeLabel": _format_runtime_minutes(target_runtime), + "estimatedCost": round(target_cost, 2) if isinstance(target_cost, (int, float)) else None, + "estimatedCostLabel": _format_money(target_cost), + }, + }, + } + ) + + site_summary: Dict[str, Dict[str, Any]] = {} + for item in switches: + site = site_summary.setdefault( + item["siteName"], + {"switchCount": 0, "totalSizingLoadWatts": 0.0, "maxSizingLoadWatts": 0.0}, + ) + site["switchCount"] += 1 + site["totalSizingLoadWatts"] += float(item["sizingLoadWatts"]) + site["maxSizingLoadWatts"] = max(site["maxSizingLoadWatts"], 
float(item["sizingLoadWatts"])) + for site in site_summary.values(): + site["totalSizingLoadWatts"] = _round_watts(site["totalSizingLoadWatts"]) + site["maxSizingLoadWatts"] = _round_watts(site["maxSizingLoadWatts"]) + + sizing_loads = [float(item["sizingLoadWatts"]) for item in switches] + base_loads = [float(item["baseModeledLoadWatts"]) for item in switches] + return { + "schemaVersion": 1, + "orgName": org_name, + "generatedAt": run_ts.isoformat(), + "sourceFiles": [ + "devices_availabilities.json", + "inventory_devices.json", + "devices_statuses.json", + "poe_power_summary.json", + "reporting/reference/ups_runtime_reference.json", + "reporting/reference/meraki_hardware_catalog.json", + ], + "planningAssumptions": { + "loadBufferRatio": UPS_LOAD_BUFFER_RATIO, + "loadBufferPercent": int(UPS_LOAD_BUFFER_RATIO * 100), + "modeledLoadFormula": "observed Meraki PoE average + switch chassis/base estimate", + "sizingLoadFormula": "modeled load * 1.10", + "targetRuntimeHours": target_hours, + "runtimeInterpolation": "log interpolation across maintained UPS runtime chart points", + }, + "summary": { + "switchCount": len(switches), + "averageBaseModeledLoadWatts": _round_watts(sum(base_loads) / len(base_loads)) if base_loads else 0, + "averageSizingLoadWatts": _round_watts(sum(sizing_loads) / len(sizing_loads)) if sizing_loads else 0, + "maxSizingLoadWatts": _round_watts(max(sizing_loads)) if sizing_loads else 0, + "totalSizingLoadWatts": _round_watts(sum(sizing_loads)) if sizing_loads else 0, + }, + "sites": dict(sorted(site_summary.items())), + "switches": switches, + } + + +def _load_ups_power_plan_from_org(org_dir: str, org_name: str, run_ts: datetime) -> Dict[str, Any]: + devices_avail = load_json(os.path.join(org_dir, "devices_availabilities.json")) or [] + inventory_devices = load_json(os.path.join(org_dir, "inventory_devices.json")) or [] + devices_statuses_raw = load_json(os.path.join(org_dir, "devices_statuses.json")) or [] + networks = 
load_json(os.path.join(org_dir, "networks.json")) or [] + poe_summary = load_json(os.path.join(org_dir, "poe_power_summary.json")) or {} + hardware_catalog = _load_hardware_catalog(org_dir) + ups_payload = _load_ups_payload(org_dir) + network_names = { + n.get("id"): n.get("name", n.get("id", "")) + for n in networks + if isinstance(n, dict) and n.get("id") + } + + metadata_by_serial: Dict[str, Dict[str, Any]] = {} + for source in (inventory_devices, devices_statuses_raw): + if not isinstance(source, list): + continue + for entry in source: + if not isinstance(entry, dict) or not entry.get("serial"): + continue + serial = entry["serial"] + merged = metadata_by_serial.setdefault(serial, {}) + for key in ("name", "model", "sku", "mac", "productType", "networkId", "tags", "lanIp"): + if not merged.get(key) and entry.get(key): + merged[key] = entry[key] + + enriched: List[Dict[str, Any]] = [] + seen: set[str] = set() + for device in devices_avail if isinstance(devices_avail, list) else []: + if not isinstance(device, dict): + continue + serial = device.get("serial") + if serial: + seen.add(serial) + merged = dict(device) + for key, value in metadata_by_serial.get(serial, {}).items(): + if not merged.get(key) and value: + merged[key] = value + net_id = merged.get("networkId") or (merged.get("network") or {}).get("id") + if net_id and not merged.get("network"): + merged["network"] = {"id": net_id, "name": network_names.get(net_id, net_id)} + elif net_id and isinstance(merged.get("network"), dict) and not merged["network"].get("name"): + merged["network"]["name"] = network_names.get(net_id, net_id) + enriched.append(merged) + + for serial, meta in sorted(metadata_by_serial.items()): + if serial in seen: + continue + device = dict(meta) + device["serial"] = serial + device.setdefault("status", "unknown") + net_id = device.get("networkId") + if net_id and not device.get("network"): + device["network"] = {"id": net_id, "name": network_names.get(net_id, net_id)} + elif net_id 
and isinstance(device.get("network"), dict) and not device["network"].get("name"): + device["network"]["name"] = network_names.get(net_id, net_id) + enriched.append(device) + + switch_devices = [ + d for d in enriched + if isinstance(d, dict) and d.get("productType") == "switch" + ] + poe_switches = ( + poe_summary.get("switch_poe_totals", []) + if isinstance(poe_summary, dict) + else [] + ) + poe_by_serial = {s.get("serial", ""): s for s in poe_switches if isinstance(s, dict)} + return _build_ups_power_plan( + org_name, + switch_devices, + poe_by_serial, + ups_payload, + hardware_catalog, + run_ts, + ) + + def _read_org_name(org_dir: str) -> str: name_file = os.path.join(org_dir, "org_name.txt") if os.path.exists(name_file): @@ -194,6 +509,16 @@ def _write_text_aliases(html: str, paths: tuple[str | None, ...]) -> None: f.write(html) +def _write_json_aliases(payload: Dict[str, Any], paths: tuple[str | None, ...]) -> None: + for path in paths: + if not path: + continue + os.makedirs(os.path.dirname(path), exist_ok=True) + with open(path, "w", encoding="utf-8") as f: + json.dump(payload, f, indent=2, sort_keys=True) + f.write("\n") + + def _copy_existing(src: str, destinations: tuple[str | None, ...]) -> None: for dst in destinations: if not dst or os.path.abspath(src) == os.path.abspath(dst): @@ -241,6 +566,19 @@ def generate_org_reports( log.info("Generating report for: %s", org_name) _slug = _report_slug(org_name) _stamp = _run_ts.strftime("%Y-%m-%d_%H%M") + ups_power_plan = _load_ups_power_plan_from_org(source_dir, org_name, _run_ts) + ups_plan_named_json = os.path.join(output_dir, _dated_report_name(org_name, "UPS_Switch_Power_Plan", _run_ts, "json")) + ups_plan_json = os.path.join(output_dir, "ups_switch_power_plan.json") + latest_ups_plan_named_json = ( + os.path.join(latest_dir, _dated_report_name(org_name, "UPS_Switch_Power_Plan", _run_ts, "json")) + if latest_dir + else None + ) + latest_ups_plan_json = os.path.join(latest_dir, "ups_switch_power_plan.json") 
if latest_dir else None + _write_json_aliases( + ups_power_plan, + (ups_plan_named_json, ups_plan_json, latest_ups_plan_named_json, latest_ups_plan_json), + ) body = build_org_report(source_dir, org_name) html = build_html(f"{org_name} — Network Health Report", body) @@ -536,6 +874,8 @@ def _merge_device_metadata() -> List[Dict]: "id": net_id, "name": network_names.get(net_id, net_id), } + elif net_id and isinstance(merged.get("network"), dict) and not merged["network"].get("name"): + merged["network"]["name"] = network_names.get(net_id, net_id) enriched.append(merged) # Keep inventory-only devices visible instead of silently dropping them. @@ -551,6 +891,8 @@ def _merge_device_metadata() -> List[Dict]: "id": net_id, "name": network_names.get(net_id, net_id), } + elif net_id and isinstance(device.get("network"), dict) and not device["network"].get("name"): + device["network"]["name"] = network_names.get(net_id, net_id) enriched.append(device) return enriched @@ -2874,134 +3216,47 @@ def _infer_product(*versions: Any) -> str: # ========================================================= # SECTION 6A: UPS RUNTIME PLANNING # ========================================================= + ups_power_plan = _build_ups_power_plan( + org_name, + switch_devices, + poe_by_serial, + ups_payload, + hardware_catalog, + _now, + ) ups_meta = ups_payload.get("meta") if isinstance(ups_payload, dict) else {} ups_products = ups_payload.get("products") if isinstance(ups_payload, dict) else {} - ups_assumptions = ( - ups_payload.get("switch_load_assumptions") if isinstance(ups_payload, dict) else {} - ) - ups_target_hours = ( - float(ups_meta.get("target_runtime_hours")) - if isinstance(ups_meta, dict) and isinstance(ups_meta.get("target_runtime_hours"), (int, float)) - else 10.0 - ) - ups_target_minutes = ups_target_hours * 60 bx_ref = ups_products.get("BX1500M") if isinstance(ups_products, dict) else {} smx_ref = ups_products.get("SMX2200RMLV2U") if isinstance(ups_products, dict) else {} - 
def _estimated_switch_base_watts(model: str) -> float: - prefixes = ( - ups_assumptions.get("model_prefixes") - if isinstance(ups_assumptions, dict) and isinstance(ups_assumptions.get("model_prefixes"), dict) - else {} - ) - for prefix, watts in sorted(prefixes.items(), key=lambda item: len(str(item[0])), reverse=True): - if model.startswith(str(prefix)) and isinstance(watts, (int, float)): - return float(watts) - - fallback = ( - ups_assumptions.get("fallback_by_port_count") - if isinstance(ups_assumptions, dict) and isinstance(ups_assumptions.get("fallback_by_port_count"), dict) - else {} - ) - for port_count in ("48", "24", "8"): - if re.search(rf"(?:^|[-_]){port_count}(?:[A-Z-]|$)", model): - value = fallback.get(port_count) - if isinstance(value, (int, float)): - return float(value) - value = fallback.get("default") - return float(value) if isinstance(value, (int, float)) else 75.0 - - def _ups_runtime(product: Dict[str, Any], watts: float) -> float | None: - max_watts = product.get("max_watts") if isinstance(product, dict) else None - if isinstance(max_watts, (int, float)) and watts > float(max_watts): - return None - return _interpolate_runtime_minutes(product.get("runtime_points_minutes"), watts) - - def _smx_runtime_config(config: Dict[str, Any], watts: float) -> float | None: - max_watts = smx_ref.get("max_watts") if isinstance(smx_ref, dict) else None - if isinstance(max_watts, (int, float)) and watts > float(max_watts): - return None - return _interpolate_runtime_minutes(config.get("runtime_points_minutes"), watts) - - def _smx_stack_cost(external_count: int) -> float | None: - unit = smx_ref.get("unit_cost") if isinstance(smx_ref, dict) else None - ext = smx_ref.get("external_battery_unit_cost") if isinstance(smx_ref, dict) else None - if not isinstance(unit, (int, float)) or not isinstance(ext, (int, float)): - return None - return float(unit) + (external_count * float(ext)) - ups_rows: List[List[str]] = [] - ups_loads: List[float] = [] - 
runtime_configs = ( - smx_ref.get("runtime_configurations") - if isinstance(smx_ref, dict) and isinstance(smx_ref.get("runtime_configurations"), list) - else [] - ) - for sw in sorted( - switch_devices, - key=lambda d: ( - str((d.get("network") or {}).get("name") or ""), - str(d.get("name") or d.get("model") or d.get("serial") or ""), - ), - ): - serial = str(sw.get("serial") or "") - model = str(sw.get("model") or "") - label = str(sw.get("name") or model or serial or "Unknown switch") - poe_data = poe_by_serial.get(serial, {}) if isinstance(poe_by_serial, dict) else {} - observed_poe = float(poe_data.get("avgWatts", 0) or 0) - base_watts = _estimated_switch_base_watts(model) - modeled_load = observed_poe + base_watts - ups_loads.append(modeled_load) - - bx_runtime = _ups_runtime(bx_ref, modeled_load) if isinstance(bx_ref, dict) else None - base_config = next( - ( - config for config in runtime_configs - if isinstance(config, dict) and int(config.get("external_battery_count") or 0) == 0 - ), - {}, - ) - smx_base_runtime = _smx_runtime_config(base_config, modeled_load) if base_config else None - target_config: Dict[str, Any] | None = None - target_runtime: float | None = None - for config in sorted( - [c for c in runtime_configs if isinstance(c, dict)], - key=lambda c: int(c.get("external_battery_count") or 0), - ): - runtime = _smx_runtime_config(config, modeled_load) - if runtime is not None and runtime >= ups_target_minutes: - target_config = config - target_runtime = runtime - break - target_external_count = ( - int(target_config.get("external_battery_count") or 0) - if target_config is not None - else 0 - ) - target_label = ( - str(target_config.get("label") or f"1 UPS + {target_external_count} external battery module(s)") - if target_config is not None - else "No listed stack reaches target" - ) - if target_config is not None and target_runtime is not None: - target_label = f"{target_label} ({_format_runtime_minutes(target_runtime)})" - + for item in 
ups_power_plan.get("switches", []): + target_stack = (item.get("runtimeEstimates") or {}).get("SMX2200RMLV2UTargetStack") or {} + target_label = str(target_stack.get("label") or "No listed stack reaches target") + if target_stack.get("runtimeLabel"): + target_label = f"{target_label} ({target_stack.get('runtimeLabel')})" ups_rows.append( [ - f"{label} ({serial})" if serial and label != serial else label, - model or "Unknown", - f"{observed_poe:.1f} W", - f"{base_watts:.1f} W", - f"{modeled_load:.1f} W", - _format_runtime_minutes(bx_runtime), - _format_runtime_minutes(smx_base_runtime), + f"{item.get('switchName')} ({item.get('serial')})" + if item.get("serial") and item.get("switchName") != item.get("serial") + else str(item.get("switchName") or item.get("serial") or "Unknown"), + str(item.get("model") or "Unknown"), + f"{float(item.get('observedPoeAvgWatts') or 0):.1f} W", + f"{float(item.get('chassisEstimateWatts') or 0):.1f} W", + f"{float(item.get('baseModeledLoadWatts') or 0):.1f} W", + f"{float(item.get('sizingLoadWatts') or 0):.1f} W", + ((item.get("runtimeEstimates") or {}).get("BX1500M") or {}).get("runtimeLabel", "Over UPS rating"), + ((item.get("runtimeEstimates") or {}).get("SMX2200RMLV2UBase") or {}).get("runtimeLabel", "Over UPS rating"), target_label, - _format_money(_smx_stack_cost(target_external_count) if target_config is not None else None), + target_stack.get("estimatedCostLabel", "Pricing needed"), ] ) - avg_ups_load = sum(ups_loads) / len(ups_loads) if ups_loads else 0 - max_ups_load = max(ups_loads) if ups_loads else 0 + ups_summary = ups_power_plan.get("summary") or {} + ups_assumptions_summary = ups_power_plan.get("planningAssumptions") or {} + ups_target_hours = float(ups_assumptions_summary.get("targetRuntimeHours") or 10) + avg_ups_load = float(ups_summary.get("averageSizingLoadWatts") or 0) + max_ups_load = float(ups_summary.get("maxSizingLoadWatts") or 0) bx_max = bx_ref.get("max_watts") if isinstance(bx_ref, dict) else None smx_max = 
smx_ref.get("max_watts") if isinstance(smx_ref, dict) else None smx_unit = smx_ref.get("unit_cost") if isinstance(smx_ref, dict) else None @@ -3028,12 +3283,12 @@ def _smx_stack_cost(external_count: int) -> float | None:
     One UPS stack per listed switch load
-    Average Modeled Load
+    Average Sizing Load
     {avg_ups_load:.1f} W
-    Observed PoE + chassis estimate
+    Modeled load + 10% buffer
-    Largest Modeled Load
+    Largest Sizing Load
     {max_ups_load:.1f} W
     Used for closet-level sizing checks
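The Average/Largest Sizing Load cells above come from the buffered-load arithmetic this patch introduces (`UPS_LOAD_BUFFER_RATIO` plus the ceiling-to-0.1 W `_round_watts` helper). A minimal standalone sketch of that calculation; the function and variable names are illustrative, and the 55 W chassis figure is inferred from the fixture expectations (42.5 W observed PoE yielding 97.5 W modeled load) rather than taken directly from the reference JSON:

```python
import math

UPS_LOAD_BUFFER_RATIO = 0.10  # 10% planning buffer on top of the modeled load


def round_watts(value: float) -> float:
    # Round *up* to one decimal place so the plan never understates load.
    return math.ceil(float(value) * 10) / 10


def sizing_load(observed_poe_watts: float, chassis_estimate_watts: float) -> dict:
    # Modeled load = observed Meraki PoE average + chassis/base estimate;
    # sizing load = modeled load plus the 10% planning buffer.
    modeled = observed_poe_watts + chassis_estimate_watts
    buffer = modeled * UPS_LOAD_BUFFER_RATIO
    return {
        "baseModeledLoadWatts": round_watts(modeled),
        "bufferWatts": round_watts(buffer),
        "sizingLoadWatts": round_watts(modeled + buffer),
    }


# Fixture-style values: 42.5 W observed PoE + 55 W assumed chassis estimate.
print(sizing_load(42.5, 55.0))
# → {'baseModeledLoadWatts': 97.5, 'bufferWatts': 9.8, 'sizingLoadWatts': 107.3}
```

These are the same 97.5 / 9.8 / 107.3 figures the new `test_ups_power_plan_json_payload_includes_buffered_switch_load` asserts on.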
    @@ -3046,7 +3301,8 @@ def _smx_stack_cost(external_count: int) -> float | None:
    Sizing Method
-    The estimate models each switch as Meraki-observed average PoE draw plus a conservative chassis/base load by switch family. BX1500M is treated as a small single-switch option
+    The estimate models each switch as Meraki-observed average PoE draw plus a conservative chassis/base load by switch family, then applies a 10% planning buffer before sizing UPS runtime.
+    The same data is saved beside the report as ups_switch_power_plan.json for future planning and review. BX1500M is treated as a small single-switch option
     {f"rated to {float(bx_max):g} W" if isinstance(bx_max, (int, float)) else ""}; SMX2200RMLV2U is treated as the rack/tower option
     {f"rated to {float(smx_max):g} W" if isinstance(smx_max, (int, float)) else ""} with
     {smx_ref.get("external_battery_sku", "external battery modules") if isinstance(smx_ref, dict) else "external battery modules"}
     for extended runtime. Runtime varies with battery age, temperature, load mix, and calibration, so these are planning estimates rather than procurement guarantees.
@@ -3063,6 +3319,7 @@ def _smx_stack_cost(external_count: int) -> float | None: "Observed PoE Avg", "Chassis Est.", "Modeled Load", + "Sizing Load (+10%)", "BX1500M ETA", "SMX Base ETA", f"SMX Stack for {ups_target_hours:g}h", diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py index de79593..6405932 100644 --- a/tests/test_pipeline.py +++ b/tests/test_pipeline.py @@ -275,6 +275,8 @@ def fake_write_pdf(html_path, pdf_path): ]) == 0 assert (output / "Demo_Org_Complete_Report_2026-05-02.pdf").exists() assert (output / "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf").exists() + assert (output / "Demo_Org_UPS_Switch_Power_Plan_Report_2026-05-02.json").exists() + assert (output / "ups_switch_power_plan.json").exists() assert (output / "Demo_Org_2026-05-02_2130_report.pdf").exists() assert (output / "report.pdf").exists() assert (output / "report_ap_spectrum.pdf").exists() @@ -306,8 +308,12 @@ def fake_write_pdf(html_path, pdf_path): latest_dir = reports / "latest" / "Demo_Org" assert (run_dir / "Demo_Org_Complete_Report_2026-05-02.pdf").exists() assert (run_dir / "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf").exists() + assert (run_dir / "Demo_Org_UPS_Switch_Power_Plan_Report_2026-05-02.json").exists() + assert (run_dir / "ups_switch_power_plan.json").exists() assert (latest_dir / "Demo_Org_Complete_Report_2026-05-02.pdf").exists() assert (latest_dir / "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf").exists() + assert (latest_dir / "Demo_Org_UPS_Switch_Power_Plan_Report_2026-05-02.json").exists() + assert (latest_dir / "ups_switch_power_plan.json").exists() assert (latest_dir / "report.pdf").exists() assert (latest_dir / "report_ap_spectrum.pdf").exists() assert not (run_dir / "report.pdf").exists() diff --git a/tests/test_report.py b/tests/test_report.py index 00b4e1d..cd68411 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -244,9 +244,50 @@ def test_ups_runtime_planning_uses_poe_and_apc_reference(self, tmp_path): assert "SMX2200RMLV2U" in html assert 
"Core-SW-1 (Q2SW-TEST-0001)" in html assert "97.5 W" in html + assert "107.3 W" in html + assert "10% planning buffer" in html + assert "ups_switch_power_plan.json" in html assert "1 UPS + 1 external battery module" in html assert "$3,487.04" in html + def test_ups_power_plan_json_payload_includes_buffered_switch_load(self, tmp_path): + from reporting.app import _load_ups_power_plan_from_org + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "poe_power_summary.json").write_text( + json.dumps( + { + "switch_poe_totals": [ + { + "serial": "Q2SW-TEST-0001", + "avgWatts": 42.5, + "powerUsageInWh": 1020, + } + ], + "port_poe_totals": [], + } + ), + encoding="utf-8", + ) + + payload = _load_ups_power_plan_from_org( + str(tmp_path), + "UPS Json Test", + datetime.fromisoformat("2026-05-05T12:00:00"), + ) + core = next(item for item in payload["switches"] if item["serial"] == "Q2SW-TEST-0001") + assert payload["planningAssumptions"]["loadBufferPercent"] == 10 + assert core["switchName"] == "Core-SW-1" + assert core["baseModeledLoadWatts"] == 97.5 + assert core["bufferWatts"] == 9.8 + assert core["sizingLoadWatts"] == 107.3 + assert core["runtimeEstimates"]["SMX2200RMLV2UTargetStack"]["externalBatteryCount"] == 1 + def test_expanded_hardware_catalog_renders_catalyst_poe_budget(self, tmp_path): from reporting.app import build_org_report From a15e091b18fc712cdec97f970332d62c717d629b Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 16:23:52 -0400 Subject: [PATCH 05/47] Clarify AP spectrum RF recommendations --- meraki_backup.py | 20 +++++ reporting/app.py | 2 + reporting/sections.py | 199 ++++++++++++++++++++++++++++++++++++++---- tests/test_report.py | 95 +++++++++++++++++++- 4 files changed, 297 insertions(+), 19 deletions(-) diff --git a/meraki_backup.py b/meraki_backup.py index a6df20c..d7f6948 100755 --- a/meraki_backup.py +++ b/meraki_backup.py @@ 
-1140,6 +1140,7 @@ def _cached_safe_get(filename: str, path_suffix: str, label: str, params=None) - wireless_mesh_statuses = {} clients_overview = {} wireless_rf_profiles = {} + wireless_rf_profile_assignments = {} wireless_settings = {} network_clients = {} wireless_clients = {} @@ -1157,6 +1158,24 @@ def _cached_safe_get(filename: str, path_suffix: str, label: str, params=None) - appliances_by_network.setdefault(net_id_for_appliance, []).append(appliance) if networks: log_line(log_f, "INFO", f"Collecting network-level telemetry for {len(networks)} network(s) in {org_name}") + _rf_assign_path = _pf("wireless_rf_profile_assignments.json") + if _cache_is_fresh(_rf_assign_path, max_age_h=max_age_h, force=force): + wireless_rf_profile_assignments = _load_json_file(_rf_assign_path) + log_line(log_f, "INFO", f"Wireless RF profile assignments (cached) for {org_name}") + else: + wireless_rf_profile_assignments, rf_assign_err = safe_paged_get( + f"/organizations/{org_id}/wireless/rfProfiles/assignments/byDevice", + api_key, + params={"productTypes[]": ["wireless"]}, + ) + if rf_assign_err: + level = "INFO" if is_capability_error(rf_assign_err) else "WARN" + log_line(log_f, level, f"Wireless RF profile assignments unavailable for org {org_id}: {rf_assign_err}") + wireless_rf_profile_assignments = {"error": rf_assign_err} + write_json( + _rf_assign_path, + wireless_rf_profile_assignments, + ) for idx, net in enumerate(networks, start=1): net_id = net.get("id") if not net_id: @@ -1427,6 +1446,7 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ write_json(_pf("wireless_mesh_statuses.json"), wireless_mesh_statuses) write_json(_pf("clients_overview.json"), clients_overview) write_json(_pf("wireless_rf_profiles.json"), wireless_rf_profiles) + write_json(_pf("wireless_rf_profile_assignments.json"), wireless_rf_profile_assignments) write_json(_pf("wireless_settings.json"), wireless_settings) write_json(_pf("network_clients.json"), network_clients) 
write_json(_pf("wireless_clients.json"), wireless_clients) diff --git a/reporting/app.py b/reporting/app.py index 4fc9b48..59cf4d9 100644 --- a/reporting/app.py +++ b/reporting/app.py @@ -789,6 +789,7 @@ def _flatten_client_records(raw: Any) -> List[Dict[str, Any]]: clients_overview_raw = load_json(os.path.join(org_dir, "clients_overview.json")) or {} licensing_data = load_json(os.path.join(org_dir, "licensing.json")) or {} rf_profiles = load_json(os.path.join(org_dir, "wireless_rf_profiles.json")) or {} + rf_profile_assignments = load_json(os.path.join(org_dir, "wireless_rf_profile_assignments.json")) or {} inventory_devices = load_json(os.path.join(org_dir, "inventory_devices.json")) or [] firmware_upgrades = load_json(os.path.join(org_dir, "firmware_upgrades.json")) or [] wireless_settings = load_json(os.path.join(org_dir, "wireless_settings.json")) or {} @@ -1503,6 +1504,7 @@ def _build_switch_summary_for_main_report() -> str: channel_util, wireless_stats, rf_profiles, + rf_profile_assignments, ) config_coverage_html = _build_config_coverage_section(org_dir, networks) budget_forecast_html = _build_budget_forecast_section(inventory_summary, pricing_payload) diff --git a/reporting/sections.py b/reporting/sections.py index 835c269..a9f7269 100644 --- a/reporting/sections.py +++ b/reporting/sections.py @@ -727,6 +727,7 @@ def _build_ap_spectrum_report( channel_util: Any, wireless_stats: Dict[str, Any], rf_profiles: Any, + rf_profile_assignments: Any = None, ) -> str: def _band_stats(row: Dict[str, Any]) -> Dict[str, Dict[str, float]]: bands: Dict[str, Dict[str, float]] = {} @@ -747,25 +748,115 @@ def _bubble(stats: Dict[str, float] | None) -> Tuple[str, str]: wifi = stats.get("wifi", 0.0) total = stats.get("total", 0.0) non_wifi = stats.get("non_wifi", 0.0) - if wifi >= 55 or total >= 75: + if non_wifi >= 50 and total >= 75: + return ("External RF saturation / investigate noise", "check-fail") + if non_wifi >= 25: + return ("High non-Wi-Fi noise / inspect source", 
"check-fail") + if wifi >= 55: return ("WAY TOO CLOSE / saturated RF bubble", "check-fail") - if wifi >= 40 or total >= 60: + if wifi >= 40: return ("Too close / co-channel pressure", "check-fail") - if wifi >= 25 or total >= 45 or non_wifi >= 15: + if non_wifi >= 15: + return ("Non-Wi-Fi noise / inspect source", "check-warning") + if total >= 60: + return ("Saturated airtime / mixed interference", "check-fail") + if wifi >= 25 or total >= 45: return ("Tight bubble / tune placement", "check-warning") if wifi >= 10 or total >= 25: return ("Within range / acceptable overlap", "check-pass") return ("Clean bubble / no overlap symptom", "check-pass") - def _power_context(net_id: str, band: str) -> str: + def _flatten_assignments(raw: Any) -> List[Dict[str, Any]]: + if isinstance(raw, dict) and isinstance(raw.get("items"), list): + return [item for item in raw["items"] if isinstance(item, dict)] + if not isinstance(raw, list): + return [] + rows: List[Dict[str, Any]] = [] + for item in raw: + if isinstance(item, dict) and isinstance(item.get("items"), list): + rows.extend(child for child in item["items"] if isinstance(child, dict)) + elif isinstance(item, dict): + rows.append(item) + return rows + + assignment_by_serial = { + str(item.get("serial")): item + for item in _flatten_assignments(rf_profile_assignments) + if item.get("serial") + } + + def _profile_settings_by_id(net_id: str) -> Dict[str, Dict[str, Any]]: + profiles = rf_profiles.get(net_id) if isinstance(rf_profiles, dict) else None + if not isinstance(profiles, list): + return {} + return { + str(profile.get("id")): profile + for profile in profiles + if isinstance(profile, dict) and profile.get("id") + } + + def _format_profile_power(profile: Dict[str, Any], band: str, exact: bool) -> str: band_map = { "2.4": "twoFourGhzSettings", "5": "fiveGhzSettings", "6": "sixGhzSettings", } + field = band_map.get(str(band)) + settings = profile.get(field) if field else None + if not isinstance(settings, dict): + return 
"RF profile power not available" + min_power = settings.get("minPower") + max_power = settings.get("maxPower") + min_text = f"{float(min_power):.0f} dBm min" if isinstance(min_power, (int, float)) else "min n/a" + max_text = f"{float(max_power):.0f} dBm max" if isinstance(max_power, (int, float)) else "max n/a" + cap_note = "" + if isinstance(max_power, (int, float)) and max_power <= 17: + cap_note = "; low power ceiling" + elif isinstance(max_power, (int, float)) and max_power <= 22: + cap_note = "; moderate power ceiling" + elif isinstance(max_power, (int, float)): + cap_note = "; high power ceiling" + bitrate = settings.get("minBitrate") + width = settings.get("channelWidth") + channels = settings.get("validAutoChannels") + details = [] + if isinstance(bitrate, (int, float)): + details.append(f"{float(bitrate):.0f} Mbps min bitrate") + if width: + details.append(f"{width} channel width") + if isinstance(channels, list) and channels: + details.append(f"{len(channels)} auto channel(s)") + name = profile.get("name") or "Unnamed RF profile" + source = "exact AP assignment" if exact else "default/profile fallback" + suffix = f"; {', '.join(details)}" if details else "" + return f"Current RF profile: {name} ({source}); {min_text}; {max_text}{cap_note}{suffix}" + + def _power_context(ap: Dict[str, Any], band: str) -> str: + net_id = ap["network_id"] + assignment = assignment_by_serial.get(ap["serial"]) + assigned_profile = assignment.get("rfProfile") if isinstance(assignment, dict) else None + assigned_profile_id = str(assigned_profile.get("id")) if isinstance(assigned_profile, dict) and assigned_profile.get("id") else "" + if assigned_profile_id: + profile = _profile_settings_by_id(net_id).get(assigned_profile_id) + if profile: + return _format_profile_power(profile, band, exact=True) + if isinstance(assigned_profile, dict): + return f"Current RF profile: {assigned_profile.get('name') or assigned_profile_id} (exact AP assignment); settings detail not in backup" + 
profiles = rf_profiles.get(net_id) if isinstance(rf_profiles, dict) else None if not isinstance(profiles, list) or not profiles: return "RF profile power not available" + default_profiles = [ + profile for profile in profiles + if isinstance(profile, dict) and (profile.get("isIndoorDefault") or profile.get("isOutdoorDefault")) + ] + if len(default_profiles) == 1: + return _format_profile_power(default_profiles[0], band, exact=False) + band_map = { + "2.4": "twoFourGhzSettings", + "5": "fiveGhzSettings", + "6": "sixGhzSettings", + } field = band_map.get(str(band)) values = [] names = [] @@ -796,7 +887,14 @@ def _power_context(net_id: str, band: str) -> str: profile_note = f" across {len(values)} RF profile(s)" if names: profile_note += f": {', '.join(names[:2])}{'…' if len(names) > 2 else ''}" - return f"{min_text}; {max_text}{cap_note}{profile_note}" + return f"RF profile range; {min_text}; {max_text}{cap_note}{profile_note}" + + def _profile_name(ap: Dict[str, Any]) -> str: + assignment = assignment_by_serial.get(ap["serial"]) + profile = assignment.get("rfProfile") if isinstance(assignment, dict) else None + if isinstance(profile, dict) and profile.get("name"): + return str(profile.get("name")) + return "Profile assignment not captured" def _client_stats(serial: str, net_id: str) -> Dict[str, int]: for item in wireless_stats.get(net_id, []) if isinstance(wireless_stats, dict) else []: @@ -901,6 +999,7 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: with_telemetry = [ap for ap in ap_records if ap["bands"]] high_pressure = [ap for ap in with_telemetry if "Too close" in ap["bubble"] or "WAY TOO CLOSE" in ap["bubble"]] tight_pressure = [ap for ap in with_telemetry if "Tight" in ap["bubble"]] + noise_pressure = [ap for ap in with_telemetry if "non-Wi-Fi" in ap["bubble"] or "External RF" in ap["bubble"]] no_telemetry = [ap for ap in ap_records if not ap["bands"]] site_counts: Dict[str, Dict[str, int]] = {} for ap in ap_records: @@ -926,6 +1025,14 @@ 
def _client_stats(serial: str, net_id: str) -> Dict[str, int]: def _candidate_rows(ap: Dict[str, Any]) -> str: band = ap["worst_band"] + stats = ap["worst_stats"] or {} + if stats.get("non_wifi", 0.0) >= 25 and stats.get("wifi", 0.0) < 40: + return ( + '' + "Worst symptom is non-Wi-Fi interference, not AP-to-AP overlap. Inspect local RF noise sources, " + "run Dashboard RF Spectrum or a site survey, and avoid removing APs solely from this signal." + "" + ) candidates = [] for other in ap_records: if other["serial"] == ap["serial"] or other["network_id"] != ap["network_id"] or not band: @@ -934,12 +1041,18 @@ def _candidate_rows(ap: Dict[str, Any]) -> str: if not stats: continue bubble, cls = _bubble(stats) - candidates.append((stats.get("wifi", 0.0) + stats.get("total", 0.0), other, stats, bubble, cls)) - candidates.sort(key=lambda item: (-item[0], item[1]["name"])) + candidates.append((stats.get("wifi", 0.0), stats.get("total", 0.0), other, stats, bubble, cls)) + candidates.sort(key=lambda item: (-item[0], -item[1], item[2]["name"])) if not candidates: return 'No same-site AP telemetry candidates were available for this affected band.' rows = [] - for _, other, stats, bubble, cls in candidates[:6]: + for _, __, other, stats, bubble, cls in candidates[:6]: + if "External RF" in bubble or "non-Wi-Fi" in bubble: + context = "Same-band noise observation; not AP overlap" + elif "Too close" in bubble or "WAY TOO CLOSE" in bubble: + context = "Likely overlap candidate" + else: + context = "Within same RF domain; verify on floor plan" rows.append( "" f"{_he(other['name'])}
    {_he(other['serial'])}" @@ -947,7 +1060,7 @@ def _candidate_rows(ap: Dict[str, Any]) -> str: f"{_he(band)} GHz" f"{stats['wifi']:.1f}% Wi-Fi / {stats['total']:.1f}% total" f"{_he(bubble)}" - f"{_he('Likely overlap candidate' if 'Too close' in bubble or 'WAY TOO CLOSE' in bubble else 'Within same RF domain; verify on floor plan')}" + f"{_he(context)}" "" ) return "".join(rows) @@ -965,14 +1078,26 @@ def _band_rows(ap: Dict[str, Any]) -> str: f"{stats['non_wifi']:.1f}%" f"{stats['total']:.1f}%" f"{_he(bubble)}" - f"{_he(_power_context(ap['network_id'], band))}" + f"{_he(_power_context(ap, band))}" "" ) return "".join(rows) def _recommendation(ap: Dict[str, Any]) -> str: stats = ap["worst_stats"] or {} - power = _power_context(ap["network_id"], ap["worst_band"]) + power = _power_context(ap, ap["worst_band"]) + if stats.get("non_wifi", 0.0) >= 50: + return ( + "Treat this as an external RF noise problem before changing AP density. " + "Use Meraki RF Spectrum or a field survey to identify local interferers, then retest channel utilization. " + "Do not remove or replace APs solely because this band is saturated by non-Wi-Fi energy. " + + power + ) + if stats.get("non_wifi", 0.0) >= 25: + return ( + "Prioritize finding the local non-Wi-Fi interference source. Replacement APs will still share the same noisy spectrum until that source is removed or avoided. " + + power + ) if "WAY TOO CLOSE" in ap["bubble"]: return ( "Treat this as a high-priority RF density problem. If the floor plan confirms " @@ -988,10 +1113,44 @@ def _recommendation(ap: Dict[str, Any]) -> str: + power ) if stats.get("non_wifi", 0.0) >= 15: - return "Inspect for non-Wi-Fi noise sources near this AP before replacing hardware. New APs will still share the same noisy spectrum." + return "Inspect for non-Wi-Fi noise sources near this AP before replacing hardware. New APs will still share the same noisy spectrum. 
" + power if not ap["bands"]: return "Re-run the backup after the AP is online and reporting channel utilization; no RF decision should be made from missing telemetry alone." - return "No immediate removal recommendation from current telemetry. Keep this AP in the upgrade plan unless the floor plan shows unnecessary overlap." + return "No immediate removal recommendation from current telemetry. Keep this AP in the upgrade plan unless the floor plan shows unnecessary overlap. " + power + + def _priority_action(ap: Dict[str, Any]) -> str: + stats = ap["worst_stats"] or {} + power = _power_context(ap, ap["worst_band"]) + if stats.get("non_wifi", 0.0) >= 25: + return "Find/remove RF noise source; retest before AP replacement. " + power + if "WAY TOO CLOSE" in ap["bubble"]: + return "Validate floor plan; remove, disable, or relocate one AP if physical overlap is confirmed. " + power + if "Too close" in ap["bubble"]: + return "Tune channel reuse and power; consider relocation/removal if profile is already constrained. " + power + if "Tight" in ap["bubble"]: + return "Tune profile/channel width before one-for-one refresh. " + power + return "Monitor; no immediate RF remediation from this telemetry." + + priority_rows = "".join( + "" + f"{_he(ap['site'])}" + f"{_he(ap['name'])}
    {_he(ap['serial'])}" + f"{_he((ap['worst_band'] + ' GHz') if ap['worst_band'] else 'No data')}" + f"{_he(ap['bubble'])}" + f"{_he(_profile_name(ap))}" + f"{_he(_priority_action(ap))}" + "" + for ap in sorted( + [*high_pressure, *noise_pressure, *tight_pressure], + key=lambda item: ( + {"check-fail": 0, "check-warning": 1, "check-pass": 2}.get(item["bubble_cls"], 3), + item["site"], + item["name"], + ), + )[:18] + ) + if not priority_rows: + priority_rows = 'No APs require immediate RF remediation from this telemetry window.' ap_pages = [] for ap in sorted( @@ -1018,14 +1177,14 @@ def _recommendation(ap: Dict[str, Any]) -> str:
    Assessment
- This page estimates AP-to-AP overlap from Meraki channel-utilization data. It does not claim measured physical distance; "too close" means the AP's RF bubble is showing airtime contention that commonly occurs when nearby APs, channels, or power levels overlap too aggressively.
+ This page estimates RF pressure from Meraki channel-utilization data. It does not claim measured physical distance; AP-to-AP overlap is inferred from Wi-Fi airtime, while non-Wi-Fi utilization is treated as external RF noise that should be investigated separately.
    {_band_rows(ap)}
    BandWi-FiNon-Wi-FiTotalBubbleTransmit Power Context
- Suspected Overlap Candidates
+ Same-Band Context / Overlap Candidates

    {_candidate_rows(ap)} @@ -1041,11 +1200,12 @@ def _recommendation(ap: Dict[str, Any]) -> str: return f"""

    AP Spectrum Availability & Interference Report

- This dedicated RF report is designed for wireless refresh planning. It identifies APs whose spectrum is clean, APs that are merely within useful range of other radios, and APs whose airtime suggests tight or excessive overlap. Excessive overlap can reduce throughput, increase retries, slow roaming, and make a Wi-Fi 6/7 replacement look worse than it should if density and power are not corrected first.
+ This dedicated RF report is designed for wireless refresh planning. It identifies APs whose spectrum is clean, APs that are merely within useful range of other radios, APs whose Wi-Fi airtime suggests tight or excessive overlap, and APs whose non-Wi-Fi utilization points to external RF noise. Excessive overlap or unresolved noise can reduce throughput, increase retries, slow roaming, and make a Wi-Fi 6/7 replacement look worse than it should if density, power, and noise sources are not corrected first.

    AP Pages
    {len(ap_records)}
    One page per AP unit
    RF Telemetry
    {len(with_telemetry)}
    APs with channel utilization
    Too Close
    {len(high_pressure)}
    High co-channel pressure
+ RF Noise
+ {len(noise_pressure)}
+ Non-Wi-Fi interference
    Missing Data
    {len(no_telemetry)}
    Offline/dormant/no channel data
    Nearby AP CandidateModelBandCandidate AirtimeBubbleContext
    @@ -1055,9 +1215,14 @@ def _recommendation(ap: Dict[str, Any]) -> str:
    How To Read The Bubble Scale
- Clean bubble means no current overlap symptom. Within range means normal overlap for roaming. Tight bubble means tune channel/power/placement. Too close and WAY TOO CLOSE mean the RF domain should be reviewed before adding or replacing APs; removal, relocation, lower power, or channel-width changes may be better than a one-for-one replacement.
+ Clean bubble means no current overlap symptom. Within range means normal overlap for roaming. Tight bubble means tune channel/power/placement. Too close and WAY TOO CLOSE mean AP-to-AP overlap should be reviewed before adding or replacing APs. RF Noise means non-Wi-Fi energy is saturating the band; find the external source before removing APs.
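For reviewers, the bubble ladder this hunk installs can be restated as a standalone sketch. Assumptions: the function name `bubble` is illustrative, the thresholds are copied from the `_bubble` changes above, and because the hunk is truncated at its top, any earlier branch (such as the external-RF label asserted in the new tests) is deliberately omitted.

```python
# Illustrative restatement of the _bubble() thresholds from this patch.
# Returns (label, css_class) the way the report code does; branches cut off
# above the top of the hunk are not reproduced here.
def bubble(wifi: float, non_wifi: float, total: float) -> tuple[str, str]:
    if wifi >= 55:
        return ("WAY TOO CLOSE / saturated RF bubble", "check-fail")
    if wifi >= 40:
        return ("Too close / co-channel pressure", "check-fail")
    if non_wifi >= 15:
        return ("Non-Wi-Fi noise / inspect source", "check-warning")
    if total >= 60:
        return ("Saturated airtime / mixed interference", "check-fail")
    if wifi >= 25 or total >= 45:
        return ("Tight bubble / tune placement", "check-warning")
    if wifi >= 10 or total >= 25:
        return ("Within range / acceptable overlap", "check-pass")
    return ("Clean bubble / no overlap symptom", "check-pass")

# The noise-saturated fixture from the new test (wifi 15 / nonWifi 82 / total 97)
# now lands on the noise label instead of an overlap label.
print(bubble(15.0, 82.0, 97.0)[0])  # prints "Non-Wi-Fi noise / inspect source"
```

The point of the reordering is visible here: the non-Wi-Fi check now runs before the `total >= 60` check, so a band saturated by external noise is no longer mislabeled as AP-to-AP pressure.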
+ Recommended RF Work Queue

+ {priority_rows}
+ SiteAPBandSymptomRF ProfileGuidance
    {''.join(ap_pages)} """ diff --git a/tests/test_report.py b/tests/test_report.py index cd68411..0018e89 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -839,6 +839,7 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): { "N_test_001": [ { + "id": "rf-low", "name": "Classroom Low Power", "fiveGhzSettings": {"minPower": 8, "maxPower": 17}, } @@ -847,16 +848,106 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): ), encoding="utf-8", ) + (tmp_path / "wireless_rf_profile_assignments.json").write_text( + json.dumps( + [ + { + "items": [ + { + "network": {"id": "N_test_001"}, + "name": "AP-1F-01", + "serial": "Q2AP-TEST-0001", + "model": "MR46", + "rfProfile": { + "id": "rf-low", + "name": "Classroom Low Power", + "isIndoorDefault": False, + "isOutdoorDefault": False, + }, + } + ] + } + ] + ), + encoding="utf-8", + ) html = build_org_report(str(tmp_path), "AP Spectrum Test", report_kind="ap_spectrum") assert "AP Spectrum Availability & Interference Report" in html assert html.count("ap-unit-page") >= 2 assert "WAY TOO CLOSE / saturated RF bubble" in html - assert "Suspected Overlap Candidates" in html - assert "Classroom Low Power" in html + assert "Same-Band Context / Overlap Candidates" in html + assert "Current RF profile: Classroom Low Power (exact AP assignment)" in html assert "remove, disable, or relocate one AP" in html + assert "Recommended RF Work Queue" in html assert "Executive Summary" not in html + def test_ap_spectrum_distinguishes_external_noise_from_ap_overlap(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "channel_utilization_by_device.json").write_text( + json.dumps( + [ + { + "serial": "Q2AP-TEST-0001", + "network": {"id": "N_test_001"}, + "byBand": [ + { + "band": "5", + "wifi": {"percentage": 15}, + 
"nonWifi": {"percentage": 82}, + "total": {"percentage": 97}, + } + ], + } + ] + ), + encoding="utf-8", + ) + (tmp_path / "wireless_rf_profiles.json").write_text( + json.dumps( + { + "N_test_001": [ + { + "id": "rf-stage", + "name": "Auditorium", + "fiveGhzSettings": { + "minPower": 8, + "maxPower": 14, + "minBitrate": 24, + "channelWidth": "auto", + "validAutoChannels": [36, 40, 44, 48], + }, + } + ] + } + ), + encoding="utf-8", + ) + (tmp_path / "wireless_rf_profile_assignments.json").write_text( + json.dumps( + [ + { + "serial": "Q2AP-TEST-0001", + "rfProfile": {"id": "rf-stage", "name": "Auditorium"}, + } + ] + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "AP Spectrum Noise Test", report_kind="ap_spectrum") + assert "External RF saturation / investigate noise" in html + assert "Worst symptom is non-Wi-Fi interference, not AP-to-AP overlap" in html + assert "Do not remove or replace APs solely because this band is saturated by non-Wi-Fi energy" in html + assert "Current RF profile: Auditorium (exact AP assignment); 8 dBm min; 14 dBm max; low power ceiling" in html + def test_dated_complete_report_filename(self): from reporting.app import _dated_report_name filename = _dated_report_name( From 6b162732c9dda3f18c3d8ac038d38d60cf5c5b0d Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 16:35:34 -0400 Subject: [PATCH 06/47] Add AP model value and RF severity reporting --- reporting/app.py | 1 + reporting/html_shell.py | 29 ++ .../reference/meraki_hardware_catalog.json | 113 ++++++++ reporting/sections.py | 256 +++++++++++++++--- tests/test_report.py | 18 +- 5 files changed, 384 insertions(+), 33 deletions(-) diff --git a/reporting/app.py b/reporting/app.py index 59cf4d9..aa3d844 100644 --- a/reporting/app.py +++ b/reporting/app.py @@ -1505,6 +1505,7 @@ def _build_switch_summary_for_main_report() -> str: wireless_stats, rf_profiles, rf_profile_assignments, + hardware_catalog, ) config_coverage_html = 
_build_config_coverage_section(org_dir, networks) budget_forecast_html = _build_budget_forecast_section(inventory_summary, pricing_payload) diff --git a/reporting/html_shell.py b/reporting/html_shell.py index 576c098..68cd0a3 100644 --- a/reporting/html_shell.py +++ b/reporting/html_shell.py @@ -1237,10 +1237,39 @@ def build_html(doc_title: str, body: str) -> str: font-size: 22px; margin-bottom: 8px; }} + .ap-unit-page p {{ + font-size: 9.5px; + margin: 4px 0 8px; + }} + .ap-unit-page .kpi-row {{ + grid-template-columns: repeat(4, 1fr); + gap: 8px; + margin: 10px 0 12px; + }} + .ap-unit-page .kpi {{ + border-radius: 8px; + padding: 8px 8px; + }} .ap-unit-page .kpi-value {{ font-size: 15px; line-height: 1.25; }} + .ap-unit-page .summary-card {{ + border-radius: 8px; + margin: 10px 0 12px; + padding: 10px 14px; + }} + .ap-unit-page .summary-body {{ + font-size: 8.8px; + line-height: 1.35; + }} + .ap-unit-page h3 {{ + font-size: 12px; + margin: 10px 0 6px; + }} + .ap-unit-page table.data.dense {{ + font-size: 8px; + }} @media print {{ .ap-unit-page {{ min-height: 92vh; diff --git a/reporting/reference/meraki_hardware_catalog.json b/reporting/reference/meraki_hardware_catalog.json index 8a251be..4aff944 100644 --- a/reporting/reference/meraki_hardware_catalog.json +++ b/reporting/reference/meraki_hardware_catalog.json @@ -8,6 +8,18 @@ "Unknown models should render as unknown rather than estimated." 
], "sources": [ + { + "title": "Cisco Wireless Access Point Wi-Fi Generation and Standards", + "url": "https://documentation.meraki.com/Wireless/Product_Information/Overviews_and_Datasheets/AP_Capabilities" + }, + { + "title": "CW9176I / CW9176D1 Datasheet", + "url": "https://documentation.meraki.com/Wireless/Product_Information/Overviews_and_Datasheets/CW9176I_%2F%2F_CW9176D1_Datasheet" + }, + { + "title": "MR46 Datasheet", + "url": "https://documentation.meraki.com/MR/Product_Information/MR_Overview_and_Specifications/MR46_Datasheet" + }, { "title": "MS225 Overview and Specifications", "url": "https://documentation.meraki.com/MS/MS_Overview_and_Specifications/MS225_Overview_and_Specifications" @@ -174,6 +186,107 @@ "poePorts": 48, "uplinkPorts": "Modular", "source": "Catalyst 9300-M Datasheet" + }, + "CW9176I": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 7", + "standard": "802.11be", + "sixGhzCapable": true, + "bands": ["2.4", "5", "6"], + "spatialStreams": 12, + "source": "Cisco AP Capabilities / CW9176I Datasheet" + }, + "CW9176D1": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 7", + "standard": "802.11be", + "sixGhzCapable": true, + "bands": ["2.4", "5", "6"], + "spatialStreams": 12, + "source": "Cisco AP Capabilities / CW9176I Datasheet" + }, + "CW9163E": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 6E", + "standard": "802.11ax", + "sixGhzCapable": true, + "bands": ["2.4", "5", "6"], + "source": "Cisco AP Capabilities" + }, + "MR57": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 6E", + "standard": "802.11ax", + "sixGhzCapable": true, + "bands": ["2.4", "5", "6"], + "source": "Cisco AP Capabilities" + }, + "MR86": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 6", + "standard": "802.11ax", + "sixGhzCapable": false, + "bands": ["2.4", "5"], + "spatialStreams": 8, + "source": "Cisco AP Capabilities / MR86 Datasheet" + }, + "MR46": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 6", + 
"standard": "802.11ax", + "sixGhzCapable": false, + "bands": ["2.4", "5"], + "spatialStreams": 8, + "source": "Cisco AP Capabilities / MR46 Datasheet" + }, + "MR44": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 6", + "standard": "802.11ax", + "sixGhzCapable": false, + "bands": ["2.4", "5"], + "spatialStreams": 4, + "source": "Cisco AP Capabilities" + }, + "MR42": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 5", + "standard": "802.11ac Wave 2", + "sixGhzCapable": false, + "bands": ["2.4", "5"], + "source": "Cisco AP Capabilities" + }, + "MR53": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 5", + "standard": "802.11ac Wave 2", + "sixGhzCapable": false, + "bands": ["2.4", "5"], + "source": "Cisco AP Capabilities" + }, + "MR53E": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 5", + "standard": "802.11ac Wave 2", + "sixGhzCapable": false, + "bands": ["2.4", "5"], + "source": "Cisco AP Capabilities" + }, + "MR34": { + "productType": "wireless", + "wifiGeneration": "Wi-Fi 5-era", + "standard": "802.11ac", + "sixGhzCapable": false, + "bands": ["2.4", "5"], + "source": "Meraki model-family reference" + }, + "MR66": { + "productType": "wireless", + "wifiGeneration": "Legacy", + "standard": "802.11n", + "sixGhzCapable": false, + "bands": ["2.4", "5"], + "source": "Meraki model-family reference" } } } diff --git a/reporting/sections.py b/reporting/sections.py index a9f7269..afec2ce 100644 --- a/reporting/sections.py +++ b/reporting/sections.py @@ -728,7 +728,14 @@ def _build_ap_spectrum_report( wireless_stats: Dict[str, Any], rf_profiles: Any, rf_profile_assignments: Any = None, + hardware_catalog: Optional[Dict[str, Any]] = None, ) -> str: + catalog_models = ( + hardware_catalog.get("models") + if isinstance(hardware_catalog, dict) and isinstance(hardware_catalog.get("models"), dict) + else {} + ) + def _band_stats(row: Dict[str, Any]) -> Dict[str, Dict[str, float]]: bands: Dict[str, Dict[str, float]] = {} for band in 
row.get("byBand") or []: @@ -766,6 +773,67 @@ def _bubble(stats: Dict[str, float] | None) -> Tuple[str, str]: return ("Within range / acceptable overlap", "check-pass") return ("Clean bubble / no overlap symptom", "check-pass") + def _severity(stats: Dict[str, float] | None) -> Dict[str, Any]: + if not stats: + return { + "rank": 1, + "label": "No telemetry", + "class": "check-warning", + "score": 0.0, + "action": "Bring AP online or collect fresh channel utilization before making RF decisions.", + } + wifi = stats.get("wifi", 0.0) + non_wifi = stats.get("non_wifi", 0.0) + total = stats.get("total", 0.0) + score = max(total, wifi * 1.15, non_wifi * 1.1) + if non_wifi >= 50 or total >= 90: + return { + "rank": 6, + "label": "Critical", + "class": "check-fail", + "score": score, + "action": "Resolve immediately. Run spectrum analysis, remove the RF noise source, or temporarily disable the affected band only if client impact is confirmed.", + } + if wifi >= 55 or total >= 75 or non_wifi >= 25: + return { + "rank": 5, + "label": "Severe", + "class": "check-fail", + "score": score, + "action": "Remediate before refresh. Fix AP density, channel reuse, or external RF noise before judging replacement hardware.", + } + if wifi >= 40 or total >= 60 or non_wifi >= 15: + return { + "rank": 4, + "label": "Major", + "class": "check-fail", + "score": score, + "action": "Prioritize RF tuning. Review profile, channels, power, and local noise sources.", + } + if wifi >= 25 or total >= 45: + return { + "rank": 3, + "label": "Moderate", + "class": "check-warning", + "score": score, + "action": "Tune during normal maintenance. 
Watch for dense-room or hallway overlap.", + } + if wifi >= 10 or total >= 25: + return { + "rank": 2, + "label": "Minor", + "class": "check-pass", + "score": score, + "action": "Acceptable overlap for roaming; monitor trend.", + } + return { + "rank": 0, + "label": "Clean", + "class": "check-pass", + "score": score, + "action": "No RF remediation indicated by this telemetry window.", + } + def _flatten_assignments(raw: Any) -> List[Dict[str, Any]]: if isinstance(raw, dict) and isinstance(raw.get("items"), list): return [item for item in raw["items"] if isinstance(item, dict)] @@ -795,6 +863,27 @@ def _profile_settings_by_id(net_id: str) -> Dict[str, Dict[str, Any]]: if isinstance(profile, dict) and profile.get("id") } + def _assigned_profile(ap: Dict[str, Any]) -> Tuple[Dict[str, Any] | None, bool, str]: + assignment = assignment_by_serial.get(ap["serial"]) + assigned_profile = assignment.get("rfProfile") if isinstance(assignment, dict) else None + assigned_profile_id = str(assigned_profile.get("id")) if isinstance(assigned_profile, dict) and assigned_profile.get("id") else "" + if assigned_profile_id: + profile = _profile_settings_by_id(ap["network_id"]).get(assigned_profile_id) + if profile: + return profile, True, str(profile.get("name") or assigned_profile.get("name") or assigned_profile_id) + if isinstance(assigned_profile, dict): + return assigned_profile, True, str(assigned_profile.get("name") or assigned_profile_id) + + profiles = rf_profiles.get(ap["network_id"]) if isinstance(rf_profiles, dict) else None + if isinstance(profiles, list): + defaults = [ + profile for profile in profiles + if isinstance(profile, dict) and (profile.get("isIndoorDefault") or profile.get("isOutdoorDefault")) + ] + if len(defaults) == 1: + return defaults[0], False, str(defaults[0].get("name") or "Default RF profile") + return None, False, "Profile assignment not captured" + def _format_profile_power(profile: Dict[str, Any], band: str, exact: bool) -> str: band_map = { "2.4": 
"twoFourGhzSettings", @@ -833,15 +922,12 @@ def _format_profile_power(profile: Dict[str, Any], band: str, exact: bool) -> st def _power_context(ap: Dict[str, Any], band: str) -> str: net_id = ap["network_id"] - assignment = assignment_by_serial.get(ap["serial"]) - assigned_profile = assignment.get("rfProfile") if isinstance(assignment, dict) else None - assigned_profile_id = str(assigned_profile.get("id")) if isinstance(assigned_profile, dict) and assigned_profile.get("id") else "" - if assigned_profile_id: - profile = _profile_settings_by_id(net_id).get(assigned_profile_id) - if profile: - return _format_profile_power(profile, band, exact=True) - if isinstance(assigned_profile, dict): - return f"Current RF profile: {assigned_profile.get('name') or assigned_profile_id} (exact AP assignment); settings detail not in backup" + profile, exact, profile_name = _assigned_profile(ap) + if profile: + if any(key.endswith("GhzSettings") for key in profile): + return _format_profile_power(profile, band, exact=exact) + if exact: + return f"Current RF profile: {profile_name} (exact AP assignment); settings detail not in backup" profiles = rf_profiles.get(net_id) if isinstance(rf_profiles, dict) else None if not isinstance(profiles, list) or not profiles: @@ -890,11 +976,103 @@ def _power_context(ap: Dict[str, Any], band: str) -> str: return f"RF profile range; {min_text}; {max_text}{cap_note}{profile_note}" def _profile_name(ap: Dict[str, Any]) -> str: - assignment = assignment_by_serial.get(ap["serial"]) - profile = assignment.get("rfProfile") if isinstance(assignment, dict) else None - if isinstance(profile, dict) and profile.get("name"): - return str(profile.get("name")) - return "Profile assignment not captured" + _, exact, name = _assigned_profile(ap) + return name if exact else f"{name} (fallback)" if name != "Profile assignment not captured" else name + + def _ap_capability(ap: Dict[str, Any]) -> Dict[str, Any]: + model = str(ap.get("model") or "") + ref = 
catalog_models.get(model) if isinstance(catalog_models, dict) else None + if isinstance(ref, dict) and ref.get("productType") == "wireless": + generation = str(ref.get("wifiGeneration") or "Unknown generation") + standard = str(ref.get("standard") or "unknown standard") + six_ghz = bool(ref.get("sixGhzCapable")) + streams = ref.get("spatialStreams") + raw_bands = [str(b) for b in ref.get("bands", []) if b] + band_text = f"{', '.join(raw_bands)} GHz" if raw_bands else ("2.4/5/6 GHz" if six_ghz else "2.4/5 GHz") + label = f"{generation} / {standard} / {band_text}" + if isinstance(streams, (int, float)): + label += f" / {int(streams)} streams" + return { + "known": True, + "generation": generation, + "standard": standard, + "sixGhzCapable": six_ghz, + "label": label, + "source": ref.get("source") or "Meraki hardware catalog", + } + if model.startswith("CW917"): + return {"known": True, "generation": "Wi-Fi 7", "standard": "802.11be", "sixGhzCapable": True, "label": "Wi-Fi 7 / 802.11be / 2.4, 5, 6 GHz", "source": "model-family inference"} + if model.startswith("CW916") or model == "MR57": + return {"known": True, "generation": "Wi-Fi 6E", "standard": "802.11ax", "sixGhzCapable": True, "label": "Wi-Fi 6E / 802.11ax / 2.4, 5, 6 GHz", "source": "model-family inference"} + if model in {"MR28", "MR36", "MR36H", "MR44", "MR45", "MR46", "MR46E", "MR55", "MR56", "MR76", "MR78", "MR86"}: + return {"known": True, "generation": "Wi-Fi 6", "standard": "802.11ax", "sixGhzCapable": False, "label": "Wi-Fi 6 / 802.11ax / 2.4, 5 GHz", "source": "model-family inference"} + if model in {"MR20", "MR30H", "MR33", "MR42", "MR42E", "MR52", "MR53", "MR70", "MR74", "MR84"}: + return {"known": True, "generation": "Wi-Fi 5", "standard": "802.11ac Wave 2", "sixGhzCapable": False, "label": "Wi-Fi 5 / 802.11ac Wave 2 / 2.4, 5 GHz", "source": "model-family inference"} + return {"known": False, "generation": "Unknown", "standard": "Unknown", "sixGhzCapable": False, "label": "Model capability not in 
AP catalog", "source": "unknown"} + + def _profile_band_context(ap: Dict[str, Any]) -> Dict[str, Any]: + profile, exact, name = _assigned_profile(ap) + if not isinstance(profile, dict): + return { + "name": name, + "exact": exact, + "enabledBands": [], + "ssidSixGhzCount": None, + "ssidCount": None, + "summary": "RF profile assignment/settings not captured", + } + band_settings = profile.get("apBandSettings") if isinstance(profile.get("apBandSettings"), dict) else {} + bands = band_settings.get("bands") if isinstance(band_settings.get("bands"), dict) else {} + enabled = [str(b) for b in bands.get("enabled", [])] if isinstance(bands.get("enabled"), list) else [] + per_ssid = profile.get("perSsidSettings") if isinstance(profile.get("perSsidSettings"), dict) else {} + ssid_count = 0 + ssid_6_count = 0 + for ssid in per_ssid.values(): + if not isinstance(ssid, dict): + continue + ssid_count += 1 + ssid_bands = ssid.get("bands") if isinstance(ssid.get("bands"), dict) else {} + ssid_enabled = ssid_bands.get("enabled") if isinstance(ssid_bands.get("enabled"), list) else [] + if "6" in [str(b) for b in ssid_enabled]: + ssid_6_count += 1 + source = "exact" if exact else "fallback" + enabled_text = ", ".join(enabled) + " GHz" if enabled else "band list unavailable" + ssid_text = "" + if ssid_count: + ssid_text = f"; {ssid_6_count}/{ssid_count} SSID profile(s) expose 6 GHz" + return { + "name": name, + "exact": exact, + "enabledBands": enabled, + "ssidSixGhzCount": ssid_6_count if ssid_count else None, + "ssidCount": ssid_count if ssid_count else None, + "summary": f"{name} ({source}); enabled AP bands: {enabled_text}{ssid_text}", + } + + def _value_assessment(ap: Dict[str, Any]) -> str: + cap = _ap_capability(ap) + profile_ctx = _profile_band_context(ap) + stats = ap.get("worst_stats") or {} + severity = _severity(stats) + points: List[str] = [] + if cap["sixGhzCapable"]: + enabled = set(profile_ctx["enabledBands"]) + ssid_6_count = profile_ctx["ssidSixGhzCount"] + if "6" 
not in enabled: + points.append("6 GHz capable AP, but this RF profile does not show 6 GHz enabled.") + elif ssid_6_count == 0 and profile_ctx["ssidCount"]: + points.append("6 GHz capable AP and profile allows 6 GHz, but SSID profile settings do not appear to expose 6 GHz.") + else: + points.append("6 GHz capable AP with profile support visible.") + if cap["generation"] in {"Wi-Fi 7", "Wi-Fi 6E", "Wi-Fi 6"} and severity["rank"] >= 4: + points.append(f"Current {severity['label'].lower()} interference means the organization may not feel the value of this {cap['generation']} AP until RF is remediated.") + if not cap["sixGhzCapable"] and cap["generation"] in {"Wi-Fi 5", "Wi-Fi 5-era", "Legacy", "Unknown"} and severity["rank"] >= 4: + points.append("Do not spend refresh money until RF noise/overlap is corrected; replacement hardware would inherit the same spectrum problem.") + if not points: + points.append("No obvious hardware value blocker from this telemetry window.") + if not profile_ctx["exact"]: + points.append("RF profile assignment is not exact in this backup; rerun data collection with RF profile assignments for stronger per-AP conclusions.") + return " ".join(points) def _client_stats(serial: str, net_id: str) -> Dict[str, int]: for item in wireless_stats.get(net_id, []) if isinstance(wireless_stats, dict) else []: @@ -936,6 +1114,7 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: ), ) bubble_label, bubble_cls = _bubble(worst_stats) + severity = _severity(worst_stats) clients = _client_stats(serial, net_id) ap_records.append( { @@ -950,6 +1129,7 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: "worst_stats": worst_stats, "bubble": bubble_label, "bubble_cls": bubble_cls, + "severity": severity, "clients": clients, } ) @@ -971,6 +1151,7 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: ), ) bubble_label, bubble_cls = _bubble(worst_stats) + severity = _severity(worst_stats) ap_records.append( { "site": 
net_data.get("name") or "Unassigned", @@ -984,6 +1165,7 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: "worst_stats": worst_stats, "bubble": bubble_label, "bubble_cls": bubble_cls, + "severity": severity, "clients": _client_stats(serial, net_id), } ) @@ -1121,36 +1303,41 @@ def _recommendation(ap: Dict[str, Any]) -> str: def _priority_action(ap: Dict[str, Any]) -> str: stats = ap["worst_stats"] or {} power = _power_context(ap, ap["worst_band"]) + value = _value_assessment(ap) if stats.get("non_wifi", 0.0) >= 25: - return "Find/remove RF noise source; retest before AP replacement. " + power + return "Find/remove RF noise source; retest before AP replacement. " + value + " " + power if "WAY TOO CLOSE" in ap["bubble"]: - return "Validate floor plan; remove, disable, or relocate one AP if physical overlap is confirmed. " + power + return "Validate floor plan; remove, disable, or relocate one AP if physical overlap is confirmed. " + value + " " + power if "Too close" in ap["bubble"]: - return "Tune channel reuse and power; consider relocation/removal if profile is already constrained. " + power + return "Tune channel reuse and power; consider relocation/removal if profile is already constrained. " + value + " " + power if "Tight" in ap["bubble"]: - return "Tune profile/channel width before one-for-one refresh. " + power + return "Tune profile/channel width before one-for-one refresh. " + value + " " + power return "Monitor; no immediate RF remediation from this telemetry." + severity_queue = sorted( + [ap for ap in with_telemetry if (ap.get("severity") or {}).get("rank", 0) >= 2], + key=lambda item: ( + -(item.get("severity") or {}).get("rank", 0), + -(item.get("severity") or {}).get("score", 0.0), + item["site"], + item["name"], + ), + ) + priority_rows = "".join( "" f"{_he(ap['site'])}" f"{_he(ap['name'])}
    {_he(ap['serial'])}" + f"{_he(ap['model'] or 'Unknown')}
    {_he(_ap_capability(ap)['label'])}" f"{_he((ap['worst_band'] + ' GHz') if ap['worst_band'] else 'No data')}" - f"{_he(ap['bubble'])}" + f"{_he((ap.get('severity') or {}).get('label', 'Unknown'))}
    {_he(ap['bubble'])}" f"{_he(_profile_name(ap))}" f"{_he(_priority_action(ap))}" "" - for ap in sorted( - [*high_pressure, *noise_pressure, *tight_pressure], - key=lambda item: ( - {"check-fail": 0, "check-warning": 1, "check-pass": 2}.get(item["bubble_cls"], 3), - item["site"], - item["name"], - ), - )[:18] + for ap in severity_queue[:24] ) if not priority_rows: - priority_rows = 'No APs require immediate RF remediation from this telemetry window.' + priority_rows = 'No APs require immediate RF remediation from this telemetry window.' ap_pages = [] for ap in sorted( @@ -1163,6 +1350,9 @@ def _priority_action(ap: Dict[str, Any]) -> str: ): stats = ap["worst_stats"] or {} clients = ap["clients"] + severity = ap.get("severity") or _severity(stats) + cap = _ap_capability(ap) + profile_ctx = _profile_band_context(ap) ap_pages.append( f"""
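The `severity_queue` ordering introduced above is easiest to see in isolation. A minimal sketch using the same negated-key sort idiom as the patch — the record shapes and the `rank >= 2` threshold mirror the diff, but the sample data is invented:

```python
# Sketch: order AP records worst-first, as the Interference Severity Queue does.
# Higher severity rank wins, then higher raw score, then site/name for stable ties.
ap_records = [
    {"site": "HS", "name": "AP-2", "severity": {"rank": 5, "score": 71.0}},
    {"site": "HS", "name": "AP-1", "severity": {"rank": 5, "score": 88.5}},
    {"site": "MS", "name": "AP-9", "severity": {"rank": 2, "score": 31.0}},
    {"site": "MS", "name": "AP-3", "severity": {"rank": 1, "score": 4.0}},  # excluded: below threshold
]

severity_queue = sorted(
    # rank >= 2 keeps "monitor or worse"; rank 1 (missing data) drops out
    [ap for ap in ap_records if (ap.get("severity") or {}).get("rank", 0) >= 2],
    key=lambda item: (
        -(item.get("severity") or {}).get("rank", 0),   # negate: worst rank first
        -(item.get("severity") or {}).get("score", 0.0),
        item["site"],
        item["name"],
    ),
)
print([ap["name"] for ap in severity_queue])  # -> ['AP-1', 'AP-2', 'AP-9']
```

Negating the numeric keys keeps the whole sort in one ascending `sorted()` call, so the alphabetical tiebreakers need no `reverse=` juggling.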
    @@ -1175,9 +1365,10 @@ def _priority_action(ap: Dict[str, Any]) -> str:
    Client Events
    {clients['assoc']} assoc
    {clients['auth']} auth / {clients['success']} success
    -
    Assessment
    +
    RF / Hardware Fit
    - This page estimates RF pressure from Meraki channel-utilization data. It does not claim measured physical distance; AP-to-AP overlap is inferred from Wi-Fi airtime, while non-Wi-Fi utilization is treated as external RF noise that should be investigated separately. + Severity: {_he(severity['label'])}. Model: {_he(cap['label'])} ({_he(str(cap['source']))}). RF profile: {_he(profile_ctx['summary'])}. Value check: {_he(_value_assessment(ap))} +
    RF pressure is estimated from Meraki channel-utilization telemetry. Wi-Fi airtime is treated as overlap pressure; non-Wi-Fi utilization is treated as external RF noise.
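The severity labels used throughout these hunks come from `_severity()`, which this patch only shows in fragments. A hedged sketch of the idea — the thresholds and labels below are illustrative, not the patch's exact cutoffs:

```python
# Illustrative severity classifier for one band's channel-utilization stats.
# Thresholds are assumptions for this sketch, not the report's real cutoffs.
def classify_band(stats):
    if not stats:
        # No telemetry at all: rank 1, so it sorts below every real reading.
        return {"rank": 1, "label": "Missing RF data"}
    non_wifi = stats.get("non_wifi", 0.0)  # non-802.11 energy -> external noise
    total = stats.get("total", 0.0)        # overall channel utilization
    if non_wifi >= 25 or total >= 75:
        return {"rank": 5, "label": "Critical"}
    if non_wifi >= 15 or total >= 50:
        return {"rank": 4, "label": "Severe"}
    if total >= 35:
        return {"rank": 3, "label": "Elevated"}
    return {"rank": 2, "label": "Monitor"}

print(classify_band({"wifi": 20.0, "non_wifi": 30.0, "total": 50.0})["label"])  # -> Critical
print(classify_band(None)["label"])  # -> Missing RF data
```

Keeping non-Wi-Fi energy as a separate trigger matches the report's rule that noise saturation should escalate severity even when Wi-Fi airtime alone looks tolerable.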
    @@ -1206,6 +1397,7 @@ def _priority_action(ap: Dict[str, Any]) -> str:
    RF Telemetry
    {len(with_telemetry)}
    APs with channel utilization
    Too Close
    {len(high_pressure)}
    High co-channel pressure
    RF Noise
    {len(noise_pressure)}
    Non-Wi-Fi interference
    +
    Severe+
    {sum(1 for ap in with_telemetry if (ap.get('severity') or {}).get('rank', 0) >= 5)}
    Fix before refresh decisions
    Missing Data
    {len(no_telemetry)}
    Offline/dormant/no channel data
    @@ -1218,9 +1410,9 @@ def _priority_action(ap: Dict[str, Any]) -> str: Clean bubble means no current overlap symptom. Within range means normal overlap for roaming. Tight bubble means tune channel/power/placement. Too close and WAY TOO CLOSE mean AP-to-AP overlap should be reviewed before adding or replacing APs. RF Noise means non-Wi-Fi energy is saturating the band; find the external source before removing APs. -

    Recommended RF Work Queue

    +

    Interference Severity Queue

    - + {priority_rows}
    SiteAPBandSymptomRF ProfileGuidance
    SiteAPModel CapabilityBandSeverity / SymptomRF ProfileGuidance
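The `priority_rows` generator that feeds this table depends on two things visible in the hunks: every dashboard-sourced string passes through `_he()` before landing in markup, and the queue is capped at 24 rows. A condensed sketch with an abbreviated column set — `_he` and `priority_rows` here are stand-ins for the report's helpers, not their full implementations:

```python
import html

def _he(value):
    # Escape any dashboard-sourced value before it is embedded in report HTML.
    return html.escape(str(value), quote=True)

def priority_rows(queue, limit=24):
    # Render at most `limit` queue entries as table rows.
    return "".join(
        "<tr>"
        f"<td>{_he(ap['site'])}</td>"
        f"<td>{_he(ap['name'])}</td>"
        f"<td>{_he(ap['guidance'])}</td>"
        "</tr>"
        for ap in queue[:limit]
    )

rows = priority_rows([{"site": "HS", "name": "AP <1>", "guidance": "Tune power & reuse"}])
print(rows)  # -> <tr><td>HS</td><td>AP &lt;1&gt;</td><td>Tune power &amp; reuse</td></tr>
```

Escaping at the cell boundary means AP names pulled from the Meraki dashboard can never break the generated table markup.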
    diff --git a/tests/test_report.py b/tests/test_report.py index 0018e89..9d800cb 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -879,7 +879,10 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): assert "Same-Band Context / Overlap Candidates" in html assert "Current RF profile: Classroom Low Power (exact AP assignment)" in html assert "remove, disable, or relocate one AP" in html - assert "Recommended RF Work Queue" in html + assert "Interference Severity Queue" in html + assert "RF / Hardware Fit" in html + assert "Wi-Fi 6 / 802.11ax / 2.4, 5 GHz" in html + assert "Current severe interference means the organization may not feel the value of this Wi-Fi 6 AP until RF is remediated" in html assert "Executive Summary" not in html def test_ap_spectrum_distinguishes_external_noise_from_ap_overlap(self, tmp_path): @@ -891,6 +894,12 @@ def test_ap_spectrum_distinguishes_external_noise_from_ap_overlap(self, tmp_path if os.path.isfile(src): shutil.copy(src, dst) + devices = json.loads((tmp_path / "devices_availabilities.json").read_text(encoding="utf-8")) + for device in devices: + if device.get("serial") == "Q2AP-TEST-0001": + device["model"] = "CW9176I" + (tmp_path / "devices_availabilities.json").write_text(json.dumps(devices), encoding="utf-8") + (tmp_path / "channel_utilization_by_device.json").write_text( json.dumps( [ @@ -917,6 +926,9 @@ def test_ap_spectrum_distinguishes_external_noise_from_ap_overlap(self, tmp_path { "id": "rf-stage", "name": "Auditorium", + "apBandSettings": { + "bands": {"enabled": ["2.4", "5"]}, + }, "fiveGhzSettings": { "minPower": 8, "maxPower": 14, @@ -944,6 +956,10 @@ def test_ap_spectrum_distinguishes_external_noise_from_ap_overlap(self, tmp_path html = build_org_report(str(tmp_path), "AP Spectrum Noise Test", report_kind="ap_spectrum") assert "External RF saturation / investigate noise" in html + assert "Interference Severity Queue" in html + assert "Critical" in html + assert "Wi-Fi 7 / 
802.11be / 2.4, 5, 6 GHz" in html + assert "6 GHz capable AP, but this RF profile does not show 6 GHz enabled" in html assert "Worst symptom is non-Wi-Fi interference, not AP-to-AP overlap" in html assert "Do not remove or replace APs solely because this band is saturated by non-Wi-Fi energy" in html assert "Current RF profile: Auditorium (exact AP assignment); 8 dBm min; 14 dBm max; low power ceiling" in html From 82dcd649be4b7c5a91e2f6f1adf54afef7375741 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 16:54:19 -0400 Subject: [PATCH 07/47] Harden RF profile assignment collection --- meraki_backup.py | 58 ++++++++++++++++++++++++++++++++++++++------ tests/test_backup.py | 22 +++++++++++++++++ 2 files changed, 72 insertions(+), 8 deletions(-) diff --git a/meraki_backup.py b/meraki_backup.py index d7f6948..101da26 100755 --- a/meraki_backup.py +++ b/meraki_backup.py @@ -146,6 +146,20 @@ def _cache_is_fresh(path: str, max_age_h: float = 12.0, force: bool = False) -> return False +def _payload_has_error(payload: Any) -> bool: + return isinstance(payload, dict) and bool(payload.get("error")) + + +def _cache_is_fresh_success(path: str, max_age_h: float = 12.0, force: bool = False) -> bool: + """Return True only for fresh JSON that is not an error sentinel from a prior API run.""" + if not _cache_is_fresh(path, max_age_h=max_age_h, force=force): + return False + try: + return not _payload_has_error(_load_json_file(path)) + except Exception: + return False + + def _load_json_file(path: str) -> Any: with open(path, encoding="utf-8") as f: return json.load(f) @@ -190,6 +204,21 @@ def _granular_cache_fresh( ) +def _granular_cache_fresh_success( + org_dir: str, + category: str, + item_id: str, + filename: str, + max_age_h: float, + force: bool = False, +) -> bool: + return _cache_is_fresh_success( + _artifact_path(org_dir, category, item_id, filename), + max_age_h=max_age_h, + force=force, + ) + + def load_devices_by_type(inventory: List[Dict[str, Any]]) -> 
Dict[str, List[Dict[str, Any]]]: by_type: Dict[str, List[Dict[str, Any]]] = {} for d in inventory: @@ -1159,14 +1188,18 @@ def _cached_safe_get(filename: str, path_suffix: str, label: str, params=None) - if networks: log_line(log_f, "INFO", f"Collecting network-level telemetry for {len(networks)} network(s) in {org_name}") _rf_assign_path = _pf("wireless_rf_profile_assignments.json") - if _cache_is_fresh(_rf_assign_path, max_age_h=max_age_h, force=force): + network_id_filter = [n.get("id") for n in networks if n.get("id")] + if _cache_is_fresh_success(_rf_assign_path, max_age_h=max_age_h, force=force): wireless_rf_profile_assignments = _load_json_file(_rf_assign_path) log_line(log_f, "INFO", f"Wireless RF profile assignments (cached) for {org_name}") else: wireless_rf_profile_assignments, rf_assign_err = safe_paged_get( f"/organizations/{org_id}/wireless/rfProfiles/assignments/byDevice", api_key, - params={"productTypes[]": ["wireless"]}, + params={ + "productTypes": ["wireless"], + "networkIds": network_id_filter, + }, ) if rf_assign_err: level = "INFO" if is_capability_error(rf_assign_err) else "WARN" @@ -1221,11 +1254,20 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ ), "Clients overview failed", ) - wireless_rf_profiles[net_id] = _load_or_fetch_net( - "wireless_rf_profiles.json", - lambda: safe_paged_get(f"/networks/{net_id}/wireless/rfProfiles", api_key), - "Wireless rfProfiles failed", - ) + if _granular_cache_fresh_success(org_dir, "networks", net_id, "wireless_rf_profiles.json", max_age_h, force): + wireless_rf_profiles[net_id] = _read_granular_json(org_dir, "networks", net_id, "wireless_rf_profiles.json") + else: + wireless_rf_profiles[net_id], rf_profiles_err = safe_paged_get(f"/networks/{net_id}/wireless/rfProfiles", api_key) + if rf_profiles_err: + wireless_rf_profiles[net_id] = {"error": rf_profiles_err} + log_line(log_f, "WARN", f"Wireless rfProfiles failed for network {net_id}: {rf_profiles_err}") + 
_write_granular_json( + org_dir, + "networks", + net_id, + "wireless_rf_profiles.json", + wireless_rf_profiles[net_id], + ) wireless_settings[net_id] = _load_or_fetch_net( "wireless_settings.json", lambda: safe_get_one(f"/networks/{net_id}/wireless/settings", api_key), @@ -1481,7 +1523,7 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ f"/organizations/{org_id}/wireless/devices/channelUtilization/byDevice", api_key, params={ - "networkIds[]": [n.get("id") for n in networks if n.get("id")], + "networkIds": [n.get("id") for n in networks if n.get("id")], "timespan": 86400, }, ) diff --git a/tests/test_backup.py b/tests/test_backup.py index 78b758f..5f65efa 100644 --- a/tests/test_backup.py +++ b/tests/test_backup.py @@ -50,6 +50,17 @@ def test_within_max_age_returns_true(self, tmp_path): os.utime(str(p), (recent_time, recent_time)) assert mb._cache_is_fresh(str(p), max_age_h=12) is True + def test_fresh_error_payload_is_not_success_cache(self, tmp_path): + p = tmp_path / "rf_assignments.json" + p.write_text(json.dumps({"error": "temporary API failure"})) + assert mb._cache_is_fresh(str(p), max_age_h=12) is True + assert mb._cache_is_fresh_success(str(p), max_age_h=12) is False + + def test_fresh_success_payload_is_success_cache(self, tmp_path): + p = tmp_path / "rf_assignments.json" + p.write_text(json.dumps([{"serial": "Q2XX-TEST-0001"}])) + assert mb._cache_is_fresh_success(str(p), max_age_h=12) is True + # ── write_json / _load_json_file ────────────────────────────────────────────── @@ -161,6 +172,17 @@ def test_build_url_appends_query_params(self): assert "perPage=5" in url assert "foo=bar" in url + def test_build_url_repeats_array_params_without_bracket_suffix(self): + url = mc.build_url( + "/organizations/1/wireless/rfProfiles/assignments/byDevice", + {"productTypes": ["wireless"], "networkIds": ["N_1", "N_2"]}, + ) + assert "productTypes=wireless" in url + assert "networkIds=N_1" in url + assert "networkIds=N_2" in url + 
assert "productTypes%5B%5D" not in url + assert "networkIds%5B%5D" not in url + def test_shared_paged_get_honors_retry_after(self, monkeypatch): sleeps = [] calls = {"count": 0} From 56a2319f03676c4579676d1cb62e510a7922547a Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 17:15:26 -0400 Subject: [PATCH 08/47] Add Meraki wireless standards references --- meraki_backup.py | 12 ++ reporting/app.py | 17 ++ .../reference/wireless_design_reference.json | 104 ++++++++++++ reporting/sections.py | 154 +++++++++++++++++- tests/test_report.py | 23 +++ 5 files changed, 307 insertions(+), 3 deletions(-) create mode 100644 reporting/reference/wireless_design_reference.json diff --git a/meraki_backup.py b/meraki_backup.py index 101da26..773625e 100755 --- a/meraki_backup.py +++ b/meraki_backup.py @@ -1174,6 +1174,7 @@ def _cached_safe_get(filename: str, path_suffix: str, label: str, params=None) - network_clients = {} wireless_clients = {} wireless_ssids = {} + wireless_event_log = {} alerts_history = {} appliance_baseline = {} appliance_uplinks_usage = {} @@ -1303,6 +1304,16 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ "Alerts history unavailable", capability_aware=True, ) + wireless_event_log[net_id] = _load_or_fetch_net( + "wireless_event_log.json", + lambda: safe_get_one( + f"/networks/{net_id}/events", + api_key, + params={"productType": "wireless", "perPage": PER_PAGE_EVENTS}, + ), + "Wireless event log unavailable", + capability_aware=True, + ) if "appliance" in (net.get("productTypes") or []): net_baseline: Dict[str, Any] = {} @@ -1493,6 +1504,7 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ write_json(_pf("network_clients.json"), network_clients) write_json(_pf("wireless_clients.json"), wireless_clients) write_json(_pf("wireless_ssids.json"), wireless_ssids) + write_json(_pf("wireless_event_log.json"), wireless_event_log) write_json(_pf("alerts_history.json"), alerts_history) 
write_json(_pf("appliance_uplinks_usage.json"), appliance_uplinks_usage) write_json(_pf("appliance_vlans.json"), appliance_vlans) diff --git a/reporting/app.py b/reporting/app.py index aa3d844..277d9da 100644 --- a/reporting/app.py +++ b/reporting/app.py @@ -60,6 +60,11 @@ "reference", "ups_runtime_reference.json", ) +WIRELESS_DESIGN_REFERENCE_PATH = os.path.join( + os.path.dirname(os.path.abspath(__file__)), + "reference", + "wireless_design_reference.json", +) UPS_LOAD_BUFFER_RATIO = 0.10 @@ -117,6 +122,14 @@ def _load_ups_payload(org_dir: str) -> Dict[str, Any]: ) +def _load_wireless_design_reference(org_dir: str) -> Dict[str, Any]: + return ( + load_json(os.path.join(org_dir, "wireless_design_reference.json")) + or load_json(WIRELESS_DESIGN_REFERENCE_PATH) + or {} + ) + + def _format_money(value: int | float | None) -> str: if not isinstance(value, (int, float)) or isinstance(value, bool): return "Pricing needed" @@ -795,6 +808,7 @@ def _flatten_client_records(raw: Any) -> List[Dict[str, Any]]: wireless_settings = load_json(os.path.join(org_dir, "wireless_settings.json")) or {} wireless_ssids = load_json(os.path.join(org_dir, "wireless_ssids.json")) or {} alerts_history = load_json(os.path.join(org_dir, "alerts_history.json")) or {} + wireless_event_log = load_json(os.path.join(org_dir, "wireless_event_log.json")) or {} wireless_mesh_statuses = load_json(os.path.join(org_dir, "wireless_mesh_statuses.json")) or {} appliance_vlans = load_json(os.path.join(org_dir, "appliance_vlans.json")) or {} appliance_dhcp_subnets = load_json(os.path.join(org_dir, "appliance_dhcp_subnets.json")) or {} @@ -802,6 +816,7 @@ def _flatten_client_records(raw: Any) -> List[Dict[str, Any]]: pricing_payload = _load_pricing_payload(org_dir) hardware_catalog = _load_hardware_catalog(org_dir) ups_payload = _load_ups_payload(org_dir) + wireless_design_reference = _load_wireless_design_reference(org_dir) # switch_port_configs / statuses are {serial: [port, …]} dicts — flatten, # injecting 
switchSerial so downstream code can reference the parent switch. @@ -1506,6 +1521,8 @@ def _build_switch_summary_for_main_report() -> str: rf_profiles, rf_profile_assignments, hardware_catalog, + wireless_design_reference, + wireless_event_log, ) config_coverage_html = _build_config_coverage_section(org_dir, networks) budget_forecast_html = _build_budget_forecast_section(inventory_summary, pricing_payload) diff --git a/reporting/reference/wireless_design_reference.json b/reporting/reference/wireless_design_reference.json new file mode 100644 index 0000000..f7b1106 --- /dev/null +++ b/reporting/reference/wireless_design_reference.json @@ -0,0 +1,104 @@ +{ + "meta": { + "name": "Meraki wireless design and RF operations reference", + "version": "2026-05-05", + "notes": [ + "Official Cisco Meraki documentation links used to support AP Spectrum recommendations.", + "Guidance is used as a standards basis for report commentary; final AP placement decisions still require floor plan review and site survey validation." 
+ ] + }, + "sources": [ + { + "id": "enterprise-rf-design", + "title": "Meraki Wireless for Enterprise Best Practices - RF Design", + "url": "https://documentation.meraki.com/Platform_Management/Dashboard_Administration/Design_and_Configure/Architectures_and_Best_Practices/Meraki_Wireless_for_Enterprise_Best_Practices/Meraki_Wireless_for_Enterprise_Best_Practices_-_RF_Design", + "appliesTo": ["placement", "density", "rf-profile", "channel-width", "power"] + }, + { + "id": "high-density", + "title": "High Density Wi-Fi Deployments", + "url": "https://documentation.meraki.com/Platform_Management/Dashboard_Administration/Design_and_Configure/Architectures_and_Best_Practices/Cisco_Meraki_Best_Practice_Design/Best_Practice_Design_-_MR_Wireless/High_Density_Wi-Fi_Deployments", + "appliesTo": ["placement", "density", "channel-width", "data-rate", "client-balancing"] + }, + { + "id": "auto-rf", + "title": "Auto RF: Wi-Fi Channel and Power Management", + "url": "https://documentation.meraki.com/MR/Operate_and_Maintain/Monitoring_and_Reporting/Auto_RF%3A__Wi-Fi_Channel_and_Power_Management", + "appliesTo": ["auto-rf", "channel", "power", "interference", "logs"] + }, + { + "id": "rf-spectrum", + "title": "RF Spectrum Page Overview", + "url": "https://documentation.meraki.com/Wireless/Operate_and_Maintain/User_Guides/Monitoring_and_Reporting/RF_Spectrum_Page_Overview", + "appliesTo": ["channel-utilization", "interference", "noise", "validation"] + }, + { + "id": "health-ap-details", + "title": "Meraki Health - MR Access Point Details", + "url": "https://documentation.meraki.com/Platform_Management/Dashboard_Administration/Operate_and_Maintain/Monitoring_and_Reporting/Meraki_Health_Overview/Meraki_Health_-_MR_Access_Point_Details", + "appliesTo": ["wireless-health", "channel-utilization", "latency", "logs"] + }, + { + "id": "location-deployment", + "title": "Location Deployment Guidelines", + "url": 
"https://documentation.meraki.com/Wireless/Design_and_Configure/Deployment_Guides/Location_Deployment_Guidelines", + "appliesTo": ["placement", "floor-plan", "site-survey", "mounting"] + }, + { + "id": "afc", + "title": "Automatic Frequency Coordination", + "url": "https://documentation.meraki.com/Wireless/Design_and_Configure/Deployment_Guides/Automatic_Frequency_Coordination", + "appliesTo": ["6ghz", "standard-power", "mounting-height", "location"] + }, + { + "id": "api-events", + "title": "Meraki Dashboard API - Get Network Events", + "url": "https://developer.cisco.com/meraki/api-v1/get-network-events/", + "appliesTo": ["event-log", "wireless-logs", "api-backup"] + } + ], + "rules": [ + { + "id": "utilization-50-plus", + "label": "Validate high channel utilization", + "basis": "Meraki RF Spectrum documentation states that utilization above 50% is likely to create performance issues, with severity increasing as utilization rises.", + "sourceIds": ["rf-spectrum", "health-ap-details"] + }, + { + "id": "non-wifi-noise", + "label": "Separate non-Wi-Fi interference from AP overlap", + "basis": "Meraki Wireless Health and RF Spectrum separate 802.11 utilization from non-802.11 interference. 
Non-Wi-Fi saturation should be investigated before removing APs.", + "sourceIds": ["rf-spectrum", "health-ap-details", "auto-rf"] + }, + { + "id": "high-density-channel-width", + "label": "Use narrower channels in dense environments", + "basis": "Meraki high-density guidance recommends 20 MHz 5 GHz channels where density and channel reuse matter.", + "sourceIds": ["high-density", "enterprise-rf-design"] + }, + { + "id": "auto-rf-domain", + "label": "Keep RF neighbors in a common RF domain", + "basis": "Meraki enterprise architecture guidance notes Auto RF works on a Meraki Network basis, so APs that are RF neighbors should share the same Meraki Network when practical.", + "sourceIds": ["auto-rf", "enterprise-rf-design"] + }, + { + "id": "site-survey", + "label": "Validate physical placement with floor plan or survey", + "basis": "Meraki deployment guidance recommends validating AP coverage and placement with a site survey/floor plan instead of relying only on dashboard telemetry.", + "sourceIds": ["location-deployment", "high-density"] + }, + { + "id": "6ghz-afc", + "label": "Validate 6 GHz enablement and AFC requirements", + "basis": "Meraki AFC documentation describes RF profile enablement, AP height, and location requirements for 6 GHz Standard Power operation.", + "sourceIds": ["afc", "auto-rf"] + }, + { + "id": "event-log-correlation", + "label": "Correlate RF findings with wireless event logs", + "basis": "The Meraki event log API can return wireless events by network/AP; these events should be used as supporting context rather than sole RF evidence.", + "sourceIds": ["api-events", "health-ap-details"] + } + ] +} diff --git a/reporting/sections.py b/reporting/sections.py index afec2ce..a6e1dae 100644 --- a/reporting/sections.py +++ b/reporting/sections.py @@ -729,12 +729,61 @@ def _build_ap_spectrum_report( rf_profiles: Any, rf_profile_assignments: Any = None, hardware_catalog: Optional[Dict[str, Any]] = None, + wireless_design_reference: Optional[Dict[str, 
Any]] = None, + wireless_event_log: Any = None, ) -> str: catalog_models = ( hardware_catalog.get("models") if isinstance(hardware_catalog, dict) and isinstance(hardware_catalog.get("models"), dict) else {} ) + design_sources = { + str(src.get("id")): src + for src in (wireless_design_reference or {}).get("sources", []) + if isinstance(src, dict) and src.get("id") and src.get("url") + } + design_rules = { + str(rule.get("id")): rule + for rule in (wireless_design_reference or {}).get("rules", []) + if isinstance(rule, dict) and rule.get("id") + } + + def _source_links(source_ids: List[str]) -> str: + links = [] + for source_id in source_ids: + source = design_sources.get(source_id) + if not source: + continue + links.append( + f'{_he(str(source.get("title") or source_id))}' + ) + return "; ".join(links) if links else "Local RF heuristics; no external reference mapped." + + def _rule_source_ids(rule_ids: List[str]) -> List[str]: + source_ids: List[str] = [] + for rule_id in rule_ids: + rule = design_rules.get(rule_id) or {} + for source_id in rule.get("sourceIds", []): + if source_id not in source_ids: + source_ids.append(str(source_id)) + return source_ids + + def _rules_table(rule_ids: List[str]) -> str: + rows = [] + for rule_id in rule_ids: + rule = design_rules.get(rule_id) + if not rule: + continue + rows.append( + "" + f"{_he(str(rule.get('label') or rule_id))}" + f"{_he(str(rule.get('basis') or ''))}" + f"{_source_links([str(src) for src in rule.get('sourceIds', [])])}" + "" + ) + if not rows: + return 'No official Meraki wireless reference catalog was loaded.' 
+ return "".join(rows) def _band_stats(row: Dict[str, Any]) -> Dict[str, Dict[str, float]]: bands: Dict[str, Dict[str, float]] = {} @@ -853,6 +902,73 @@ def _flatten_assignments(raw: Any) -> List[Dict[str, Any]]: if item.get("serial") } + def _flatten_wireless_events(raw: Any) -> List[Dict[str, Any]]: + events: List[Dict[str, Any]] = [] + if isinstance(raw, dict): + for payload in raw.values(): + if isinstance(payload, dict) and isinstance(payload.get("events"), list): + events.extend(event for event in payload["events"] if isinstance(event, dict)) + elif isinstance(payload, list): + for item in payload: + if isinstance(item, dict) and isinstance(item.get("events"), list): + events.extend(event for event in item["events"] if isinstance(event, dict)) + elif isinstance(item, dict): + events.append(item) + elif isinstance(raw, list): + for item in raw: + if isinstance(item, dict) and isinstance(item.get("events"), list): + events.extend(event for event in item["events"] if isinstance(event, dict)) + elif isinstance(item, dict): + events.append(item) + return events + + event_issue_fragments = ( + "fail", + "failed", + "failure", + "denied", + "deauth", + "disassoc", + "radar", + "dfs", + "channel change", + "interference", + "noise", + "poor", + ) + wireless_events = _flatten_wireless_events(wireless_event_log) + events_by_serial: Dict[str, List[Dict[str, Any]]] = {} + for event in wireless_events: + serial = str(event.get("deviceSerial") or "") + if serial: + events_by_serial.setdefault(serial, []).append(event) + + def _event_context(ap: Dict[str, Any]) -> Dict[str, Any]: + events = events_by_serial.get(ap["serial"], []) + issue_events = [] + for event in events: + text = " ".join( + str(event.get(key) or "") + for key in ("type", "description", "category") + ).lower() + if any(fragment in text for fragment in event_issue_fragments): + issue_events.append(event) + recent_types: Dict[str, int] = {} + for event in issue_events[:25]: + event_type = 
str(event.get("type") or event.get("description") or "wireless event") + recent_types[event_type] = recent_types.get(event_type, 0) + 1 + if issue_events: + top = ", ".join(f"{count} {event_type}" for event_type, count in list(recent_types.items())[:3]) + summary = f"{len(issue_events)} related wireless event(s) in the captured log: {top}." + cls = "check-warning" + elif events: + summary = f"{len(events)} wireless event(s) captured; no obvious failure/interference keywords were found." + cls = "check-pass" + else: + summary = "No AP-specific wireless event log entries were captured for this AP." + cls = "check-warning" + return {"events": events, "issues": issue_events, "summary": summary, "class": cls} + def _profile_settings_by_id(net_id: str) -> Dict[str, Dict[str, Any]]: profiles = rf_profiles.get(net_id) if isinstance(rf_profiles, dict) else None if not isinstance(profiles, list): @@ -1116,6 +1232,7 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: bubble_label, bubble_cls = _bubble(worst_stats) severity = _severity(worst_stats) clients = _client_stats(serial, net_id) + log_context = _event_context({"serial": serial}) ap_records.append( { "site": net_data.get("name") or "Unassigned", @@ -1131,6 +1248,7 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: "bubble_cls": bubble_cls, "severity": severity, "clients": clients, + "log_context": log_context, } ) @@ -1152,6 +1270,7 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: ) bubble_label, bubble_cls = _bubble(worst_stats) severity = _severity(worst_stats) + log_context = _event_context({"serial": serial}) ap_records.append( { "site": net_data.get("name") or "Unassigned", @@ -1167,6 +1286,7 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: "bubble_cls": bubble_cls, "severity": severity, "clients": _client_stats(serial, net_id), + "log_context": log_context, } ) @@ -1300,6 +1420,22 @@ def _recommendation(ap: Dict[str, Any]) -> str: return "Re-run the 
backup after the AP is online and reporting channel utilization; no RF decision should be made from missing telemetry alone." return "No immediate removal recommendation from current telemetry. Keep this AP in the upgrade plan unless the floor plan shows unnecessary overlap. " + power + def _standards_for_ap(ap: Dict[str, Any]) -> List[str]: + stats = ap["worst_stats"] or {} + rule_ids = ["utilization-50-plus", "event-log-correlation"] + if stats.get("non_wifi", 0.0) >= 15: + rule_ids.append("non-wifi-noise") + if "WAY TOO CLOSE" in ap["bubble"] or "Too close" in ap["bubble"] or "Tight" in ap["bubble"]: + rule_ids.extend(["site-survey", "high-density-channel-width", "auto-rf-domain"]) + cap = _ap_capability(ap) + if cap.get("sixGhzCapable"): + rule_ids.append("6ghz-afc") + result: List[str] = [] + for rule_id in rule_ids: + if rule_id not in result: + result.append(rule_id) + return result + def _priority_action(ap: Dict[str, Any]) -> str: stats = ap["worst_stats"] or {} power = _power_context(ap, ap["worst_band"]) @@ -1333,11 +1469,12 @@ def _priority_action(ap: Dict[str, Any]) -> str: f"{_he((ap.get('severity') or {}).get('label', 'Unknown'))}
    {_he(ap['bubble'])}" f"{_he(_profile_name(ap))}" f"{_he(_priority_action(ap))}" + f"{_source_links(_rule_source_ids(_standards_for_ap(ap)))}" "" for ap in severity_queue[:24] ) if not priority_rows: - priority_rows = 'No APs require immediate RF remediation from this telemetry window.' + priority_rows = 'No APs require immediate RF remediation from this telemetry window.' ap_pages = [] for ap in sorted( @@ -1353,6 +1490,8 @@ def _priority_action(ap: Dict[str, Any]) -> str: severity = ap.get("severity") or _severity(stats) cap = _ap_capability(ap) profile_ctx = _profile_band_context(ap) + log_ctx = ap.get("log_context") or {"summary": "No wireless event log context was captured.", "class": "check-warning"} + standard_links = _source_links(_rule_source_ids(_standards_for_ap(ap))) ap_pages.append( f"""
    @@ -1382,7 +1521,11 @@ def _priority_action(ap: Dict[str, Any]) -> str:
    Recommendation
    -
    {_he(_recommendation(ap))}
    +
    + {_he(_recommendation(ap))} +
    Wireless Event Log Context: {_he(str(log_ctx.get('summary') or 'No wireless event log context was captured.'))} +
    Standards basis: {standard_links} +
    """ @@ -1410,9 +1553,14 @@ def _priority_action(ap: Dict[str, Any]) -> str: Clean bubble means no current overlap symptom. Within range means normal overlap for roaming. Tight bubble means tune channel/power/placement. Too close and WAY TOO CLOSE mean AP-to-AP overlap should be reviewed before adding or replacing APs. RF Noise means non-Wi-Fi energy is saturating the band; find the external source before removing APs. +

    Meraki Standards Basis

    + + + {_rules_table(['utilization-50-plus', 'non-wifi-noise', 'high-density-channel-width', 'auto-rf-domain', 'site-survey', '6ghz-afc', 'event-log-correlation'])}
    GuidanceHow This Report Uses ItOfficial Reference

    Interference Severity Queue

    - + {priority_rows}
    SiteAPModel CapabilityBandSeverity / SymptomRF ProfileGuidance
    SiteAPModel CapabilityBandSeverity / SymptomRF ProfileGuidanceStandards Basis
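The per-AP "Wireless Event Log Context" added in this patch is keyword triage: each captured event's type, description, and category are joined and matched against `event_issue_fragments`. A small sketch of that triage — the fragment list below is abbreviated from the patch's full tuple:

```python
# Sketch of the wireless event-log triage: substring-match a keyword list
# against each event's text fields. Fragment list abbreviated for the example.
ISSUE_FRAGMENTS = ("fail", "deauth", "disassoc", "radar", "dfs", "interference", "noise")

def issue_events(events):
    flagged = []
    for event in events:
        # Missing fields collapse to "" so partial events never raise.
        text = " ".join(
            str(event.get(key) or "") for key in ("type", "description", "category")
        ).lower()
        if any(fragment in text for fragment in ISSUE_FRAGMENTS):
            flagged.append(event)
    return flagged

sample = [
    {"type": "association_fail", "description": "802.11 association failure"},
    {"type": "settings_changed", "description": "SSID updated"},
]
print(len(issue_events(sample)))  # -> 1
```

As the reference rules note, these keyword hits are supporting context only — an AP with flagged events still needs the channel-utilization telemetry to justify any RF action.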
    diff --git a/tests/test_report.py b/tests/test_report.py index 9d800cb..21a75cb 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -871,10 +871,33 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): ), encoding="utf-8", ) + (tmp_path / "wireless_event_log.json").write_text( + json.dumps( + { + "N_test_001": { + "events": [ + { + "occurredAt": "2026-05-05T12:00:00Z", + "type": "association_fail", + "description": "802.11 association failure", + "category": "80211", + "deviceSerial": "Q2AP-TEST-0001", + "deviceName": "AP-1F-01", + } + ] + } + } + ), + encoding="utf-8", + ) html = build_org_report(str(tmp_path), "AP Spectrum Test", report_kind="ap_spectrum") assert "AP Spectrum Availability & Interference Report" in html assert html.count("ap-unit-page") >= 2 + assert "Meraki Standards Basis" in html + assert "High Density Wi-Fi Deployments" in html + assert "Wireless Event Log Context" in html + assert "association_fail" in html assert "WAY TOO CLOSE / saturated RF bubble" in html assert "Same-Band Context / Overlap Candidates" in html assert "Current RF profile: Classroom Low Power (exact AP assignment)" in html From bc120d9abf50d4e2e7a81dc7a2fafec17aa568e6 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 17:38:14 -0400 Subject: [PATCH 09/47] Fix Meraki AP telemetry collection --- meraki_backup.py | 10 ++++---- reporting/sections.py | 55 +++++++++++++++++++++++++++++++++++++------ tests/test_backup.py | 18 ++++++++++++++ tests/test_report.py | 19 +++++++++++++++ 4 files changed, 90 insertions(+), 12 deletions(-) diff --git a/meraki_backup.py b/meraki_backup.py index 773625e..2226949 100755 --- a/meraki_backup.py +++ b/meraki_backup.py @@ -1198,8 +1198,8 @@ def _cached_safe_get(filename: str, path_suffix: str, label: str, params=None) - f"/organizations/{org_id}/wireless/rfProfiles/assignments/byDevice", api_key, params={ - "productTypes": ["wireless"], - "networkIds": network_id_filter, + 
"productTypes[]": ["wireless"], + "networkIds[]": network_id_filter, }, ) if rf_assign_err: @@ -1526,16 +1526,16 @@ def _load_or_fetch_net(filename: str, fetcher: Callable[[], Tuple[Any, Optional[ switch_findings = recommend_switch_ports(port_statuses, port_configs) poe_summary = summarize_poe_power(port_statuses, TIMESPAN_24H) _ch_path = _pf("channel_utilization_by_device.json") - if _cache_is_fresh(_ch_path, max_age_h=max_age_h, force=force): + if _cache_is_fresh_success(_ch_path, max_age_h=max_age_h, force=force): channel_utilization = _load_json_file(_ch_path) err = None log_line(log_f, "INFO", f"Channel utilization (cached) for {org_name}") else: - channel_utilization, err = safe_get_one( + channel_utilization, err = safe_paged_get( f"/organizations/{org_id}/wireless/devices/channelUtilization/byDevice", api_key, params={ - "networkIds": [n.get("id") for n in networks if n.get("id")], + "networkIds[]": [n.get("id") for n in networks if n.get("id")], "timespan": 86400, }, ) diff --git a/reporting/sections.py b/reporting/sections.py index a6e1dae..6d3f69d 100644 --- a/reporting/sections.py +++ b/reporting/sections.py @@ -441,10 +441,21 @@ def _build_ap_interference_section( switch_port_statuses_by_switch: Dict[str, Any], ) -> str: if not isinstance(channel_util, list): + error = ( + " ".join(str(channel_util.get("error") or "").split()) + if isinstance(channel_util, dict) and channel_util.get("error") + else "" + ) + body = ( + "Meraki channel-utilization collection failed for this backup, so AP interference cannot be scored until the backup is rerun successfully. " + f"Collection error: {_he(error[:259] + '...' if len(error) > 260 else error)}" + if error + else "No AP channel utilization data was available for interference analysis." + ) return """

    14. AP Interference Audit

    -
    No AP channel utilization data was available for interference analysis.
    +
    """ + body + """
    """ @@ -747,6 +758,17 @@ def _build_ap_spectrum_report( for rule in (wireless_design_reference or {}).get("rules", []) if isinstance(rule, dict) and rule.get("id") } + channel_util_error = ( + str(channel_util.get("error") or "") + if isinstance(channel_util, dict) and channel_util.get("error") + else "" + ) + + def _short_error(error: str, limit: int = 260) -> str: + if not error: + return "" + compact = " ".join(str(error).split()) + return compact if len(compact) <= limit else compact[: limit - 1].rstrip() + "..." def _source_links(source_ids: List[str]) -> str: links = [] @@ -800,7 +822,7 @@ def _band_stats(row: Dict[str, Any]) -> Dict[str, Dict[str, float]]: def _bubble(stats: Dict[str, float] | None) -> Tuple[str, str]: if not stats: - return ("No telemetry", "check-warning") + return ("Missing RF data", "check-warning") wifi = stats.get("wifi", 0.0) total = stats.get("total", 0.0) non_wifi = stats.get("non_wifi", 0.0) @@ -826,7 +848,7 @@ def _severity(stats: Dict[str, float] | None) -> Dict[str, Any]: if not stats: return { "rank": 1, - "label": "No telemetry", + "label": "Missing RF data", "class": "check-warning", "score": 0.0, "action": "Bring AP online or collect fresh channel utilization before making RF decisions.", @@ -1201,10 +1223,11 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: } return {"assoc": 0, "auth": 0, "success": 0} + channel_util_rows = channel_util if isinstance(channel_util, list) else [] util_by_serial = { row.get("serial"): row - for row in channel_util - if isinstance(channel_util, list) and isinstance(row, dict) and row.get("serial") + for row in channel_util_rows + if isinstance(row, dict) and row.get("serial") } ap_records: List[Dict[str, Any]] = [] seen: set[str] = set() @@ -1417,6 +1440,12 @@ def _recommendation(ap: Dict[str, Any]) -> str: if stats.get("non_wifi", 0.0) >= 15: return "Inspect for non-Wi-Fi noise sources near this AP before replacing hardware. 
New APs will still share the same noisy spectrum. " + power if not ap["bands"]: + if channel_util_error: + return ( + "Channel utilization collection failed for this backup, so no RF conclusion should be made from this page yet. " + "Fix the collection error and rerun the backup/report pipeline before judging AP placement or replacement. " + f"Collection error: {_short_error(channel_util_error)}" + ) return "Re-run the backup after the AP is online and reporting channel utilization; no RF decision should be made from missing telemetry alone." return "No immediate removal recommendation from current telemetry. Keep this AP in the upgrade plan unless the floor plan shows unnecessary overlap. " + power @@ -1476,6 +1505,17 @@ def _priority_action(ap: Dict[str, Any]) -> str: if not priority_rows: priority_rows = 'No APs require immediate RF remediation from this telemetry window.' + telemetry_warning_html = "" + if channel_util_error: + telemetry_warning_html = f""" +
    +
    Telemetry Collection Warning
    +
    + Meraki channel-utilization collection failed for this backup, so AP-level RF bubbles cannot be populated until the backup is rerun successfully. Collection error: {_he(_short_error(channel_util_error))} +
    +
    + """ + ap_pages = [] for ap in sorted( ap_records, @@ -1541,10 +1581,11 @@ def _priority_action(ap: Dict[str, Any]) -> str:
    Too Close
    {len(high_pressure)}
    High co-channel pressure
    RF Noise
    {len(noise_pressure)}
    Non-Wi-Fi interference
    Severe+
    {sum(1 for ap in with_telemetry if (ap.get('severity') or {}).get('rank', 0) >= 5)}
    Fix before refresh decisions
    -
    Missing Data
    {len(no_telemetry)}
    Offline/dormant/no channel data
    +
    Missing RF Data
    {len(no_telemetry)}
    Offline/dormant/no channel data
    + {telemetry_warning_html} - + {site_rows}
    SiteAPsToo CloseTight BubbleNo Telemetry
    SiteAPsToo CloseTight BubbleMissing RF Data
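Patch 09's core fix swaps `networkIds` for `networkIds[]` (and `productTypes` for `productTypes[]`) so the query string carries bracket-suffixed repeated keys, which the new `test_build_url_supports_meraki_bracket_array_params` cases then assert. A minimal standalone sketch of that encoding, assuming a `urlencode`-based helper shaped like the repo's `build_url` (the real implementation in `meraki_backup.py` may differ):

```python
from urllib.parse import urlencode

BASE = "https://api.meraki.com/api/v1"

def build_url(path, params=None):
    """Encode params so list values repeat their bracket-suffixed key:
    {"networkIds[]": ["N_1", "N_2"]} -> networkIds%5B%5D=N_1&networkIds%5B%5D=N_2
    """
    if not params:
        return BASE + path
    # doseq=True emits one key=value pair per list element; urlencode
    # percent-encodes the literal brackets as %5B%5D.
    return BASE + path + "?" + urlencode(params, doseq=True)

url = build_url(
    "/organizations/1/wireless/devices/channelUtilization/byDevice",
    {"networkIds[]": ["N_1", "N_2"], "timespan": 86400},
)
# url ends with ?networkIds%5B%5D=N_1&networkIds%5B%5D=N_2&timespan=86400
```

`doseq=True` is what turns the list into one pair per element; scalar values like `timespan` pass through as a single pair, matching the test assertions above.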
    diff --git a/tests/test_backup.py b/tests/test_backup.py index 5f65efa..1c5d645 100644 --- a/tests/test_backup.py +++ b/tests/test_backup.py @@ -183,6 +183,24 @@ def test_build_url_repeats_array_params_without_bracket_suffix(self): assert "productTypes%5B%5D" not in url assert "networkIds%5B%5D" not in url + def test_build_url_supports_meraki_bracket_array_params(self): + url = mc.build_url( + "/organizations/1/wireless/devices/channelUtilization/byDevice", + {"networkIds[]": ["N_1", "N_2"], "timespan": 86400}, + ) + assert "networkIds%5B%5D=N_1" in url + assert "networkIds%5B%5D=N_2" in url + assert "timespan=86400" in url + + def test_build_url_supports_meraki_multiple_bracket_arrays(self): + url = mc.build_url( + "/organizations/1/wireless/rfProfiles/assignments/byDevice", + {"productTypes[]": ["wireless"], "networkIds[]": ["N_1", "N_2"]}, + ) + assert "productTypes%5B%5D=wireless" in url + assert "networkIds%5B%5D=N_1" in url + assert "networkIds%5B%5D=N_2" in url + def test_shared_paged_get_honors_retry_after(self, monkeypatch): sleeps = [] calls = {"count": 0} diff --git a/tests/test_report.py b/tests/test_report.py index 21a75cb..fd22a3d 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -908,6 +908,25 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): assert "Current severe interference means the organization may not feel the value of this Wi-Fi 6 AP until RF is remediated" in html assert "Executive Summary" not in html + def test_ap_spectrum_surfaces_channel_utilization_collection_error(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + (tmp_path / "channel_utilization_by_device.json").write_text( + json.dumps({"error": "HTTP 400: networkIds must be an array"}), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "AP Spectrum Test", 
report_kind="ap_spectrum") + assert "Telemetry Collection Warning" in html + assert "Channel utilization collection failed for this backup" in html + assert "networkIds must be an array" in html + def test_ap_spectrum_distinguishes_external_noise_from_ap_overlap(self, tmp_path): from reporting.app import build_org_report From 191f104e39a53d39786f7933e4db70b972588eaf Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 17:49:03 -0400 Subject: [PATCH 10/47] Add standalone battery backup report --- reporting/app.py | 37 +++++++++++++++++++++++++++++++++++++ tests/test_pipeline.py | 5 +++++ tests/test_report.py | 7 +++++++ 3 files changed, 49 insertions(+) diff --git a/reporting/app.py b/reporting/app.py index 277d9da..85b4851 100644 --- a/reporting/app.py +++ b/reporting/app.py @@ -696,6 +696,40 @@ def generate_org_reports( html_targets.extend([latest_backup_html_alias, latest_backup_html_compat]) _cleanup_paths(tuple(path for path in html_targets if path)) + battery_body = build_org_report(source_dir, org_name, report_kind="battery_backup") + battery_html = build_html(f"{org_name} — Battery Backup Pricing & Runtime Calculation", battery_body) + battery_html_path = os.path.join(output_dir, f"{_slug}_{_stamp}_battery_backup_report.html") + battery_pdf_path = os.path.join(output_dir, f"{_slug}_{_stamp}_battery_backup_report.pdf") + battery_named_html_alias = os.path.join(output_dir, _dated_report_name(org_name, "Battery_Backup_Pricing_Calculation", _run_ts, "html")) + battery_named_pdf_alias = os.path.join(output_dir, _dated_report_name(org_name, "Battery_Backup_Pricing_Calculation", _run_ts, "pdf")) + battery_html_alias = os.path.join(output_dir, "report_battery_backup.html") + battery_pdf_alias = os.path.join(output_dir, "report_battery_backup.pdf") + if latest_dir: + battery_html_path = battery_named_html_alias + battery_pdf_path = battery_named_pdf_alias + battery_html_alias = None + battery_pdf_alias = None + latest_battery_html_alias = 
os.path.join(latest_dir, _dated_report_name(org_name, "Battery_Backup_Pricing_Calculation", _run_ts, "html")) if latest_dir else None + latest_battery_pdf_alias = os.path.join(latest_dir, _dated_report_name(org_name, "Battery_Backup_Pricing_Calculation", _run_ts, "pdf")) if latest_dir else None + latest_battery_html_compat = os.path.join(latest_dir, "report_battery_backup.html") if latest_dir else None + latest_battery_pdf_compat = os.path.join(latest_dir, "report_battery_backup.pdf") if latest_dir else None + _write_text_aliases(battery_html, (battery_html_path, battery_named_html_alias, battery_html_alias)) + if latest_dir: + _write_text_aliases(battery_html, (latest_battery_html_alias, latest_battery_html_compat)) + battery_pdf_ok = write_pdf(battery_html_path, battery_pdf_path) + if battery_pdf_ok: + _copy_existing(battery_pdf_path, (battery_named_pdf_alias, battery_pdf_alias)) + if latest_dir: + _copy_existing(battery_pdf_path, (latest_battery_pdf_alias, latest_battery_pdf_compat)) + log.info("Battery Backup PDF → %s", battery_named_pdf_alias) + else: + log.info("Battery Backup HTML → %s (no PDF tool found)", battery_html_path) + if not keep_html and battery_pdf_ok: + html_targets = [battery_html_path, battery_named_html_alias, battery_html_alias] + if latest_dir: + html_targets.extend([latest_battery_html_alias, latest_battery_html_compat]) + _cleanup_paths(tuple(path for path in html_targets if path)) + ap_spectrum_body = build_org_report(source_dir, org_name, report_kind="ap_spectrum") ap_spectrum_html = build_html(f"{org_name} — AP Spectrum & Interference Report", ap_spectrum_body) ap_spectrum_html_path = os.path.join(output_dir, f"{_slug}_{_stamp}_ap_spectrum_report.html") @@ -4715,6 +4749,7 @@ def _phase_amount(*categories: str, field: str = "hardware") -> int: ) exec_body = cover_html + _schema_banner + exec_html + report_guide_html + end_report_html ap_spectrum_body = cover_html + _schema_banner + ap_spectrum_html + end_report_html + battery_body = 
cover_html + _schema_banner + ups_html + end_report_html backup_body = ( cover_html + _schema_banner @@ -4731,6 +4766,8 @@ def _phase_amount(*categories: str, field: str = "hardware") -> int: if report_kind == "exec": return exec_body + if report_kind in {"battery_backup", "battery-backup", "battery", "ups", "ups_runtime"}: + return battery_body if report_kind in {"ap_spectrum", "ap-spectrum", "ap_interference"}: return ap_spectrum_body if report_kind == "backup": diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py index 6405932..e5542cf 100644 --- a/tests/test_pipeline.py +++ b/tests/test_pipeline.py @@ -274,11 +274,13 @@ def fake_write_pdf(html_path, pdf_path): "--fixed-now", "2026-05-02T21:30:00", ]) == 0 assert (output / "Demo_Org_Complete_Report_2026-05-02.pdf").exists() + assert (output / "Demo_Org_Battery_Backup_Pricing_Calculation_Report_2026-05-02.pdf").exists() assert (output / "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf").exists() assert (output / "Demo_Org_UPS_Switch_Power_Plan_Report_2026-05-02.json").exists() assert (output / "ups_switch_power_plan.json").exists() assert (output / "Demo_Org_2026-05-02_2130_report.pdf").exists() assert (output / "report.pdf").exists() + assert (output / "report_battery_backup.pdf").exists() assert (output / "report_ap_spectrum.pdf").exists() def test_reports_dir_writes_run_and_latest_without_html_when_pdf_only(self, monkeypatch, tmp_path): @@ -307,14 +309,17 @@ def fake_write_pdf(html_path, pdf_path): run_dir = reports / "Demo_Org" / "2026-05-02_2130" latest_dir = reports / "latest" / "Demo_Org" assert (run_dir / "Demo_Org_Complete_Report_2026-05-02.pdf").exists() + assert (run_dir / "Demo_Org_Battery_Backup_Pricing_Calculation_Report_2026-05-02.pdf").exists() assert (run_dir / "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf").exists() assert (run_dir / "Demo_Org_UPS_Switch_Power_Plan_Report_2026-05-02.json").exists() assert (run_dir / "ups_switch_power_plan.json").exists() assert (latest_dir / 
"Demo_Org_Complete_Report_2026-05-02.pdf").exists() + assert (latest_dir / "Demo_Org_Battery_Backup_Pricing_Calculation_Report_2026-05-02.pdf").exists() assert (latest_dir / "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf").exists() assert (latest_dir / "Demo_Org_UPS_Switch_Power_Plan_Report_2026-05-02.json").exists() assert (latest_dir / "ups_switch_power_plan.json").exists() assert (latest_dir / "report.pdf").exists() + assert (latest_dir / "report_battery_backup.pdf").exists() assert (latest_dir / "report_ap_spectrum.pdf").exists() assert not (run_dir / "report.pdf").exists() assert not (run_dir / "Demo_Org_2026-05-02_2130_report.pdf").exists() diff --git a/tests/test_report.py b/tests/test_report.py index fd22a3d..3f896d1 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -248,6 +248,13 @@ def test_ups_runtime_planning_uses_poe_and_apc_reference(self, tmp_path): assert "10% planning buffer" in html assert "ups_switch_power_plan.json" in html assert "1 UPS + 1 external battery module" in html + + battery_html = build_org_report(str(tmp_path), "UPS Test", report_kind="battery_backup") + assert "Battery Backup Runtime Planning" in battery_html + assert "UPS Runtime Estimate by Switch" in battery_html + assert "Core-SW-1 (Q2SW-TEST-0001)" in battery_html + assert "97.5 W" in battery_html + assert "Executive Summary" not in battery_html assert "$3,487.04" in html def test_ups_power_plan_json_payload_includes_buffered_switch_load(self, tmp_path): From 07eb37242a6705e089d75edf68950e0db81f5997 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 18:03:18 -0400 Subject: [PATCH 11/47] Add executive guidance to AP and UPS reports --- reporting/app.py | 82 +++++++++++++++++++++++++++ reporting/sections.py | 125 +++++++++++++++++++++++++++++++++++++++--- tests/test_report.py | 27 ++++++++- 3 files changed, 225 insertions(+), 9 deletions(-) diff --git a/reporting/app.py b/reporting/app.py index 85b4851..f96b035 100644 --- a/reporting/app.py +++ 
b/reporting/app.py @@ -3315,6 +3315,78 @@ def _infer_product(*versions: Any) -> str: smx_max = smx_ref.get("max_watts") if isinstance(smx_ref, dict) else None smx_unit = smx_ref.get("unit_cost") if isinstance(smx_ref, dict) else None smx_ext = smx_ref.get("external_battery_unit_cost") if isinstance(smx_ref, dict) else None + ups_switch_items = ups_power_plan.get("switches", []) if isinstance(ups_power_plan.get("switches"), list) else [] + target_stacks = [ + ((item.get("runtimeEstimates") or {}).get("SMX2200RMLV2UTargetStack") or {}) + for item in ups_switch_items + if isinstance(item, dict) + ] + target_costs = [ + float(stack.get("estimatedCost")) + for stack in target_stacks + if isinstance(stack.get("estimatedCost"), (int, float)) + ] + total_target_cost = sum(target_costs) if target_costs else None + max_external_batteries = max( + [int(stack.get("externalBatteryCount") or 0) for stack in target_stacks if stack.get("externalBatteryCount") is not None], + default=0, + ) + no_target_stack_count = sum(1 for stack in target_stacks if stack.get("runtimeMinutes") is None) + bx_runtime_minutes = [ + float(((item.get("runtimeEstimates") or {}).get("BX1500M") or {}).get("runtimeMinutes")) + for item in ups_switch_items + if isinstance((((item.get("runtimeEstimates") or {}).get("BX1500M") or {}).get("runtimeMinutes")), (int, float)) + ] + smx_base_runtime_minutes = [ + float(((item.get("runtimeEstimates") or {}).get("SMX2200RMLV2UBase") or {}).get("runtimeMinutes")) + for item in ups_switch_items + if isinstance((((item.get("runtimeEstimates") or {}).get("SMX2200RMLV2UBase") or {}).get("runtimeMinutes")), (int, float)) + ] + smx_base_below_target_count = sum(1 for mins in smx_base_runtime_minutes if mins < ups_target_hours * 60) + site_plan_summary = ups_power_plan.get("sites") if isinstance(ups_power_plan.get("sites"), dict) else {} + heaviest_site = "" + if site_plan_summary: + heaviest_site, _heaviest_data = max( + site_plan_summary.items(), + key=lambda item: 
float((item[1] or {}).get("totalSizingLoadWatts") or 0) if isinstance(item[1], dict) else 0, + ) + battery_recommendations = [] + if ups_rows: + battery_recommendations.append( + f"Use the Smart-UPS X stack as the planning standard for network closets that need the {ups_target_hours:g} hour runtime target; the BX1500M should be treated as a short-runtime single-switch fallback." + ) + if total_target_cost is not None: + battery_recommendations.append( + f"Budget approximately {_format_money(total_target_cost)} for the modeled target-runtime switch stacks in this report, before installation, electrical work, tax, shipping, or spares." + ) + if smx_base_below_target_count: + battery_recommendations.append( + f"The base SMX2200RMLV2U alone is below the {ups_target_hours:g} hour target for {smx_base_below_target_count} switch load(s), so external battery modules are required where extended runtime is expected." + ) + if max_external_batteries: + battery_recommendations.append( + f"The largest modeled stack requires {max_external_batteries} external battery module(s); validate rack space, circuit capacity, and battery maintenance before quoting." + ) + if no_target_stack_count: + battery_recommendations.append( + f"{no_target_stack_count} switch load(s) did not reach the target with the available runtime chart, so those closets need manual UPS sizing." + ) + if heaviest_site: + battery_recommendations.append( + f"Highest aggregate sizing load is at {heaviest_site}; start validation there before standardizing smaller closets." 
+ ) + else: + battery_recommendations.append("No switch loads were available, so no UPS purchase action should be taken from this report yet.") + bx_window = ( + f"{_format_runtime_minutes(min(bx_runtime_minutes))} to {_format_runtime_minutes(max(bx_runtime_minutes))}" + if bx_runtime_minutes + else "not available" + ) + smx_base_window = ( + f"{_format_runtime_minutes(min(smx_base_runtime_minutes))} to {_format_runtime_minutes(max(smx_base_runtime_minutes))}" + if smx_base_runtime_minutes + else "not available" + ) ups_source_links = "" if isinstance(ups_meta, dict) and isinstance(ups_meta.get("sources"), list): links = [] @@ -3352,6 +3424,16 @@ def _infer_product(*versions: Any) -> str:
    Smart-UPS external battery stack
    +
    +
    Executive Recommendation
    +
    + The practical planning recommendation is to use the APC Smart-UPS X plus external battery modules for closets where extended runtime matters, and reserve the BX1500M class for small, non-critical edge switches where short runtime is acceptable. +
+ {''.join(f'<li>{_he(point)}</li>' for point in battery_recommendations)}
    + Runtime read: BX1500M estimated range is {_he(bx_window)} across modeled switch loads; base SMX2200RMLV2U estimated range is {_he(smx_base_window)} before adding external modules. +
    +
    Sizing Method
    diff --git a/reporting/sections.py b/reporting/sections.py index 6d3f69d..bd69487 100644 --- a/reporting/sections.py +++ b/reporting/sections.py @@ -1117,6 +1117,12 @@ def _profile_name(ap: Dict[str, Any]) -> str: _, exact, name = _assigned_profile(ap) return name if exact else f"{name} (fallback)" if name != "Profile assignment not captured" else name + def _ap_status(ap: Dict[str, Any]) -> str: + return str(ap.get("status") or "unknown").strip().lower() or "unknown" + + def _is_inactive_ap(ap: Dict[str, Any]) -> bool: + return _ap_status(ap) in {"dormant", "offline"} + def _ap_capability(ap: Dict[str, Any]) -> Dict[str, Any]: model = str(ap.get("model") or "") ref = catalog_models.get(model) if isinstance(catalog_models, dict) else None @@ -1326,16 +1332,29 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: tight_pressure = [ap for ap in with_telemetry if "Tight" in ap["bubble"]] noise_pressure = [ap for ap in with_telemetry if "non-Wi-Fi" in ap["bubble"] or "External RF" in ap["bubble"]] no_telemetry = [ap for ap in ap_records if not ap["bands"]] + inactive_no_telemetry = [ap for ap in no_telemetry if _is_inactive_ap(ap)] + online_no_telemetry = [ap for ap in no_telemetry if not _is_inactive_ap(ap)] + severe_plus = [ap for ap in with_telemetry if (ap.get("severity") or {}).get("rank", 0) >= 5] + six_ghz_value_blocked = [ + ap for ap in ap_records + if _ap_capability(ap).get("sixGhzCapable") + and "6 GHz capable AP, but" in _value_assessment(ap) + ] site_counts: Dict[str, Dict[str, int]] = {} for ap in ap_records: - site = site_counts.setdefault(ap["site"], {"aps": 0, "high": 0, "tight": 0, "missing": 0}) + site = site_counts.setdefault( + ap["site"], + {"aps": 0, "high": 0, "tight": 0, "missing_online": 0, "inactive_missing": 0}, + ) site["aps"] += 1 if ap in high_pressure: site["high"] += 1 if ap in tight_pressure: site["tight"] += 1 - if not ap["bands"]: - site["missing"] += 1 + if ap in online_no_telemetry: + site["missing_online"] += 1 + 
if ap in inactive_no_telemetry: + site["inactive_missing"] += 1 site_rows = "".join( "" @@ -1343,11 +1362,86 @@ def _client_stats(serial: str, net_id: str) -> Dict[str, int]: f"{counts['aps']}" f"{counts['high']}" f"{counts['tight']}" - f"{counts['missing']}" + f"{counts['missing_online']}" + f"{counts['inactive_missing']}" "" for site, counts in sorted(site_counts.items()) ) + def _ap_list(items: List[Dict[str, Any]], limit: int = 4) -> str: + names = [_he(str(ap.get("name") or ap.get("serial") or "Unknown AP")) for ap in items[:limit]] + if len(items) > limit: + names.append(f"{len(items) - limit} more") + return ", ".join(names) if names else "none" + + executive_points = [] + if severe_plus: + executive_points.append( + f"Remediate {len(severe_plus)} severe/critical AP RF finding(s) before judging refresh hardware value. Highest priority: {_ap_list(severe_plus)}." + ) + elif high_pressure or tight_pressure or noise_pressure: + executive_points.append( + f"Tune RF before a one-for-one replacement: {len(high_pressure)} too-close AP(s), {len(tight_pressure)} tight-bubble AP(s), and {len(noise_pressure)} AP(s) with non-Wi-Fi noise were observed." + ) + else: + executive_points.append("No severe AP spectrum remediation is indicated by this telemetry window.") + if noise_pressure: + executive_points.append( + f"Treat {len(noise_pressure)} AP(s) as possible environmental RF-noise cases; use Dashboard spectrum tools or a field survey before removing APs." + ) + if high_pressure: + executive_points.append( + f"Validate floor plans for {len(high_pressure)} AP(s) showing excessive co-channel pressure; removal, relocation, or lower transmit power may help more than adding hardware." + ) + if six_ghz_value_blocked: + executive_points.append( + f"{len(six_ghz_value_blocked)} 6 GHz-capable AP(s) appear constrained by RF/SSID profile settings, so verify 6 GHz enablement before assuming the site is getting full Wi-Fi 6E/7 value." 
+ ) + if online_no_telemetry: + executive_points.append( + f"{len(online_no_telemetry)} online/unknown AP(s) returned no per-band channel samples. Re-run collection and check Dashboard/AP health before making placement decisions for: {_ap_list(online_no_telemetry)}." + ) + if inactive_no_telemetry: + executive_points.append( + f"{len(inactive_no_telemetry)} dormant/offline AP(s) did not return RF samples. Treat them as inventory cleanup or reactivation candidates, not active RF design evidence." + ) + + executive_summary_html = f""" +
    +
    Executive Summary / Recommended Action
    +
    + This site has {len(with_telemetry)} AP(s) with usable RF telemetry out of {len(ap_records)} AP inventory record(s). The recommended action is to fix severe RF noise/overlap first, clean up dormant inventory, then use the remaining AP pages as the refresh planning baseline. +
+ {''.join(f'<li>{point}</li>' for point in executive_points)}
    +
    +
    + """ + + def _missing_rf_action(ap: Dict[str, Any]) -> str: + status = _ap_status(ap) + if status == "online": + return "Online but no channel samples were returned; rerun collection, verify Dashboard channel utilization, and inspect AP health." + if status == "dormant": + return "Dormant inventory; exclude from active RF conclusions until it checks in again." + if status == "offline": + return "Offline during collection; restore or retire before using it in RF planning." + return "No channel samples returned; verify status and rerun collection before making RF decisions." + + missing_rf_rows = "".join( + "" + f"{_he(ap['site'])}" + f"{_he(ap['name'])}
    {_he(ap['serial'])}" + f"{_he(ap['model'] or 'Unknown')}" + f"{_he(ap['status'])}" + f"{_he(_profile_name(ap))}" + f"{_he(_missing_rf_action(ap))}" + "" + for ap in sorted(no_telemetry, key=lambda item: (_is_inactive_ap(item), item["site"], item["name"]))[:40] + ) + if not missing_rf_rows: + missing_rf_rows = 'All AP inventory records returned usable per-band RF telemetry in this backup.' + def _candidate_rows(ap: Dict[str, Any]) -> str: band = ap["worst_band"] stats = ap["worst_stats"] or {} @@ -1446,7 +1540,15 @@ def _recommendation(ap: Dict[str, Any]) -> str: "Fix the collection error and rerun the backup/report pipeline before judging AP placement or replacement. " f"Collection error: {_short_error(channel_util_error)}" ) - return "Re-run the backup after the AP is online and reporting channel utilization; no RF decision should be made from missing telemetry alone." + if _is_inactive_ap(ap): + return ( + f"This AP was {_ap_status(ap)} and did not return per-band RF samples. " + "Do not count it as active RF coverage or active interference until it is restored and reporting channel utilization." + ) + return ( + "This AP did not return per-band RF samples even though it was not marked dormant/offline in the inventory. " + "Re-run collection and check Dashboard channel utilization/AP health before making placement or replacement decisions." + ) return "No immediate removal recommendation from current telemetry. Keep this AP in the upgrade plan unless the floor plan shows unnecessary overlap. " + power def _standards_for_ap(ap: Dict[str, Any]) -> List[str]: @@ -1580,14 +1682,21 @@ def _priority_action(ap: Dict[str, Any]) -> str:
    RF Telemetry
    {len(with_telemetry)}
    APs with channel utilization
    Too Close
    {len(high_pressure)}
    High co-channel pressure
    RF Noise
    {len(noise_pressure)}
    Non-Wi-Fi interference
    -
    Severe+
    {sum(1 for ap in with_telemetry if (ap.get('severity') or {}).get('rank', 0) >= 5)}
    Fix before refresh decisions
    -
    Missing RF Data
    {len(no_telemetry)}
    Offline/dormant/no channel data
    +
    Severe+
    {len(severe_plus)}
    Fix before refresh decisions
    +
    Online Missing RF
    {len(online_no_telemetry)}
    Needs collection/AP health review
    +
    Dormant/Offline
    {len(inactive_no_telemetry)}
    Inventory cleanup / inactive APs
    + {executive_summary_html} {telemetry_warning_html} - + {site_rows}
    SiteAPsToo CloseTight BubbleMissing RF Data
    SiteAPsToo CloseTight BubbleOnline Missing RFDormant/Offline Missing
    +

    RF Telemetry Gaps

    + + + {missing_rf_rows} +
    SiteAPModelStatusRF ProfileAction
    How To Read The Bubble Scale
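The RF Telemetry Gaps logic in this hunk splits APs that returned no per-band samples by device status, so dormant/offline units read as inventory cleanup instead of collection failures. A reduced sketch of that bucketing, assuming the simplified record shape shown here (the real report code works over richer `ap_records` dicts):

```python
def _ap_status(ap):
    """Normalize the reported status; blank or missing becomes 'unknown'."""
    return str(ap.get("status") or "unknown").strip().lower() or "unknown"

def _is_inactive_ap(ap):
    return _ap_status(ap) in {"dormant", "offline"}

def split_missing_rf(ap_records):
    """Partition APs with no per-band samples into online vs. inactive lists."""
    no_telemetry = [ap for ap in ap_records if not ap.get("bands")]
    inactive = [ap for ap in no_telemetry if _is_inactive_ap(ap)]
    online = [ap for ap in no_telemetry if not _is_inactive_ap(ap)]
    return online, inactive

aps = [
    {"name": "AP-1F-01", "status": "online", "bands": ["2.4", "5"]},  # has RF data
    {"name": "AP-1F-03", "status": "online", "bands": []},            # needs review
    {"name": "AP-2F-09", "status": "Dormant", "bands": []},           # cleanup
]
online_missing, inactive_missing = split_missing_rf(aps)
# online_missing -> [AP-1F-03]; inactive_missing -> [AP-2F-09]
```

Only the online-but-silent bucket warrants a collection rerun and AP health check; the dormant/offline bucket feeds the "Dormant/Offline Missing" summary column instead.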
    diff --git a/tests/test_report.py b/tests/test_report.py index 3f896d1..6080622 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -248,12 +248,15 @@ def test_ups_runtime_planning_uses_poe_and_apc_reference(self, tmp_path): assert "10% planning buffer" in html assert "ups_switch_power_plan.json" in html assert "1 UPS + 1 external battery module" in html + assert "Executive Recommendation" in html + assert "Use the Smart-UPS X stack as the planning standard" in html battery_html = build_org_report(str(tmp_path), "UPS Test", report_kind="battery_backup") assert "Battery Backup Runtime Planning" in battery_html assert "UPS Runtime Estimate by Switch" in battery_html assert "Core-SW-1 (Q2SW-TEST-0001)" in battery_html assert "97.5 W" in battery_html + assert "Executive Recommendation" in battery_html assert "Executive Summary" not in battery_html assert "$3,487.04" in html @@ -810,6 +813,19 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): if os.path.isfile(src): shutil.copy(src, dst) + devices = json.loads((tmp_path / "devices_availabilities.json").read_text(encoding="utf-8")) + devices.append( + { + "serial": "Q2AP-TEST-0003", + "name": "AP-1F-03", + "productType": "wireless", + "model": "MR46", + "status": "online", + "networkId": "N_test_001", + } + ) + (tmp_path / "devices_availabilities.json").write_text(json.dumps(devices), encoding="utf-8") + (tmp_path / "channel_utilization_by_device.json").write_text( json.dumps( [ @@ -837,6 +853,11 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): } ], }, + { + "serial": "Q2AP-TEST-0003", + "network": {"id": "N_test_001"}, + "byBand": [], + }, ] ), encoding="utf-8", @@ -901,7 +922,11 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): html = build_org_report(str(tmp_path), "AP Spectrum Test", report_kind="ap_spectrum") assert "AP Spectrum Availability & Interference Report" in html assert html.count("ap-unit-page") >= 2 + 
assert "Executive Summary / Recommended Action" in html assert "Meraki Standards Basis" in html + assert "RF Telemetry Gaps" in html + assert "Online Missing RF" in html + assert "Online but no channel samples were returned" in html assert "High Density Wi-Fi Deployments" in html assert "Wireless Event Log Context" in html assert "association_fail" in html @@ -913,7 +938,7 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): assert "RF / Hardware Fit" in html assert "Wi-Fi 6 / 802.11ax / 2.4, 5 GHz" in html assert "Current severe interference means the organization may not feel the value of this Wi-Fi 6 AP until RF is remediated" in html - assert "Executive Summary" not in html + assert "Network Overview" not in html def test_ap_spectrum_surfaces_channel_utilization_collection_error(self, tmp_path): from reporting.app import build_org_report
From f1eefef8e00531d225e2a24b6c820db3f3620ba0 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 20:47:12 -0400 Subject: [PATCH 12/47] Add UPS offering pricing summary --- reporting/app.py | 48 +++++++++++++++++++ .../reference/ups_runtime_reference.json | 8 +++- tests/test_report.py | 6 +++ 3 files changed, 60 insertions(+), 2 deletions(-)
diff --git a/reporting/app.py b/reporting/app.py index f96b035..c44efb5 100644 --- a/reporting/app.py +++ b/reporting/app.py @@ -3313,9 +3313,12 @@ def _infer_product(*versions: Any) -> str: max_ups_load = float(ups_summary.get("maxSizingLoadWatts") or 0) bx_max = bx_ref.get("max_watts") if isinstance(bx_ref, dict) else None smx_max = smx_ref.get("max_watts") if isinstance(smx_ref, dict) else None + bx_unit = bx_ref.get("unit_cost") if isinstance(bx_ref, dict) else None smx_unit = smx_ref.get("unit_cost") if isinstance(smx_ref, dict) else None smx_ext = smx_ref.get("external_battery_unit_cost") if isinstance(smx_ref, dict) else None + smx_ext_sku = str(smx_ref.get("external_battery_sku") or "SMX120RMBP2U") if isinstance(smx_ref, dict) else "SMX120RMBP2U" ups_switch_items = ups_power_plan.get("switches", []) if isinstance(ups_power_plan.get("switches"), list) else [] + ups_switch_count = len(ups_switch_items) target_stacks = [ ((item.get("runtimeEstimates") or {}).get("SMX2200RMLV2UTargetStack") or {}) for item in ups_switch_items @@ -3331,6 +3334,11 @@ def _infer_product(*versions: Any) -> str: [int(stack.get("externalBatteryCount") or 0) for stack in target_stacks if stack.get("externalBatteryCount") is not None], default=0, ) + target_external_battery_count = sum( + int(stack.get("externalBatteryCount") or 0) + for stack in target_stacks + if stack.get("externalBatteryCount") is not None + ) no_target_stack_count = sum(1 for stack in target_stacks if stack.get("runtimeMinutes") is None) bx_runtime_minutes = [ float(((item.get("runtimeEstimates") or {}).get("BX1500M") or {}).get("runtimeMinutes")) @@ -3387,6 +3395,41 @@ def _infer_product(*versions: Any) -> str: if smx_base_runtime_minutes else "not available" ) + bx_total = bx_unit * ups_switch_count if isinstance(bx_unit, (int, float)) else None + smx_base_total = smx_unit * ups_switch_count if isinstance(smx_unit, (int, float)) else None + smx_external_total = ( + smx_ext * target_external_battery_count + if isinstance(smx_ext, (int, float)) + else None + ) + smx_target_total = ( + smx_base_total + smx_external_total + if isinstance(smx_base_total, (int, float)) and isinstance(smx_external_total, (int, float)) + else total_target_cost + ) + ups_offering_rows = [ + [ + "Short-runtime tower fallback", + f"{ups_switch_count} x BX1500M", + f"{_format_money(bx_unit)} / unit", + _format_money(bx_total), + f"{bx_window}; useful for graceful shutdown or brief outages, not the {ups_target_hours:g}h closet target.", + ], + [ + "Base rack/tower Smart-UPS", + f"{ups_switch_count} x SMX2200RMLV2U", + f"{_format_money(smx_unit)} / unit", + _format_money(smx_base_total), + f"{smx_base_window}; below the {ups_target_hours:g}h target for {smx_base_below_target_count} modeled switch load(s).", + ], + [ + f"Target-runtime Smart-UPS stack ({ups_target_hours:g}h planning)", + f"{ups_switch_count} x SMX2200RMLV2U + {target_external_battery_count} x {smx_ext_sku}", + f"{_format_money(smx_unit)} UPS; {_format_money(smx_ext)} battery", + _format_money(smx_target_total), + f"Recommended planning bundle from the per-switch runtime table; largest individual stack uses {max_external_batteries} external battery module(s).", + ], + ] ups_source_links = "" if isinstance(ups_meta, dict) and isinstance(ups_meta.get("sources"), list): links = [] @@ -3424,6 +3467,11 @@ def _infer_product(*versions: Any) -> str:
    Smart-UPS external battery stack
    + {render_section( + "UPS Offering Price Summary", + ups_offering_rows, + headers=["Offering", "Procurement Quantity", "Reference Unit Price", "Estimated Equipment Cost", "Planning Read"], + ) if ups_rows else ""}
    Executive Recommendation
    diff --git a/reporting/reference/ups_runtime_reference.json b/reporting/reference/ups_runtime_reference.json index 05d5ace..228f9a4 100644 --- a/reporting/reference/ups_runtime_reference.json +++ b/reporting/reference/ups_runtime_reference.json @@ -25,6 +25,10 @@ { "title": "APCGuard BX1500M runtime chart reference", "url": "https://www.apcguard.com/BR1500MS.asp" + }, + { + "title": "OMNIA/NCPA price list reference for BX1500M", + "url": "https://www.omniapartners.com/suppliers-files/A-D/D_H_Distributing/Contract_Documents/01-168/2-Copy_of_NCPA_Price_List_Nov_23__1_.pdf" + } ] }, @@ -58,8 +62,8 @@ "sku": "BX1500M", "max_watts": 900, "max_va": 1500, - "unit_cost": null, - "cost_note": "Unit cost not provided in the current planning prompt.", + "unit_cost": 219.99, + "cost_note": "Planning unit cost from public OMNIA/NCPA price-list reference; validate current seller pricing before procurement.", "configuration_label": "1 tower UPS", "runtime_points_minutes": [ {"watts": 50, "minutes": 134},
    diff --git a/tests/test_report.py b/tests/test_report.py index 6080622..54711f3 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -250,6 +250,11 @@ def test_ups_runtime_planning_uses_poe_and_apc_reference(self, tmp_path): assert "1 UPS + 1 external battery module" in html assert "Executive Recommendation" in html assert "Use the Smart-UPS X stack as the planning standard" in html + assert "UPS Offering Price Summary" in html + assert "3 x BX1500M" in html + assert "$219.99 / unit" in html + assert "3 x SMX2200RMLV2U" in html + assert "SMX120RMBP2U" in html battery_html = build_org_report(str(tmp_path), "UPS Test", report_kind="battery_backup") assert "Battery Backup Runtime Planning" in battery_html @@ -257,6 +262,7 @@ assert "Core-SW-1 (Q2SW-TEST-0001)" in battery_html assert "97.5 W" in battery_html assert "Executive Recommendation" in battery_html + assert "UPS Offering Price Summary" in battery_html assert "Executive Summary" not in battery_html assert "$3,487.04" in html
From c8ef563a41b97f97b6b8393debc045676bcd449 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 21:13:11 -0400 Subject: [PATCH 13/47] Validate generated report inventory --- README.md | 8 ++- ROADMAP.md | 3 + report_inventory.py | 8 +++ reporting/report_inventory.py | 118 ++++++++++++++++++++++++++++++++++ run.sh | 4 ++ tests/test_pipeline.py | 39 +++++++++++ 6 files changed, 179 insertions(+), 1 deletion(-) create mode 100644 report_inventory.py create mode 100644 reporting/report_inventory.py
diff --git a/README.md b/README.md index 95793dd..848f9be 100644 --- a/README.md +++ b/README.md @@ -11,6 +11,7 @@ A reporting pipeline that collects Meraki org data, generates network health and | `ollama_review.py` | Optional local LLM review stage | | `python -m reporting` | Direct report generation from existing backup data | | `report_generator.py` | Compatibility wrapper for report generation | +| `report_inventory.py` | Validates the expected latest report deliverables after generation | | `run.sh` | Full pipeline orchestrator | | `legacy/` | Original MX baseline scripts (reference only) | | `docs/cis-meraki-reference.md` | CIS Controls to Meraki reference mapping | @@ -70,7 +71,9 @@ ollama pull gemma4:e2b ## Output `./run.sh` keeps raw Meraki backup data in `backups//` and writes generated -shareable reports to `reports/` (both gitignored): +shareable reports to `reports/` (both gitignored). By default, `./run.sh` runs +the full pipeline: Meraki query, backup, recommendation merge, optional AI review, +report generation, and a final deliverable inventory check.
- `recommendations.md` — per-org findings and recommendations - `backups/master_recommendations.md` — combined across all orgs @@ -78,6 +81,9 @@ shareable reports to `reports/` (both gitignored): - `reports///SITE_NAME_Complete_Report_YYYY-MM-DD.pdf` — run-specific full report - `reports///SITE_NAME_Executive_Summary_Report_YYYY-MM-DD.pdf` — run-specific executive summary - `reports///SITE_NAME_Backup_Settings_Report_YYYY-MM-DD.pdf` — run-specific backup settings report +- `reports///SITE_NAME_Battery_Backup_Pricing_Calculation_Report_YYYY-MM-DD.pdf` — run-specific UPS runtime and pricing report +- `reports///SITE_NAME_AP_Spectrum_Report_YYYY-MM-DD.pdf` — run-specific AP spectrum and interference report +- `reports///SITE_NAME_UPS_Switch_Power_Plan_Report_YYYY-MM-DD.json` — run-specific UPS sizing data - `reports/latest//report.pdf` — compatibility alias for the latest full report By default `run.sh` passes `--pdf-only`, so generated HTML is removed after PDFs diff --git a/ROADMAP.md b/ROADMAP.md index 173a210..c54522e 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -17,6 +17,8 @@ This project is currently functional as a Python reporting pipeline. The immedia - Ollama review unloads the active model after each generation pass to reduce idle RAM usage. - Deterministic report generation is available with `./run.sh --fixed-now ...`, `python -m reporting --fixed-now ...`, or `MERAKI_REPORT_FIXED_NOW`. +- `./run.sh` remains the full default pipeline and now validates the generated latest + report deliverables after report generation. ## Phase 1: Stabilize The Existing Python App - Complete @@ -85,6 +87,7 @@ This project is currently functional as a Python reporting pipeline. 
The immedia - ~~Replace unreliable wireless-only client collection with network-wide client collection and report wired/wireless client detail coverage.~~ - ~~Separate generated report deliverables into `reports/` and keep `backups/` focused on raw collection data.~~ - ~~Add PDF-only output mode so routine runs do not retain generated HTML unless requested.~~ +- ~~Add a final report inventory check so missing generated deliverables fail the run visibly.~~ ## Phase 5: Optional Interfaces
diff --git a/report_inventory.py b/report_inventory.py new file mode 100644 index 0000000..a0904c1 --- /dev/null +++ b/report_inventory.py @@ -0,0 +1,8 @@ +#!/usr/bin/env python3 +"""Compatibility wrapper for generated report inventory validation.""" + +from reporting.report_inventory import main + + +if __name__ == "__main__": + raise SystemExit(main())
diff --git a/reporting/report_inventory.py b/reporting/report_inventory.py new file mode 100644 index 0000000..6025c93 --- /dev/null +++ b/reporting/report_inventory.py @@ -0,0 +1,118 @@ +"""Validate and summarize generated report deliverables.""" + +from __future__ import annotations + +import argparse +from dataclasses import dataclass +from pathlib import Path + + +@dataclass(frozen=True) +class Deliverable: + label: str + compat_name: str + named_pattern: str + + +EXPECTED_DELIVERABLES: tuple[Deliverable, ...] = ( + Deliverable("Complete report", "report.pdf", "*_Complete_Report_*.pdf"), + Deliverable("Executive summary", "report_exec_summary.pdf", "*_Executive_Summary_Report_*.pdf"), + Deliverable("Backup settings", "report_backup_settings.pdf", "*_Backup_Settings_Report_*.pdf"), + Deliverable( + "Battery backup", + "report_battery_backup.pdf", + "*_Battery_Backup_Pricing_Calculation_Report_*.pdf", + ), + Deliverable("AP spectrum", "report_ap_spectrum.pdf", "*_AP_Spectrum_Report_*.pdf"), + Deliverable("UPS switch power plan", "ups_switch_power_plan.json", "*_UPS_Switch_Power_Plan_Report_*.json"), +) + + +@dataclass(frozen=True) +class InventoryResult: + org_dir: Path + present: tuple[Deliverable, ...] + missing: tuple[Deliverable, ...] + + @property + def ok(self) -> bool: + return not self.missing + + +def _has_named_alias(org_dir: Path, pattern: str) -> bool: + return any(path.is_file() for path in org_dir.glob(pattern)) + + +def inspect_org_dir(org_dir: Path) -> InventoryResult: + present: list[Deliverable] = [] + missing: list[Deliverable] = [] + + for deliverable in EXPECTED_DELIVERABLES: + compat_path = org_dir / deliverable.compat_name + if compat_path.is_file() and _has_named_alias(org_dir, deliverable.named_pattern): + present.append(deliverable) + else: + missing.append(deliverable) + + return InventoryResult(org_dir=org_dir, present=tuple(present), missing=tuple(missing)) + + +def inspect_reports_dir(reports_dir: Path) -> tuple[InventoryResult, ...]: + latest_dir = reports_dir / "latest" + if not latest_dir.is_dir(): + return () + + org_dirs = sorted(path for path in latest_dir.iterdir() if path.is_dir() and not path.name.startswith(".")) + return tuple(inspect_org_dir(org_dir) for org_dir in org_dirs) + + +def _fmt_size(path: Path) -> str: + try: + size = path.stat().st_size + except OSError: + return "unknown size" + if size >= 1024 * 1024: + return f"{size / (1024 * 1024):.1f} MB" + if size >= 1024: + return f"{size / 1024:.1f} KB" + return f"{size} B" + + 
+def print_inventory(results: tuple[InventoryResult, ...]) -> None: + for result in results: + print(f"{result.org_dir.name}: {len(result.present)}/{len(EXPECTED_DELIVERABLES)} expected deliverables") + for deliverable in result.present: + compat_path = result.org_dir / deliverable.compat_name + print(f" OK {deliverable.label}: {deliverable.compat_name} ({_fmt_size(compat_path)})") + for deliverable in result.missing: + print(f" MISSING {deliverable.label}: {deliverable.compat_name} and {deliverable.named_pattern}") + + +def main(argv: list[str] | None = None) -> int: + parser = argparse.ArgumentParser(description="Validate generated report deliverables.") + parser.add_argument( + "--reports-dir", + default="reports", + help="Reports directory containing latest// outputs. Default: reports", + ) + args = parser.parse_args(argv) + + reports_dir = Path(args.reports_dir).resolve() + latest_dir = reports_dir / "latest" + if not latest_dir.is_dir(): + print(f"No latest reports directory found: {latest_dir}") + return 1 + + results = inspect_reports_dir(reports_dir) + if not results: + print(f"No organization report directories found in {latest_dir}") + return 1 + + print_inventory(results) + if any(not result.ok for result in results): + return 1 + return 0 + + +if __name__ == "__main__": + raise SystemExit(main()) diff --git a/run.sh b/run.sh index 974d998..b17effc 100755 --- a/run.sh +++ b/run.sh @@ -310,6 +310,7 @@ STAGES=( "Merge Recommendations|merge_recommendations.py" "AI Review (Ollama)|ollama_review.py" "Generate Reports|report_generator.py" + "Report Inventory|report_inventory.py" ) TOTAL=${#STAGES[@]} TIMING_HISTORY_FILE="$(pwd)/backups/.stage_timings.json" @@ -494,6 +495,9 @@ run_stage() { extra_args+=("--pdf-only") fi fi + if [[ "$script" == "report_inventory.py" ]]; then + extra_args+=("--reports-dir" "$REPORTS_DIR") + fi "$PYTHON_BIN" "$script" "${extra_args[@]+"${extra_args[@]}"}" > "$tmp" 2>&1 local exit_code=$? 
diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py index e5542cf..2d09bf2 100644 --- a/tests/test_pipeline.py +++ b/tests/test_pipeline.py @@ -7,6 +7,7 @@ import merge_recommendations as mr import ollama_review as orv +from reporting import report_inventory from reporting import health @@ -251,6 +252,44 @@ def test_report_only_health_does_not_require_api_key(self, monkeypatch, tmp_path assert by_name["Org backups"].status == "ok" +class TestReportInventory: + def _write_expected_deliverables(self, org_dir: Path) -> None: + org_dir.mkdir(parents=True) + aliases = { + "report.pdf": "Demo_Org_Complete_Report_2026-05-02.pdf", + "report_exec_summary.pdf": "Demo_Org_Executive_Summary_Report_2026-05-02.pdf", + "report_backup_settings.pdf": "Demo_Org_Backup_Settings_Report_2026-05-02.pdf", + "report_battery_backup.pdf": "Demo_Org_Battery_Backup_Pricing_Calculation_Report_2026-05-02.pdf", + "report_ap_spectrum.pdf": "Demo_Org_AP_Spectrum_Report_2026-05-02.pdf", + "ups_switch_power_plan.json": "Demo_Org_UPS_Switch_Power_Plan_Report_2026-05-02.json", + } + for compat, named in aliases.items(): + (org_dir / compat).write_text("payload", encoding="utf-8") + (org_dir / named).write_text("payload", encoding="utf-8") + + def test_inventory_accepts_complete_latest_report_set(self, tmp_path, capsys): + org_dir = tmp_path / "reports" / "latest" / "Demo_Org" + self._write_expected_deliverables(org_dir) + + assert report_inventory.main(["--reports-dir", str(tmp_path / "reports")]) == 0 + + output = capsys.readouterr().out + assert "Demo_Org: 6/6 expected deliverables" in output + assert "Battery backup" in output + assert "UPS switch power plan" in output + + def test_inventory_fails_when_expected_report_is_missing(self, tmp_path, capsys): + org_dir = tmp_path / "reports" / "latest" / "Demo_Org" + self._write_expected_deliverables(org_dir) + (org_dir / "report_ap_spectrum.pdf").unlink() + + assert report_inventory.main(["--reports-dir", str(tmp_path / "reports")]) == 1 + + 
output = capsys.readouterr().out + assert "Demo_Org: 5/6 expected deliverables" in output + assert "MISSING AP spectrum" in output + + class TestReportingEntrypoint: def test_single_source_generation_writes_named_aliases(self, monkeypatch, tmp_path): from reporting import app From 22e6a7f6e1ce2594f03ab64c874407235fe14d6d Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 21:30:15 -0400 Subject: [PATCH 14/47] Write latest report inventory manifest --- README.md | 1 + ROADMAP.md | 3 +- reporting/report_inventory.py | 73 +++++++++++++++++++++++++++++++++-- tests/test_pipeline.py | 20 ++++++++++ 4 files changed, 93 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 848f9be..4bf9132 100644 --- a/README.md +++ b/README.md @@ -85,6 +85,7 @@ report generation, and a final deliverable inventory check. - `reports///SITE_NAME_AP_Spectrum_Report_YYYY-MM-DD.pdf` — run-specific AP spectrum and interference report - `reports///SITE_NAME_UPS_Switch_Power_Plan_Report_YYYY-MM-DD.json` — run-specific UPS sizing data - `reports/latest//report.pdf` — compatibility alias for the latest full report +- `reports/latest/report_inventory.json` — generated manifest of latest report deliverables and file sizes By default `run.sh` passes `--pdf-only`, so generated HTML is removed after PDFs are rendered. Use `./run.sh --keep-html` when HTML inspection is useful. diff --git a/ROADMAP.md b/ROADMAP.md index c54522e..50e2199 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -18,7 +18,7 @@ This project is currently functional as a Python reporting pipeline. The immedia - Deterministic report generation is available with `./run.sh --fixed-now ...`, `python -m reporting --fixed-now ...`, or `MERAKI_REPORT_FIXED_NOW`. - `./run.sh` remains the full default pipeline and now validates the generated latest - report deliverables after report generation. + report deliverables after report generation, including a latest report manifest. 
## Phase 1: Stabilize The Existing Python App - Complete @@ -88,6 +88,7 @@ This project is currently functional as a Python reporting pipeline. The immedia - ~~Separate generated report deliverables into `reports/` and keep `backups/` focused on raw collection data.~~ - ~~Add PDF-only output mode so routine runs do not retain generated HTML unless requested.~~ - ~~Add a final report inventory check so missing generated deliverables fail the run visibly.~~ +- ~~Write `reports/latest/report_inventory.json` so the generated report set can be audited without browsing folders.~~ ## Phase 5: Optional Interfaces diff --git a/reporting/report_inventory.py b/reporting/report_inventory.py index 6025c93..6f822eb 100644 --- a/reporting/report_inventory.py +++ b/reporting/report_inventory.py @@ -3,7 +3,9 @@ from __future__ import annotations import argparse +import json from dataclasses import dataclass +from datetime import datetime, timezone from pathlib import Path @@ -39,8 +41,9 @@ def ok(self) -> bool: return not self.missing -def _has_named_alias(org_dir: Path, pattern: str) -> bool: - return any(path.is_file() for path in org_dir.glob(pattern)) +def _find_named_alias(org_dir: Path, pattern: str) -> Path | None: + matches = sorted(path for path in org_dir.glob(pattern) if path.is_file()) + return matches[-1] if matches else None def inspect_org_dir(org_dir: Path) -> InventoryResult: @@ -49,7 +52,7 @@ def inspect_org_dir(org_dir: Path) -> InventoryResult: for deliverable in EXPECTED_DELIVERABLES: compat_path = org_dir / deliverable.compat_name - if compat_path.is_file() and _has_named_alias(org_dir, deliverable.named_pattern): + if compat_path.is_file() and _find_named_alias(org_dir, deliverable.named_pattern): present.append(deliverable) else: missing.append(deliverable) @@ -78,6 +81,13 @@ def _fmt_size(path: Path) -> str: return f"{size} B" +def _size_bytes(path: Path) -> int | None: + try: + return path.stat().st_size + except OSError: + return None + + def 
print_inventory(results: tuple[InventoryResult, ...]) -> None: for result in results: print(f"{result.org_dir.name}: {len(result.present)}/{len(EXPECTED_DELIVERABLES)} expected deliverables") @@ -88,6 +98,57 @@ def print_inventory(results: tuple[InventoryResult, ...]) -> None: print(f" MISSING {deliverable.label}: {deliverable.compat_name} and {deliverable.named_pattern}") +def build_manifest(results: tuple[InventoryResult, ...], reports_dir: Path) -> dict: + latest_dir = reports_dir / "latest" + orgs = [] + for result in results: + deliverables = [] + for deliverable in EXPECTED_DELIVERABLES: + compat_path = result.org_dir / deliverable.compat_name + named_path = _find_named_alias(result.org_dir, deliverable.named_pattern) + present = compat_path.is_file() and named_path is not None + deliverables.append( + { + "label": deliverable.label, + "present": present, + "compatName": deliverable.compat_name, + "compatPath": str(compat_path) if compat_path.exists() else None, + "compatSizeBytes": _size_bytes(compat_path) if compat_path.exists() else None, + "namedPattern": deliverable.named_pattern, + "namedPath": str(named_path) if named_path else None, + "namedSizeBytes": _size_bytes(named_path) if named_path else None, + } + ) + orgs.append( + { + "org": result.org_dir.name, + "latestPath": str(result.org_dir), + "status": "ok" if result.ok else "missing", + "presentCount": len(result.present), + "expectedCount": len(EXPECTED_DELIVERABLES), + "deliverables": deliverables, + } + ) + + return { + "generatedAt": datetime.now(timezone.utc).isoformat(), + "reportsDir": str(reports_dir), + "latestDir": str(latest_dir), + "status": "ok" if all(result.ok for result in results) else "missing", + "orgCount": len(results), + "expectedDeliverables": [deliverable.label for deliverable in EXPECTED_DELIVERABLES], + "orgs": orgs, + } + + +def write_manifest(results: tuple[InventoryResult, ...], reports_dir: Path, manifest_path: Path | None = None) -> Path: + target = manifest_path or 
(reports_dir / "latest" / "report_inventory.json") + target.parent.mkdir(parents=True, exist_ok=True) + payload = build_manifest(results, reports_dir) + target.write_text(json.dumps(payload, indent=2) + "\n", encoding="utf-8") + return target + + def main(argv: list[str] | None = None) -> int: parser = argparse.ArgumentParser(description="Validate generated report deliverables.") parser.add_argument( @@ -95,6 +156,10 @@ def main(argv: list[str] | None = None) -> int: default="reports", help="Reports directory containing latest// outputs. Default: reports", ) + parser.add_argument( + "--manifest", + help="Optional manifest path. Default: /latest/report_inventory.json", + ) args = parser.parse_args(argv) reports_dir = Path(args.reports_dir).resolve() @@ -109,6 +174,8 @@ def main(argv: list[str] | None = None) -> int: return 1 print_inventory(results) + manifest_path = write_manifest(results, reports_dir, Path(args.manifest).resolve() if args.manifest else None) + print(f"Manifest: {manifest_path}") if any(not result.ok for result in results): return 1 return 0 diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py index 2d09bf2..b4340c1 100644 --- a/tests/test_pipeline.py +++ b/tests/test_pipeline.py @@ -1,6 +1,7 @@ import os import subprocess import sys +import json from pathlib import Path import pytest @@ -277,6 +278,16 @@ def test_inventory_accepts_complete_latest_report_set(self, tmp_path, capsys): assert "Demo_Org: 6/6 expected deliverables" in output assert "Battery backup" in output assert "UPS switch power plan" in output + assert "Manifest:" in output + + manifest = json.loads((tmp_path / "reports" / "latest" / "report_inventory.json").read_text(encoding="utf-8")) + assert manifest["status"] == "ok" + assert manifest["orgCount"] == 1 + assert manifest["orgs"][0]["presentCount"] == 6 + assert manifest["orgs"][0]["deliverables"][0]["compatName"] == "report.pdf" + assert manifest["orgs"][0]["deliverables"][0]["namedPath"].endswith( + 
"Demo_Org_Complete_Report_2026-05-02.pdf" + ) def test_inventory_fails_when_expected_report_is_missing(self, tmp_path, capsys): org_dir = tmp_path / "reports" / "latest" / "Demo_Org" @@ -289,6 +300,15 @@ def test_inventory_fails_when_expected_report_is_missing(self, tmp_path, capsys) assert "Demo_Org: 5/6 expected deliverables" in output assert "MISSING AP spectrum" in output + manifest = json.loads((tmp_path / "reports" / "latest" / "report_inventory.json").read_text(encoding="utf-8")) + assert manifest["status"] == "missing" + ap_spectrum = [ + item + for item in manifest["orgs"][0]["deliverables"] + if item["label"] == "AP spectrum" + ][0] + assert ap_spectrum["present"] is False + class TestReportingEntrypoint: def test_single_source_generation_writes_named_aliases(self, monkeypatch, tmp_path): From 37884eb3839c671c5824f0e5520e6835b60308be Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 21:35:18 -0400 Subject: [PATCH 15/47] Write latest report HTML index --- README.md | 1 + ROADMAP.md | 4 +- reporting/report_inventory.py | 108 ++++++++++++++++++++++++++++++++++ tests/test_pipeline.py | 8 +++ 4 files changed, 120 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 4bf9132..05e0516 100644 --- a/README.md +++ b/README.md @@ -86,6 +86,7 @@ report generation, and a final deliverable inventory check. - `reports///SITE_NAME_UPS_Switch_Power_Plan_Report_YYYY-MM-DD.json` — run-specific UPS sizing data - `reports/latest//report.pdf` — compatibility alias for the latest full report - `reports/latest/report_inventory.json` — generated manifest of latest report deliverables and file sizes +- `reports/latest/index.html` — generated report index with links to each latest deliverable By default `run.sh` passes `--pdf-only`, so generated HTML is removed after PDFs are rendered. Use `./run.sh --keep-html` when HTML inspection is useful. 
diff --git a/ROADMAP.md b/ROADMAP.md index 50e2199..56d4c7c 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -18,7 +18,8 @@ This project is currently functional as a Python reporting pipeline. The immedia - Deterministic report generation is available with `./run.sh --fixed-now ...`, `python -m reporting --fixed-now ...`, or `MERAKI_REPORT_FIXED_NOW`. - `./run.sh` remains the full default pipeline and now validates the generated latest - report deliverables after report generation, including a latest report manifest. + report deliverables after report generation, including a latest report manifest + and static HTML index. ## Phase 1: Stabilize The Existing Python App - Complete @@ -89,6 +90,7 @@ This project is currently functional as a Python reporting pipeline. The immedia - ~~Add PDF-only output mode so routine runs do not retain generated HTML unless requested.~~ - ~~Add a final report inventory check so missing generated deliverables fail the run visibly.~~ - ~~Write `reports/latest/report_inventory.json` so the generated report set can be audited without browsing folders.~~ +- ~~Write `reports/latest/index.html` as a static report index with links to each latest deliverable.~~ ## Phase 5: Optional Interfaces diff --git a/reporting/report_inventory.py b/reporting/report_inventory.py index 6f822eb..60c59f4 100644 --- a/reporting/report_inventory.py +++ b/reporting/report_inventory.py @@ -3,6 +3,7 @@ from __future__ import annotations import argparse +import html import json from dataclasses import dataclass from datetime import datetime, timezone @@ -149,6 +150,107 @@ def write_manifest(results: tuple[InventoryResult, ...], reports_dir: Path, mani return target +def _relative_href(path: Path, base_dir: Path) -> str: + try: + rel = path.relative_to(base_dir) + except ValueError: + rel = path + return html.escape(rel.as_posix(), quote=True) + + +def build_index_html(results: tuple[InventoryResult, ...], reports_dir: Path, generated_at: datetime | None = None) -> str: 
+ latest_dir = reports_dir / "latest" + generated = generated_at or datetime.now(timezone.utc) + status = "OK" if all(result.ok for result in results) else "Missing deliverables" + org_sections = [] + for result in results: + rows = [] + for deliverable in EXPECTED_DELIVERABLES: + compat_path = result.org_dir / deliverable.compat_name + named_path = _find_named_alias(result.org_dir, deliverable.named_pattern) + present = compat_path.is_file() and named_path is not None + if present: + href = _relative_href(compat_path, latest_dir) + link = f'{html.escape(deliverable.compat_name)}' + named = html.escape(named_path.name if named_path else "") + size = _fmt_size(compat_path) + state = 'OK' + else: + link = html.escape(deliverable.compat_name) + named = html.escape(deliverable.named_pattern) + size = "-" + state = 'Missing' + rows.append( + "" + f"{html.escape(deliverable.label)}" + f"{state}" + f"{link}" + f"{named}" + f"{html.escape(size)}" + "" + ) + org_sections.append( + "
    " + f"

    {html.escape(result.org_dir.name)}

    " + f"

    {len(result.present)} of {len(EXPECTED_DELIVERABLES)} expected deliverables present.

    " + "" + "" + f"{''.join(rows)}" + "
    DeliverableStatusLatest AliasNamed FileSize
    " + "
    " + ) + + manifest_link = 'report_inventory.json' + return f""" + + + + + TM Meraki Report Inventory + + + +
    +
    +

    TM Meraki Report Inventory

    +
    + Status: {html.escape(status)} + Generated: {html.escape(generated.isoformat())} + Manifest: {manifest_link} +
    +
    + {''.join(org_sections)} +
    + + +""" + + +def write_index_html(results: tuple[InventoryResult, ...], reports_dir: Path, index_path: Path | None = None) -> Path: + target = index_path or (reports_dir / "latest" / "index.html") + target.parent.mkdir(parents=True, exist_ok=True) + target.write_text(build_index_html(results, reports_dir), encoding="utf-8") + return target + + def main(argv: list[str] | None = None) -> int: parser = argparse.ArgumentParser(description="Validate generated report deliverables.") parser.add_argument( @@ -160,6 +262,10 @@ def main(argv: list[str] | None = None) -> int: "--manifest", help="Optional manifest path. Default: /latest/report_inventory.json", ) + parser.add_argument( + "--index", + help="Optional HTML index path. Default: /latest/index.html", + ) args = parser.parse_args(argv) reports_dir = Path(args.reports_dir).resolve() @@ -176,6 +282,8 @@ def main(argv: list[str] | None = None) -> int: print_inventory(results) manifest_path = write_manifest(results, reports_dir, Path(args.manifest).resolve() if args.manifest else None) print(f"Manifest: {manifest_path}") + index_path = write_index_html(results, reports_dir, Path(args.index).resolve() if args.index else None) + print(f"Index: {index_path}") if any(not result.ok for result in results): return 1 return 0 diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py index b4340c1..1ca9bb6 100644 --- a/tests/test_pipeline.py +++ b/tests/test_pipeline.py @@ -279,6 +279,7 @@ def test_inventory_accepts_complete_latest_report_set(self, tmp_path, capsys): assert "Battery backup" in output assert "UPS switch power plan" in output assert "Manifest:" in output + assert "Index:" in output manifest = json.loads((tmp_path / "reports" / "latest" / "report_inventory.json").read_text(encoding="utf-8")) assert manifest["status"] == "ok" @@ -288,6 +289,10 @@ def test_inventory_accepts_complete_latest_report_set(self, tmp_path, capsys): assert manifest["orgs"][0]["deliverables"][0]["namedPath"].endswith( 
"Demo_Org_Complete_Report_2026-05-02.pdf" + ) + index = (tmp_path / "reports" / "latest" / "index.html").read_text(encoding="utf-8") + assert "TM Meraki Report Inventory" in index + assert 'href="Demo_Org/report.pdf"' in index + assert "Demo_Org_Complete_Report_2026-05-02.pdf" in index def test_inventory_fails_when_expected_report_is_missing(self, tmp_path, capsys): org_dir = tmp_path / "reports" / "latest" / "Demo_Org" @@ -308,6 +313,9 @@ def test_inventory_fails_when_expected_report_is_missing(self, tmp_path, capsys) if item["label"] == "AP spectrum" ][0] assert ap_spectrum["present"] is False + index = (tmp_path / "reports" / "latest" / "index.html").read_text(encoding="utf-8") + assert "Missing deliverables" in index + assert 'Missing' in index class TestReportingEntrypoint:
From 407780d47195e0932720ae2a552cbaf68657d18b Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 22:00:53 -0400 Subject: [PATCH 16/47] Refine AP power value guidance --- .../reference/meraki_hardware_catalog.json | 20 ++++ reporting/sections.py | 102 ++++++++++++++++-- tests/test_report.py | 78 +++++++++++++- 3 files changed, 188 insertions(+), 12 deletions(-)
diff --git a/reporting/reference/meraki_hardware_catalog.json b/reporting/reference/meraki_hardware_catalog.json index 4aff944..84fa4a9 100644 --- a/reporting/reference/meraki_hardware_catalog.json +++ b/reporting/reference/meraki_hardware_catalog.json @@ -194,6 +194,11 @@ "sixGhzCapable": true, "bands": ["2.4", "5", "6"], "spatialStreams": 12, + "rfProfilePlanning": { + "defaultMaxPowerDbm": 26, + "bandMaxPowerDbm": {"2.4": 26, "5": 26, "6": 26}, + "basis": "CW9176I/CW9176D1 datasheet RF performance tables; validate local regulatory domain and site survey" + }, "source": "Cisco AP Capabilities / CW9176I Datasheet" }, "CW9176D1": { @@ -203,6 +208,11 @@ "sixGhzCapable": true, "bands": ["2.4", "5", "6"], "spatialStreams": 12, + "rfProfilePlanning": { + "defaultMaxPowerDbm": 26, + "bandMaxPowerDbm": {"2.4": 26, "5": 26, "6": 26}, + "basis": "CW9176I/CW9176D1 datasheet RF performance tables; validate local regulatory domain and site survey" + }, "source": "Cisco AP Capabilities / CW9176I Datasheet" }, "CW9163E": { @@ -228,6 +238,11 @@ "sixGhzCapable": false, "bands": ["2.4", "5"], "spatialStreams": 8, + "rfProfilePlanning": { + "defaultMaxPowerDbm": 26, + "bandMaxPowerDbm": {"2.4": 26, "5": 26}, + "basis": "MR86 datasheet RF performance tables; validate local regulatory domain and site survey" + }, "source": "Cisco AP Capabilities / MR86 Datasheet" }, "MR46": { @@ -237,6 +252,11 @@ "sixGhzCapable": false, "bands": ["2.4", "5"], "spatialStreams": 8, + "rfProfilePlanning": { + "defaultMaxPowerDbm": 26, + "bandMaxPowerDbm": {"2.4": 26, "5": 26}, + "basis": "MR46 datasheet RF performance tables; validate local regulatory domain and site survey" + }, "source": "Cisco AP Capabilities / MR46 Datasheet" }, "MR44": {
diff --git a/reporting/sections.py b/reporting/sections.py index bd69487..eb4cf43 100644 --- a/reporting/sections.py +++ b/reporting/sections.py @@ -1058,6 +1058,26 @@ def _format_profile_power(profile: Dict[str, Any], band: str, exact: bool) -> st suffix = f"; {', '.join(details)}" if details else "" return f"Current RF profile: {name} ({source}); {min_text}; {max_text}{cap_note}{suffix}" + def _profile_power_values(ap: Dict[str, Any], band: str) -> Tuple[float | None, float | None]: + band_map = { + "2.4": "twoFourGhzSettings", + "5": "fiveGhzSettings", + "6": "sixGhzSettings", + } + profile, _, _ = _assigned_profile(ap) + if not isinstance(profile, dict): + return None, None + field = band_map.get(str(band)) + settings = profile.get(field) if field else None + if not isinstance(settings, dict): + return None, None + min_power = settings.get("minPower") + max_power = settings.get("maxPower") + return ( + float(min_power) if isinstance(min_power, (int, float)) else None, + float(max_power) if isinstance(max_power, (int, float)) else None, + ) + def _power_context(ap: Dict[str, Any], band: str) -> str: net_id = ap["network_id"] profile, exact, profile_name = _assigned_profile(ap) @@ -1113,6 +1133,46 @@ def _power_context(ap: Dict[str, Any], band: str) -> str: profile_note += f": {', '.join(names[:2])}{'…' if len(names) > 2 else ''}" return f"RF profile range; {min_text}; {max_text}{cap_note}{profile_note}" + def _model_power_target(ap: Dict[str, Any], band: str) -> str: + model = str(ap.get("model") or "") + ref = catalog_models.get(model) if isinstance(catalog_models, dict) else None + rf_target = ref.get("rfProfilePlanning") if isinstance(ref, dict) and isinstance(ref.get("rfProfilePlanning"), dict) else {} + band_targets = rf_target.get("bandMaxPowerDbm") if isinstance(rf_target.get("bandMaxPowerDbm"), dict) else {} + target = band_targets.get(str(band)) or rf_target.get("defaultMaxPowerDbm") + basis = str(rf_target.get("basis") or ref.get("source") or "model catalog") if isinstance(ref, dict) else "model catalog" + if isinstance(target, (int, float)): + return f"model planning ceiling {float(target):.0f} dBm ({basis})" + return "model-specific RF ceiling not in local catalog; use Meraki Auto RF with a site-survey-validated ceiling" + + def _legacy_or_old_standard(ap: Dict[str, Any]) -> bool: + cap = _ap_capability(ap) + model = str(ap.get("model") or "").upper() + if cap["generation"] in {"Wi-Fi 5", "Wi-Fi 5-era", "Legacy"}: + return True + return model.startswith(("MR16", "MR18", "MR20", "MR24", "MR26", "MR30H", "MR32", "MR33", "MR34", "MR42", "MR52", "MR53", "MR66", "MR70", "MR72", "MR74", "MR84")) + + def _low_power_value_note(ap: Dict[str, Any], band: str) -> str: + cap = _ap_capability(ap) + _, max_power = _profile_power_values(ap, band) + target = _model_power_target(ap, band) + if _legacy_or_old_standard(ap): + return ( + "This is an older-standard/EOL-candidate AP, so do not spend project time trying to recover value " + "by increasing transmit power. Prioritize removal, replacement, or decommissioning, then retest the RF domain." + ) + if cap["generation"] in {"Wi-Fi 7", "Wi-Fi 6E", "Wi-Fi 6"} and isinstance(max_power, (int, float)) and max_power <= 17: + return ( + "This modern AP is constrained by a low RF profile ceiling. Do not lower it further just because overlap is visible. " + f"To get value from the hardware, raise the profile ceiling toward the AP capability/Auto RF target ({target}), " + "then retest; if overlap remains, relocate/remove a redundant nearby AP rather than keeping this unit underpowered." + ) + if cap["generation"] in {"Wi-Fi 7", "Wi-Fi 6E", "Wi-Fi 6"}: + return ( + f"Treat this as a value-recovery check for a current-generation AP: keep Auto RF enabled with enough ceiling to use the hardware ({target}) " + "and solve confirmed overlap with placement/channel reuse instead of blanket power reduction." + ) + return "Validate RF profile power against Meraki Auto RF and a floor-plan survey before changing hardware or power settings."
+ def _profile_name(ap: Dict[str, Any]) -> str: _, exact, name = _assigned_profile(ap) return name if exact else f"{name} (fallback)" if name != "Profile assignment not captured" else name @@ -1211,7 +1271,11 @@ def _value_assessment(ap: Dict[str, Any]) -> str: if cap["generation"] in {"Wi-Fi 7", "Wi-Fi 6E", "Wi-Fi 6"} and severity["rank"] >= 4: points.append(f"Current {severity['label'].lower()} interference means the organization may not feel the value of this {cap['generation']} AP until RF is remediated.") if not cap["sixGhzCapable"] and cap["generation"] in {"Wi-Fi 5", "Wi-Fi 5-era", "Legacy", "Unknown"} and severity["rank"] >= 4: - points.append("Do not spend refresh money until RF noise/overlap is corrected; replacement hardware would inherit the same spectrum problem.") + points.append("Older-standard or EOL-candidate AP; prioritize replacement/removal instead of trying to tune more life out of it.") + if cap["generation"] in {"Wi-Fi 7", "Wi-Fi 6E", "Wi-Fi 6"}: + _, max_power = _profile_power_values(ap, ap.get("worst_band") or "") + if isinstance(max_power, (int, float)) and max_power <= 17: + points.append("Modern AP under low RF ceiling; recover value by allowing Auto RF more usable transmit-power headroom before deciding the AP is a bad fit.") if not points: points.append("No obvious hardware value blocker from this telemetry window.") if not profile_ctx["exact"]: @@ -1391,7 +1455,7 @@ def _ap_list(items: List[Dict[str, Any]], limit: int = 4) -> str: ) if high_pressure: executive_points.append( - f"Validate floor plans for {len(high_pressure)} AP(s) showing excessive co-channel pressure; removal, relocation, or lower transmit power may help more than adding hardware." + f"Validate floor plans for {len(high_pressure)} AP(s) showing excessive co-channel pressure. For modern APs on low power, first restore enough Auto RF headroom to get value from the hardware; for older/EOL APs, prioritize removal or replacement." 
) if six_ghz_value_blocked: executive_points.append( @@ -1518,17 +1582,33 @@ def _recommendation(ap: Dict[str, Any]) -> str: + power ) if "WAY TOO CLOSE" in ap["bubble"]: + if _legacy_or_old_standard(ap): + return ( + "Treat this as a high-priority RF density and lifecycle problem. If the floor plan confirms another AP is physically close, remove or replace this older/EOL-candidate unit before adding more hardware. " + + power + ) + cap = _ap_capability(ap) + if cap["generation"] in {"Wi-Fi 7", "Wi-Fi 6E", "Wi-Fi 6"}: + return ( + "Treat this as a high-priority RF density problem. Because this is a current-generation AP class, do not solve cost/value concerns by underpowering it further. " + + _low_power_value_note(ap, ap["worst_band"]) + + " " + + power + ) return ( - "Treat this as a high-priority RF density problem. If the floor plan confirms " - "another AP is physically close, remove, disable, or relocate one AP before " - "adding replacement Wi-Fi 6/7 hardware. " + "Treat this as a high-priority RF density problem. Validate the model lifecycle before changing transmit power; if overlap is confirmed, prioritize placement or replacement over blanket power reduction. " + power ) if "Too close" in ap["bubble"]: + if _legacy_or_old_standard(ap): + return ( + "Review nearby AP placement and channel reuse, but treat this older/EOL-candidate AP as a replacement/removal candidate rather than spending time optimizing low-value hardware. " + + power + ) return ( - "Review nearby AP placement, channel reuse, and transmit power. If this AP is already " - "running under a reduced power profile, removal or relocation is more likely to help " - "than increasing power. " + "Review nearby AP placement and channel reuse. If this AP is already running under a reduced power profile, do not lower it further; recover hardware value first, then remove/relocate redundant APs if overlap persists. 
" + + _low_power_value_note(ap, ap["worst_band"]) + + " " + power ) if stats.get("non_wifi", 0.0) >= 15: @@ -1574,11 +1654,11 @@ def _priority_action(ap: Dict[str, Any]) -> str: if stats.get("non_wifi", 0.0) >= 25: return "Find/remove RF noise source; retest before AP replacement. " + value + " " + power if "WAY TOO CLOSE" in ap["bubble"]: - return "Validate floor plan; remove, disable, or relocate one AP if physical overlap is confirmed. " + value + " " + power + return "Validate floor plan; for modern APs recover value by restoring Auto RF headroom, and for old/EOL units remove or replace the low-value AP if physical overlap is confirmed. " + value + " " + _low_power_value_note(ap, ap["worst_band"]) + " " + power if "Too close" in ap["bubble"]: - return "Tune channel reuse and power; consider relocation/removal if profile is already constrained. " + value + " " + power + return "Tune channel reuse and placement; do not recommend lower power when a modern AP is already constrained. " + value + " " + _low_power_value_note(ap, ap["worst_band"]) + " " + power if "Tight" in ap["bubble"]: - return "Tune profile/channel width before one-for-one refresh. " + value + " " + power + return "Tune profile/channel width before one-for-one refresh; for modern low-power APs restore value with Auto RF headroom before removal decisions. " + value + " " + _low_power_value_note(ap, ap["worst_band"]) + " " + power return "Monitor; no immediate RF remediation from this telemetry." 
severity_queue = sorted( diff --git a/tests/test_report.py b/tests/test_report.py index 54711f3..492087e 100644 --- a/tests/test_report.py +++ b/tests/test_report.py @@ -939,13 +939,89 @@ def test_ap_spectrum_report_variant_renders_one_page_per_ap(self, tmp_path): assert "WAY TOO CLOSE / saturated RF bubble" in html assert "Same-Band Context / Overlap Candidates" in html assert "Current RF profile: Classroom Low Power (exact AP assignment)" in html - assert "remove, disable, or relocate one AP" in html + assert "recover value by restoring Auto RF headroom" in html + assert "Do not lower it further just because overlap is visible" in html + assert "model planning ceiling 26 dBm" in html + assert "removal, relocation, or lower transmit power may help" not in html assert "Interference Severity Queue" in html assert "RF / Hardware Fit" in html assert "Wi-Fi 6 / 802.11ax / 2.4, 5 GHz" in html assert "Current severe interference means the organization may not feel the value of this Wi-Fi 6 AP until RF is remediated" in html assert "Network Overview" not in html + def test_ap_spectrum_recommends_replacing_old_standard_aps_instead_of_power_tuning(self, tmp_path): + from reporting.app import build_org_report + + for fn in os.listdir(FIXTURES): + src = os.path.join(FIXTURES, fn) + dst = tmp_path / fn + if os.path.isfile(src): + shutil.copy(src, dst) + + devices = json.loads((tmp_path / "devices_availabilities.json").read_text(encoding="utf-8")) + devices.append( + { + "serial": "Q2AP-OLD-0001", + "name": "Legacy-AP-01", + "productType": "wireless", + "model": "MR42", + "status": "online", + "networkId": "N_test_001", + } + ) + (tmp_path / "devices_availabilities.json").write_text(json.dumps(devices), encoding="utf-8") + + (tmp_path / "channel_utilization_by_device.json").write_text( + json.dumps( + [ + { + "serial": "Q2AP-OLD-0001", + "network": {"id": "N_test_001"}, + "byBand": [ + { + "band": "5", + "wifi": {"percentage": 66}, + "nonWifi": {"percentage": 1}, + "total": 
{"percentage": 82}, + } + ], + } + ] + ), + encoding="utf-8", + ) + (tmp_path / "wireless_rf_profiles.json").write_text( + json.dumps( + { + "N_test_001": [ + { + "id": "rf-low", + "name": "Low Legacy Power", + "fiveGhzSettings": {"minPower": 8, "maxPower": 14}, + } + ] + } + ), + encoding="utf-8", + ) + (tmp_path / "wireless_rf_profile_assignments.json").write_text( + json.dumps( + [ + { + "serial": "Q2AP-OLD-0001", + "rfProfile": {"id": "rf-low", "name": "Low Legacy Power"}, + } + ] + ), + encoding="utf-8", + ) + + html = build_org_report(str(tmp_path), "AP Legacy Test", report_kind="ap_spectrum") + assert "Wi-Fi 5 / 802.11ac Wave 2 / 2.4, 5 GHz" in html + assert "older-standard/EOL-candidate AP" in html + assert "Prioritize removal, replacement, or decommissioning" in html + assert "trying to recover value by increasing transmit power" in html + def test_ap_spectrum_surfaces_channel_utilization_collection_error(self, tmp_path): from reporting.app import build_org_report From dc05732382addd9af793abc0750cf751e19742a4 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 23:02:57 -0400 Subject: [PATCH 17/47] Add separate UniFi reporting runner --- .gitignore | 2 + ROADMAP.md | 11 ++ tests/test_unifi_report.py | 60 +++++++ unifi/.env.example | 12 ++ unifi/README.md | 47 ++++++ unifi/__init__.py | 2 + unifi/client.py | 120 ++++++++++++++ unifi/collect.py | 228 ++++++++++++++++++++++++++ unifi/env.py | 20 +++ unifi/health.py | 49 ++++++ unifi/inventory.py | 45 +++++ unifi/report.py | 325 +++++++++++++++++++++++++++++++++++++ unifi/run.sh | 181 +++++++++++++++++++++ 13 files changed, 1102 insertions(+) create mode 100644 tests/test_unifi_report.py create mode 100644 unifi/.env.example create mode 100644 unifi/README.md create mode 100644 unifi/__init__.py create mode 100644 unifi/client.py create mode 100644 unifi/collect.py create mode 100644 unifi/env.py create mode 100644 unifi/health.py create mode 100644 unifi/inventory.py create mode 100644 
unifi/report.py create mode 100755 unifi/run.sh diff --git a/.gitignore b/.gitignore index c14f4fd..a33611d 100644 --- a/.gitignore +++ b/.gitignore @@ -76,6 +76,8 @@ __pypackages__/ backups/ reports/ +unifi/backups/ +unifi/reports/ meraki_backup_*/ meraki_backup_sample_*/ */report.pdf diff --git a/ROADMAP.md b/ROADMAP.md index 56d4c7c..98a1f43 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -98,6 +98,17 @@ This project is currently functional as a Python reporting pipeline. The immedia - If desired later, add a minimal `package.json` as a command wrapper only. - Keep Python as the source of truth for Meraki collection, report generation, and tests. +## Phase 6: UniFi / Ubiquiti Reporting - Started + +- Add a separate `./unifi/run.sh` runner so UniFi work does not regress the + Meraki pipeline. +- Support both official Site Manager API collection and local UniFi Network + Application Integration API collection. +- Save raw UniFi JSON backups separately under `unifi/backups/`. +- Generate a first-pass UniFi baseline report under `unifi/reports/`. +- Treat local Network Application endpoint gaps as reportable coverage findings + while we learn the exact controller version and API surface. + ## Release Checklist - Run `./install.sh`. 
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py new file mode 100644 index 0000000..3ccc0aa --- /dev/null +++ b/tests/test_unifi_report.py @@ -0,0 +1,60 @@ +import json +from pathlib import Path + +from unifi.report import build_report + + +def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): + source = tmp_path / "backup" + site_dir = source / "sites" / "Main" + site_dir.mkdir(parents=True) + (source / "collection_summary.json").write_text( + json.dumps( + { + "metadata": {"requestedMode": "network", "effectiveMode": "network", "collectedAt": "2026-05-05T12:00:00"}, + "networkApplication": {"enabled": True, "files": {"site_summaries": "network_site_summaries.json"}, "errors": []}, + } + ), + encoding="utf-8", + ) + (source / "network_site_summaries.json").write_text( + json.dumps( + [ + { + "id": "site-1", + "name": "Main", + "files": { + "devices": "sites/Main/devices.json", + "clients": "sites/Main/clients.json", + "networks": "sites/Main/networks.json", + "wifi": "sites/Main/wifi.json", + "firewall_zones": "sites/Main/firewall_zones.json", + }, + } + ] + ), + encoding="utf-8", + ) + (site_dir / "devices.json").write_text( + json.dumps( + [ + {"name": "U7-Pro-1", "model": "U7-Pro", "type": "access point", "state": "ONLINE", "ipAddress": "10.1.1.10"}, + {"name": "USW-48", "model": "USW-Pro-48-PoE", "type": "switch", "state": "ONLINE", "ipAddress": "10.1.1.20"}, + ] + ), + encoding="utf-8", + ) + (site_dir / "clients.json").write_text(json.dumps([{"hostname": "client-1", "ipAddress": "10.10.0.50"}]), encoding="utf-8") + (site_dir / "networks.json").write_text(json.dumps([{"name": "Staff", "vlanId": 100, "subnet": "10.100.0.0/16", "dhcpMode": "server"}]), encoding="utf-8") + (site_dir / "wifi.json").write_text(json.dumps([{"name": "Staff WiFi", "enabled": True, "security": "WPA3"}]), encoding="utf-8") + (site_dir / "firewall_zones.json").write_text(json.dumps([{"name": "Internal", "id": "zone-1"}]), encoding="utf-8") + 
+ output = tmp_path / "report" + paths = build_report(str(source), str(output)) + + html = Path(paths["html"]).read_text(encoding="utf-8") + assert "TM UniFi Baseline" in html + assert "U7-Pro-1" in html + assert "USW-48" in html + assert "Staff WiFi" in html + assert "Firewall Zones" in html diff --git a/unifi/.env.example b/unifi/.env.example new file mode 100644 index 0000000..c284c02 --- /dev/null +++ b/unifi/.env.example @@ -0,0 +1,12 @@ +# Cloud Site Manager API +# UNIFI_SITE_MANAGER_API_KEY= + +# Local UniFi Network Application Integration API +# UNIFI_NETWORK_BASE_URL=https://192.168.1.1 +# UNIFI_NETWORK_API_KEY= +# UNIFI_VERIFY_SSL=0 + +# Optional +# UNIFI_COLLECTION_MODE=auto +# UNIFI_SITE_ID= +# UNIFI_REQUEST_TIMEOUT=30 diff --git a/unifi/README.md b/unifi/README.md new file mode 100644 index 0000000..8bbad5e --- /dev/null +++ b/unifi/README.md @@ -0,0 +1,47 @@ +# TM UniFi Baseline Runner + +`./unifi/run.sh` is a separate UniFi/Ubiquiti reporting pipeline. It does not +modify or call the Meraki runner. + +## API Modes + +- `site-manager`: uses the official cloud Site Manager API at `https://api.ui.com/v1`. +- `network`: uses the local UniFi Network Application Integration API under + `/proxy/network/integration/v1`. +- `both`: collects both surfaces. +- `auto`: default. Uses the configured surface(s). + +## Configuration + +Use exported environment variables, root `.env`, or `unifi/.env`. + +```sh +# Cloud Site Manager API +UNIFI_SITE_MANAGER_API_KEY=... + +# Local Network Application API +UNIFI_NETWORK_BASE_URL=https://192.168.1.1 +UNIFI_NETWORK_API_KEY=... +UNIFI_VERIFY_SSL=0 +``` + +For the local Network Application API, create an API key in UniFi Network under +Settings > Control Plane > Integrations. Ubiquiti says the local Network API +documentation is specific to the installed Network version, so the collector +saves endpoint errors instead of failing the whole run when an endpoint is not +available on a given controller. 
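+Before wiring everything through `run.sh`, a base URL and key can be sanity-checked by building a request the same way the collector does. A minimal sketch (illustrative helper; the `/proxy/network/integration/v1` prefix and `X-API-Key` header match `unifi/client.py`, while the address and key below are placeholders):
+
+```python
+import urllib.request
+
+# Local Network Application Integration API prefix used by the collector.
+PREFIX = "/proxy/network/integration/v1"
+
+def integration_request(base_url: str, path: str, api_key: str) -> urllib.request.Request:
+    """Build a GET request the way unifi/client.py does (illustrative helper)."""
+    url = f"{base_url.rstrip('/')}{PREFIX}/{path.lstrip('/')}"
+    return urllib.request.Request(
+        url,
+        method="GET",
+        headers={"Accept": "application/json", "X-API-Key": api_key},
+    )
+
+req = integration_request("https://192.168.1.1/", "sites", "placeholder-key")
+# req.full_url -> "https://192.168.1.1/proxy/network/integration/v1/sites"
+```
+
+Opening the request still requires a reachable controller (and, for self-signed certificates, an unverified SSL context, which is what `UNIFI_VERIFY_SSL=0` controls).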
+ +## Commands + +```sh +./unifi/run.sh +./unifi/run.sh --mode network --no-open +./unifi/run.sh --report-only --keep-html --no-open +./unifi/run.sh --health-check +``` + +Outputs are written to: + +- `unifi/backups/latest/` for raw JSON backups +- `unifi/reports/latest/` for `report.pdf`, `report.html`, and inventory data + diff --git a/unifi/__init__.py b/unifi/__init__.py new file mode 100644 index 0000000..0ecaa58 --- /dev/null +++ b/unifi/__init__.py @@ -0,0 +1,2 @@ +"""UniFi reporting pipeline package.""" + diff --git a/unifi/client.py b/unifi/client.py new file mode 100644 index 0000000..8be1df2 --- /dev/null +++ b/unifi/client.py @@ -0,0 +1,120 @@ +import json +import ssl +import time +import urllib.error +import urllib.parse +import urllib.request +from typing import Any, Dict, List, Optional + + +class UniFiRequestError(RuntimeError): + def __init__(self, message: str, status: Optional[int] = None, body: str = "") -> None: + super().__init__(message) + self.status = status + self.body = body + + +class UniFiClient: + def __init__( + self, + base_url: str, + api_key: str, + *, + timeout: int = 30, + verify_ssl: bool = True, + courtesy_delay: float = 0.1, + ) -> None: + self.base_url = base_url.rstrip("/") + self.api_key = api_key + self.timeout = timeout + self.verify_ssl = verify_ssl + self.courtesy_delay = courtesy_delay + + def _url(self, path: str, params: Optional[Dict[str, Any]] = None) -> str: + if path.startswith("http://") or path.startswith("https://"): + url = path + else: + url = f"{self.base_url}/{path.lstrip('/')}" + if params: + clean = {k: v for k, v in params.items() if v is not None} + if clean: + url = f"{url}?{urllib.parse.urlencode(clean, doseq=True)}" + return url + + def get_json(self, path: str, params: Optional[Dict[str, Any]] = None) -> Any: + url = self._url(path, params) + req = urllib.request.Request( + url, + method="GET", + headers={ + "Accept": "application/json", + "X-API-Key": self.api_key, + }, + ) + context = None if 
self.verify_ssl else ssl._create_unverified_context() + try: + with urllib.request.urlopen(req, timeout=self.timeout, context=context) as resp: + raw = resp.read().decode("utf-8") + return json.loads(raw) if raw else None + except urllib.error.HTTPError as e: + body = e.read().decode("utf-8", errors="replace") if e.fp else "" + raise UniFiRequestError(f"HTTP {e.code} for {url}: {body[:500]}", status=e.code, body=body) + except urllib.error.URLError as e: + raise UniFiRequestError(f"Network error for {url}: {e}") + + @staticmethod + def unwrap(payload: Any) -> Any: + if isinstance(payload, dict) and "data" in payload: + return payload.get("data") + return payload + + def paged_get( + self, + path: str, + *, + params: Optional[Dict[str, Any]] = None, + style: str = "offset", + limit: int = 200, + ) -> List[Any]: + """Fetch list endpoints supporting either offset/limit or nextToken pagination.""" + params = dict(params or {}) + items: List[Any] = [] + + if style == "nextToken": + params.setdefault("pageSize", limit) + next_token: Optional[str] = params.get("nextToken") + while True: + if next_token: + params["nextToken"] = next_token + payload = self.get_json(path, params) + data = self.unwrap(payload) + if isinstance(data, list): + items.extend(data) + elif data is not None: + items.append(data) + next_token = payload.get("nextToken") if isinstance(payload, dict) else None + time.sleep(self.courtesy_delay) + if not next_token: + return items + + offset = int(params.get("offset") or 0) + params.setdefault("limit", limit) + while True: + params["offset"] = offset + payload = self.get_json(path, params) + data = self.unwrap(payload) + batch = data if isinstance(data, list) else ([] if data is None else [data]) + items.extend(batch) + + total = payload.get("totalCount") if isinstance(payload, dict) else None + count = payload.get("count") if isinstance(payload, dict) else len(batch) + if isinstance(total, int) and offset + int(count or 0) < total: + offset += 
int(params["limit"]) + time.sleep(self.courtesy_delay) + continue + if len(batch) >= int(params["limit"]) and total is None: + offset += int(params["limit"]) + time.sleep(self.courtesy_delay) + continue + return items + diff --git a/unifi/collect.py b/unifi/collect.py new file mode 100644 index 0000000..4b1424a --- /dev/null +++ b/unifi/collect.py @@ -0,0 +1,228 @@ +#!/usr/bin/env python3 +import argparse +import json +import os +import sys +from datetime import datetime +from pathlib import Path +from typing import Any, Dict, Iterable, List, Tuple + +from .client import UniFiClient, UniFiRequestError +from .env import load_env + + +ROOT = Path(__file__).resolve().parents[1] +NETWORK_PREFIX = "/proxy/network/integration/v1" +SOURCE_NOTES = [ + { + "name": "Official UniFi API overview", + "url": "https://help.ui.com/hc/en-us/articles/30076656117655-Getting-Started-with-the-Official-UniFi-API", + "note": "Ubiquiti documents Site Manager and local application APIs as separate surfaces.", + }, + { + "name": "Official Site Manager API", + "url": "https://developer.ui.com/site-manager-api/", + "note": "Cloud API for high-level host, site, device, ISP, and SD-WAN visibility.", + }, +] + + +def _write_json(path: Path, payload: Any) -> None: + path.parent.mkdir(parents=True, exist_ok=True) + path.write_text(json.dumps(payload, indent=2, sort_keys=True), encoding="utf-8") + + +def _bool_env(name: str, default: bool = True) -> bool: + raw = os.getenv(name) + if raw is None: + return default + return raw.strip().lower() not in {"0", "false", "no", "off"} + + +def _safe_name(value: str) -> str: + clean = "".join(ch if ch.isalnum() or ch in "-_" else "_" for ch in value.strip()) + return clean.strip("_") or "site" + + +def _items(payload: Any) -> List[Dict[str, Any]]: + if isinstance(payload, list): + return [x for x in payload if isinstance(x, dict)] + if isinstance(payload, dict): + data = payload.get("data") + if isinstance(data, list): + return [x for x in data if 
isinstance(x, dict)] + return [] + + +def _site_id(site: Dict[str, Any]) -> str: + return str(site.get("id") or site.get("siteId") or site.get("_id") or site.get("internalReference") or "") + + +def _site_name(site: Dict[str, Any]) -> str: + meta = site.get("meta") if isinstance(site.get("meta"), dict) else {} + return str(site.get("name") or meta.get("name") or site.get("description") or _site_id(site) or "Default") + + +def _call_list(client: UniFiClient, path: str, *, style: str, label: str, errors: List[Dict[str, Any]]) -> List[Any]: + try: + return client.paged_get(path, style=style) + except UniFiRequestError as exc: + errors.append({"label": label, "path": path, "status": exc.status, "error": str(exc)}) + except Exception as exc: + errors.append({"label": label, "path": path, "status": None, "error": str(exc)}) + return [] + + +def collect_site_manager(output: Path) -> Dict[str, Any]: + api_key = os.getenv("UNIFI_SITE_MANAGER_API_KEY") or os.getenv("UNIFI_API_KEY") + if not api_key: + return {"enabled": False, "reason": "UNIFI_SITE_MANAGER_API_KEY or UNIFI_API_KEY is not set"} + + client = UniFiClient( + os.getenv("UNIFI_SITE_MANAGER_BASE_URL", "https://api.ui.com"), + api_key, + timeout=int(os.getenv("UNIFI_REQUEST_TIMEOUT", "30")), + verify_ssl=True, + ) + errors: List[Dict[str, Any]] = [] + endpoints = { + "hosts": "/v1/hosts", + "sites": "/v1/sites", + "devices": "/v1/devices", + "sd_wan_configs": "/v1/sd-wan-configs", + } + summary: Dict[str, Any] = {"enabled": True, "baseUrl": client.base_url, "files": {}, "counts": {}, "errors": errors} + for label, path in endpoints.items(): + data = _call_list(client, path, style="nextToken", label=f"site_manager_{label}", errors=errors) + rel = f"site_manager_{label}.json" + _write_json(output / rel, data) + summary["files"][label] = rel + summary["counts"][label] = len(data) + return summary + + +def collect_network_application(output: Path, selected_site_id: str = "") -> Dict[str, Any]: + api_key = 
os.getenv("UNIFI_NETWORK_API_KEY") or os.getenv("UNIFI_API_KEY") + base_url = os.getenv("UNIFI_NETWORK_BASE_URL") or os.getenv("UNIFI_BASE_URL") + if not api_key or not base_url: + return {"enabled": False, "reason": "UNIFI_NETWORK_BASE_URL and UNIFI_NETWORK_API_KEY are not set"} + + client = UniFiClient( + base_url, + api_key, + timeout=int(os.getenv("UNIFI_REQUEST_TIMEOUT", "30")), + verify_ssl=_bool_env("UNIFI_VERIFY_SSL", False), + ) + errors: List[Dict[str, Any]] = [] + summary: Dict[str, Any] = { + "enabled": True, + "baseUrl": client.base_url, + "verifySsl": client.verify_ssl, + "files": {}, + "counts": {}, + "errors": errors, + } + + try: + info = client.get_json(f"{NETWORK_PREFIX}/info") + except UniFiRequestError as exc: + info = {"error": str(exc), "status": exc.status} + errors.append({"label": "network_info", "path": f"{NETWORK_PREFIX}/info", "status": exc.status, "error": str(exc)}) + _write_json(output / "network_info.json", info) + summary["files"]["info"] = "network_info.json" + + sites = _call_list(client, f"{NETWORK_PREFIX}/sites", style="offset", label="network_sites", errors=errors) + if selected_site_id: + sites = [site for site in _items(sites) if _site_id(site) == selected_site_id] + _write_json(output / "network_sites.json", sites) + summary["files"]["sites"] = "network_sites.json" + summary["counts"]["sites"] = len(sites) + + site_endpoints: Iterable[Tuple[str, str]] = ( + ("devices", "devices"), + ("clients", "clients"), + ("networks", "networks"), + ("wifi", "wifi"), + ("hotspot_vouchers", "hotspot/vouchers"), + ("firewall_zones", "firewall/zones"), + ("firewall_policies", "firewall/policies"), + ("acl_rules", "acl-rules"), + ("traffic_lists", "traffic-lists"), + ("wans", "wans"), + ("vpn_servers", "vpn-servers"), + ("vpn_tunnels", "vpn-tunnels"), + ("radius", "radius"), + ("dns_policies", "dns/policies"), + ) + + site_summaries: List[Dict[str, Any]] = [] + for site in _items(sites): + sid = _site_id(site) + if not sid and 
selected_site_id: + sid = selected_site_id + name = _site_name(site) + safe = _safe_name(name or sid) + site_summary: Dict[str, Any] = {"id": sid, "name": name, "files": {}, "counts": {}} + for label, suffix in site_endpoints: + path = f"{NETWORK_PREFIX}/sites/{sid}/{suffix}" + data = _call_list(client, path, style="offset", label=f"{name}:{label}", errors=errors) + rel = f"sites/{safe}/{label}.json" + _write_json(output / rel, data) + site_summary["files"][label] = rel + site_summary["counts"][label] = len(data) + site_summaries.append(site_summary) + + _write_json(output / "network_site_summaries.json", site_summaries) + summary["files"]["site_summaries"] = "network_site_summaries.json" + summary["siteSummaries"] = site_summaries + return summary + + +def main(argv: List[str] | None = None) -> int: + parser = argparse.ArgumentParser(description="Collect UniFi Site Manager and Network Application data.") + parser.add_argument("--mode", choices=["auto", "site-manager", "network", "both"], default=os.getenv("UNIFI_COLLECTION_MODE", "auto")) + parser.add_argument("--site-id", default=os.getenv("UNIFI_SITE_ID", "")) + parser.add_argument("--output-dir", default=str(ROOT / "unifi" / "backups" / "latest")) + args = parser.parse_args(argv) + + load_env() + output = Path(args.output_dir) + output.mkdir(parents=True, exist_ok=True) + + mode = args.mode + if mode == "auto": + has_network = bool((os.getenv("UNIFI_NETWORK_API_KEY") or os.getenv("UNIFI_API_KEY")) and (os.getenv("UNIFI_NETWORK_BASE_URL") or os.getenv("UNIFI_BASE_URL"))) + has_site_manager = bool(os.getenv("UNIFI_SITE_MANAGER_API_KEY") or os.getenv("UNIFI_API_KEY")) + if has_network and has_site_manager: + mode = "both" + elif has_network: + mode = "network" + elif has_site_manager: + mode = "site-manager" + else: + print("Missing UniFi API configuration.", file=sys.stderr) + print("Set UNIFI_NETWORK_BASE_URL + UNIFI_NETWORK_API_KEY, or UNIFI_SITE_MANAGER_API_KEY.", file=sys.stderr) + return 1 + + metadata: 
Dict[str, Any] = { + "collectedAt": datetime.now().isoformat(timespec="seconds"), + "requestedMode": args.mode, + "effectiveMode": mode, + "sourceNotes": SOURCE_NOTES, + "siteIdFilter": args.site_id or None, + } + summary: Dict[str, Any] = {"metadata": metadata} + if mode in {"site-manager", "both"}: + summary["siteManager"] = collect_site_manager(output) + if mode in {"network", "both"}: + summary["networkApplication"] = collect_network_application(output, args.site_id) + + _write_json(output / "collection_summary.json", summary) + print(f"Collected UniFi data into {output}") + print(json.dumps(summary, indent=2)) + return 0 + + +if __name__ == "__main__": + raise SystemExit(main()) + diff --git a/unifi/env.py b/unifi/env.py new file mode 100644 index 0000000..6314cae --- /dev/null +++ b/unifi/env.py @@ -0,0 +1,20 @@ +import os +from pathlib import Path + + +def load_env() -> None: + """Load root and UniFi-local .env files without overriding exported values.""" + root = Path(__file__).resolve().parents[1] + for path in (root / ".env", root / "unifi" / ".env"): + if not path.exists(): + continue + for raw in path.read_text(encoding="utf-8").splitlines(): + line = raw.strip() + if not line or line.startswith("#") or "=" not in line: + continue + key, value = line.split("=", 1) + key = key.strip() + value = value.strip().strip('"').strip("'") + if key and key not in os.environ: + os.environ[key] = value + diff --git a/unifi/health.py b/unifi/health.py new file mode 100644 index 0000000..ed334b4 --- /dev/null +++ b/unifi/health.py @@ -0,0 +1,49 @@ +#!/usr/bin/env python3 +import os +import sys +import argparse +from pathlib import Path + +from .env import load_env + + +ROOT = Path(__file__).resolve().parents[1] + + +def main(argv: list[str] | None = None) -> int: + parser = argparse.ArgumentParser(description="Validate UniFi reporting environment.") + parser.add_argument("--report-only", action="store_true") + parser.add_argument("--backups-dir", default=str(ROOT / 
"unifi" / "backups" / "latest")) + args = parser.parse_args(argv) + + load_env() + failures = 0 + print(f"Python: {sys.version.split()[0]}") + + site_manager = bool(os.getenv("UNIFI_SITE_MANAGER_API_KEY") or os.getenv("UNIFI_API_KEY")) + network = bool((os.getenv("UNIFI_NETWORK_API_KEY") or os.getenv("UNIFI_API_KEY")) and (os.getenv("UNIFI_NETWORK_BASE_URL") or os.getenv("UNIFI_BASE_URL"))) + print(f"Site Manager API config: {'ok' if site_manager else 'missing'}") + print(f"Network Application API config: {'ok' if network else 'missing'}") + + if args.report_only: + backup_summary = Path(args.backups_dir) / "collection_summary.json" + if backup_summary.exists(): + print(f"Existing UniFi backup: ok ({backup_summary})") + else: + failures += 1 + print(f"Existing UniFi backup: missing ({backup_summary})") + elif not site_manager and not network: + failures += 1 + print("Set either UNIFI_SITE_MANAGER_API_KEY or UNIFI_NETWORK_BASE_URL + UNIFI_NETWORK_API_KEY.") + + try: + import weasyprint # noqa: F401 + + print("PDF renderer: weasyprint") + except Exception: + print("PDF renderer: unavailable; report.html will still be generated") + return failures + + +if __name__ == "__main__": + raise SystemExit(main()) diff --git a/unifi/inventory.py b/unifi/inventory.py new file mode 100644 index 0000000..cc2ceac --- /dev/null +++ b/unifi/inventory.py @@ -0,0 +1,45 @@ +#!/usr/bin/env python3 +import argparse +import json +from pathlib import Path +from typing import Dict, List + + +ROOT = Path(__file__).resolve().parents[1] + + +def main() -> int: + parser = argparse.ArgumentParser(description="Validate UniFi report outputs.") + parser.add_argument("--reports-dir", default=str(ROOT / "unifi" / "reports" / "latest")) + parser.add_argument("--backups-dir", default=str(ROOT / "unifi" / "backups" / "latest")) + args = parser.parse_args() + + reports = Path(args.reports_dir) + backups = Path(args.backups_dir) + checks = [ + ("collection_summary", backups / "collection_summary.json", 
True), + ("report_html", reports / "report.html", False), + ("report_pdf", reports / "report.pdf", False), + ] + items: List[Dict[str, object]] = [] + failed = False + for label, path, required in checks: + exists = path.exists() + size = path.stat().st_size if exists else 0 + ok = exists and size > 0 + if required and not ok: + failed = True + items.append({"label": label, "path": str(path), "exists": exists, "size": size, "required": required, "ok": ok}) + + manifest = {"items": items, "ok": not failed} + reports.mkdir(parents=True, exist_ok=True) + (reports / "report_inventory.json").write_text(json.dumps(manifest, indent=2), encoding="utf-8") + for item in items: + status = "OK" if item["ok"] else ("MISS" if item["required"] else "optional") + print(f"{status} {item['label']}: {item['path']}") + return 1 if failed else 0 + + +if __name__ == "__main__": + raise SystemExit(main()) + diff --git a/unifi/report.py b/unifi/report.py new file mode 100644 index 0000000..05b810e --- /dev/null +++ b/unifi/report.py @@ -0,0 +1,325 @@ +#!/usr/bin/env python3 +import argparse +import html +import json +import os +import shutil +import subprocess +from datetime import datetime +from pathlib import Path +from typing import Any, Dict, Iterable, List + + +ROOT = Path(__file__).resolve().parents[1] + + +def _load_json(path: Path, default: Any) -> Any: + try: + return json.loads(path.read_text(encoding="utf-8")) + except Exception: + return default + + +def _items(value: Any) -> List[Dict[str, Any]]: + if isinstance(value, list): + return [item for item in value if isinstance(item, dict)] + if isinstance(value, dict) and isinstance(value.get("data"), list): + return [item for item in value["data"] if isinstance(item, dict)] + return [] + + +def _first(item: Dict[str, Any], keys: Iterable[str], default: str = "") -> str: + for key in keys: + value = item.get(key) + if value is not None and value != "": + return str(value) + return default + + +def _nested(item: Dict[str, Any], 
path: Iterable[str], default: str = "") -> str:
+    cur: Any = item
+    for key in path:
+        if not isinstance(cur, dict):
+            return default
+        cur = cur.get(key)
+    return str(cur) if cur not in (None, "") else default
+
+
+def _device_role(device: Dict[str, Any]) -> str:
+    raw = " ".join(str(device.get(k, "")) for k in ("type", "model", "modelName", "name", "displayName")).lower()
+    if any(token in raw for token in ("access point", "uap", "u7", "u6", "ap ")):
+        return "Access Point"
+    if any(token in raw for token in ("switch", "usw")):
+        return "Switch"
+    if any(token in raw for token in ("gateway", "udm", "uxg", "ucg", "router")):
+        return "Gateway"
+    return _first(device, ("type", "productLine", "category"), "Device")
+
+
+def _status(device: Dict[str, Any]) -> str:
+    return _first(device, ("state", "status", "connectionState", "adoptionState"), "unknown")
+
+
+def _count_by(items: Iterable[Dict[str, Any]], fn) -> Dict[str, int]:
+    counts: Dict[str, int] = {}
+    for item in items:
+        key = fn(item) or "Unknown"
+        counts[key] = counts.get(key, 0) + 1
+    return dict(sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])))
+
+
+def _table(headers: List[str], rows: List[List[Any]], empty: str = "No data captured.") -> str:
+    if not rows:
+        return f"<p class='empty'>{html.escape(empty)}</p>"
+    head = "".join(f"<th>{html.escape(str(h))}</th>" for h in headers)
+    body = []
+    for row in rows:
+        body.append("<tr>" + "".join(f"<td>{html.escape(str(cell if cell is not None else ''))}</td>" for cell in row) + "</tr>")
+    return f"<table><thead><tr>{head}</tr></thead><tbody>{''.join(body)}</tbody></table>"
+
+
+def _summary_cards(cards: List[tuple[str, Any]]) -> str:
+    return "<div class='cards'>" + "".join(
+        f"<div class='card'><div class='value'>{html.escape(str(value))}</div><div class='label'>{html.escape(label)}</div></div>"
+        for label, value in cards
+    ) + "</div>"
+
+
+def _read_site_file(source: Path, site_summary: Dict[str, Any], key: str) -> List[Dict[str, Any]]:
+    rel = (site_summary.get("files") or {}).get(key)
+    return _items(_load_json(source / rel, [])) if rel else []
+
+
+def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
+    source = Path(source_dir)
+    output = Path(output_dir)
+    output.mkdir(parents=True, exist_ok=True)
+
+    summary = _load_json(source / "collection_summary.json", {})
+    sm = summary.get("siteManager") if isinstance(summary.get("siteManager"), dict) else {}
+    net = summary.get("networkApplication") if isinstance(summary.get("networkApplication"), dict) else {}
+    metadata = summary.get("metadata") if isinstance(summary.get("metadata"), dict) else {}
+
+    sm_sites = _items(_load_json(source / str((sm.get("files") or {}).get("sites", "")), [])) if sm.get("files") else []
+    sm_devices = _items(_load_json(source / str((sm.get("files") or {}).get("devices", "")), [])) if sm.get("files") else []
+    site_summaries = _items(_load_json(source / "network_site_summaries.json", []))
+
+    all_devices: List[Dict[str, Any]] = []
+    all_clients: List[Dict[str, Any]] = []
+    for site in site_summaries:
+        all_devices.extend(_read_site_file(source, site, "devices"))
+        all_clients.extend(_read_site_file(source, site, "clients"))
+    if not all_devices:
+        all_devices = sm_devices
+
+    role_counts = _count_by(all_devices, _device_role)
+    status_counts = _count_by(all_devices, _status)
+    cards = [
+        ("Sites", len(site_summaries) or len(sm_sites)),
+        ("Devices", len(all_devices)),
+        ("Clients", len(all_clients)),
+        ("Switches", role_counts.get("Switch", 0)),
+        ("APs", role_counts.get("Access Point", 0)),
+        ("Gateways", role_counts.get("Gateway", 0)),
+    ]
+
+    sections: List[str] = []
+    sections.append("<section><h2>Executive Summary</h2>")
+    sections.append(_summary_cards(cards))
+    guidance = [
+        "This first UniFi report is intentionally coverage-oriented: it proves API access, preserves raw JSON backups, and surfaces what the controller exposes for inventory, clients, networks, WiFi, and security policy.",
+        "If local Network Application credentials are available, this report should become the primary disaster-recovery and migration source because it captures site-scoped configuration instead of only cloud-level status.",
+        "Endpoint failures are listed explicitly so we can refine the collector against the exact UniFi Network version without losing the data that was available.",
+    ]
+    sections.append("<ul>" + "".join(f"<li>{html.escape(x)}</li>" for x in guidance) + "</ul>")
+    sections.append("</section>")
+
+    sections.append("<section><h2>Collection Coverage</h2>")
+    rows = [
+        ["Requested mode", metadata.get("requestedMode", "")],
+        ["Effective mode", metadata.get("effectiveMode", "")],
+        ["Collected at", metadata.get("collectedAt", "")],
+        ["Site Manager", "enabled" if sm.get("enabled") else f"not used: {sm.get('reason', '')}"],
+        ["Network Application", "enabled" if net.get("enabled") else f"not used: {net.get('reason', '')}"],
+    ]
+    sections.append(_table(["Item", "Value"], rows))
+    errors = list(sm.get("errors") or []) + list(net.get("errors") or [])
+    error_rows = [[e.get("label", ""), e.get("status", ""), e.get("path", ""), e.get("error", "")[:180]] for e in errors]
+    sections.append("<h3>Endpoint Gaps / Errors</h3>")
+    sections.append(_table(["Endpoint", "Status", "Path", "Error"], error_rows, "No endpoint errors captured."))
+    sections.append("</section>")
+
+    sections.append("<section><h2>Device Inventory</h2>")
+    role_rows = [[k, v] for k, v in role_counts.items()]
+    status_rows = [[k, v] for k, v in status_counts.items()]
+    sections.append("<div class='half'><h3>By Role</h3>" + _table(["Role", "Count"], role_rows) + "</div>")
+    sections.append("<div class='half'><h3>By Status</h3>" + _table(["Status", "Count"], status_rows) + "</div>")
+    device_rows = []
+    for dev in all_devices[:300]:
+        uidb = dev.get("uidb") if isinstance(dev.get("uidb"), dict) else {}
+        device_rows.append([
+            _first(dev, ("name", "displayName", "hostname"), _nested(dev, ("meta", "name"), "")),
+            _device_role(dev),
+            _first(dev, ("model", "modelName"), _first(uidb, ("model", "name"), "")),
+            _status(dev),
+            _first(dev, ("ipAddress", "ip", "lastIp"), ""),
+            _first(dev, ("macAddress", "mac", "id"), ""),
+            _first(dev, ("version", "firmwareVersion"), ""),
+        ])
+    sections.append(_table(["Name", "Role", "Model", "Status", "IP", "MAC / ID", "Firmware"], device_rows))
+    sections.append("</section>")
+
+    sections.append("<section><h2>Sites, Networks, VLANs, and DHCP</h2>")
+    for site in site_summaries:
+        sections.append(f"<h3>{html.escape(str(site.get('name') or site.get('id') or 'Site'))}</h3>")
+        networks = _read_site_file(source, site, "networks")
+        rows = []
+        for netw in networks:
+            rows.append([
+                _first(netw, ("name", "displayName")),
+                _first(netw, ("purpose", "type")),
+                _first(netw, ("vlanId", "vlan", "vlan_id")),
+                _first(netw, ("subnet", "ipSubnet", "networkGroup")),
+                _first(netw, ("gatewayIp", "gateway", "dhcpRelayServer")),
+                _first(netw, ("dhcpMode", "dhcpEnabled", "dhcpd_enabled")),
+            ])
+        sections.append(_table(["Network", "Purpose", "VLAN", "Subnet", "Gateway", "DHCP"], rows, "No network/VLAN endpoint data captured for this site."))
+    if not site_summaries:
+        sections.append("<p class='empty'>No local Network Application site detail captured yet.</p>")
+    sections.append("</section>")
+
+    sections.append("<section><h2>WiFi and Client Visibility</h2>")
+    for site in site_summaries:
+        wifi = _read_site_file(source, site, "wifi")
+        rows = []
+        for wlan in wifi:
+            rows.append([
+                _first(wlan, ("name", "ssid")),
+                _first(wlan, ("enabled", "isEnabled")),
+                _first(wlan, ("securityProtocol", "security", "authMode")),
+                _first(wlan, ("networkId", "networkName", "vlanId")),
+                _first(wlan, ("band", "apGroupIds")),
+            ])
+        sections.append(f"<h3>{html.escape(str(site.get('name') or 'Site'))}</h3>")
+        sections.append(_table(["SSID", "Enabled", "Security", "Network / VLAN", "Band / AP Groups"], rows, "No WiFi endpoint data captured for this site."))
+    client_rows = []
+    for client in all_clients[:300]:
+        client_rows.append([
+            _first(client, ("name", "hostname", "displayName")),
+            _first(client, ("type", "connectionType")),
+            _first(client, ("ipAddress", "ip")),
+            _first(client, ("macAddress", "mac", "id")),
+            _first(client, ("networkName", "vlanId", "networkId")),
+            _first(client, ("connectedAt", "lastSeen")),
+        ])
+    sections.append("<h3>Connected Clients</h3>")
+    sections.append(_table(["Name", "Type", "IP", "MAC / ID", "Network / VLAN", "Seen"], client_rows, "No client detail captured."))
+    sections.append("</section>")
+
+    sections.append("<section><h2>Firewall and Policy Backup</h2>")
+    for site in site_summaries:
+        sections.append(f"<h3>{html.escape(str(site.get('name') or 'Site'))}</h3>")
+        for key, label in (
+            ("firewall_zones", "Firewall Zones"),
+            ("firewall_policies", "Firewall Policies"),
+            ("acl_rules", "ACL Rules"),
+            ("traffic_lists", "Traffic Lists"),
+            ("dns_policies", "DNS Policies"),
+        ):
+            data = _read_site_file(source, site, key)
+            rows = [[_first(item, ("name", "description", "id")), _first(item, ("enabled", "action", "type")), _first(item, ("id", "_id"))] for item in data[:100]]
+            sections.append(f"<h4>{html.escape(label)}</h4>")
+            sections.append(_table(["Name", "State / Action", "ID"], rows, f"No {label.lower()} endpoint data captured."))
+    sections.append("</section>")
+
+    sections.append("<section><h2>Raw Backup Files</h2>")
+    files = sorted(str(p.relative_to(source)) for p in source.rglob("*.json"))
+    sections.append(_table(["JSON backup"], [[f] for f in files], "No JSON backup files found."))
+    sections.append("</section>")
+
+    html_doc = _html_shell("TM UniFi Baseline", "\n".join(sections), metadata)
+    html_path = output / "report.html"
+    pdf_path = output / "report.pdf"
+    html_path.write_text(html_doc, encoding="utf-8")
+    rendered = _render_pdf(html_path, pdf_path)
+    return {"html": str(html_path), "pdf": str(pdf_path) if rendered else ""}
+
+
+def _html_shell(title: str, body: str, metadata: Dict[str, Any]) -> str:
+    release = datetime.now().strftime("%Y_%m_%d")
+    collected = metadata.get("collectedAt") or "not captured"
+    return f"""<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="utf-8">
+<meta name="release" content="{release}">
+<title>{html.escape(title)}</title>
+<style>
+body {{ font-family: -apple-system, 'Segoe UI', sans-serif; margin: 2rem; color: #1f2430; }}
+h1 {{ margin: 0.25rem 0; }}
+table {{ border-collapse: collapse; width: 100%; margin: 0.5rem 0 1rem; }}
+th, td {{ border: 1px solid #d5d9e0; padding: 4px 8px; font-size: 12px; text-align: left; }}
+.cards {{ display: flex; flex-wrap: wrap; gap: 0.75rem; margin: 0.75rem 0; }}
+.card {{ border: 1px solid #d5d9e0; border-radius: 6px; padding: 0.75rem 1rem; }}
+.value {{ font-size: 20px; font-weight: 700; }}
+.label {{ font-size: 11px; text-transform: uppercase; color: #5b6472; }}
+.half {{ display: inline-block; width: 48%; vertical-align: top; }}
+.empty {{ color: #5b6472; font-style: italic; }}
+</style>
+</head>
+<body>
+<header>
+<div class='brand'>TM UniFi Baseline</div>
+<h1>UniFi Network Report</h1>
+<p>Inventory, configuration backup coverage, client visibility, and migration planning inputs.</p>
+<p class='meta'>Collected: {html.escape(str(collected))}</p>
+</header>
+{body}
+</body>
+</html>"""
+
+
+def _render_pdf(html_path: Path, pdf_path: Path) -> bool:
+    try:
+        from weasyprint import HTML
+
+        HTML(filename=str(html_path)).write_pdf(str(pdf_path))
+        return True
+    except Exception:
+        tool = shutil.which("wkhtmltopdf")
+        if not tool:
+            return False
+        subprocess.run([tool, str(html_path), str(pdf_path)], check=True)
+        return True
+
+
+def main(argv: List[str] | None = None) -> int:
+    parser = argparse.ArgumentParser(description="Generate UniFi report from collected JSON.")
+    parser.add_argument("--source-dir", default=str(ROOT / "unifi" / "backups" / "latest"))
+    parser.add_argument("--output-dir", default=str(ROOT / "unifi" / "reports" / "latest"))
+    parser.add_argument("--pdf-only", action="store_true")
+    args = parser.parse_args(argv)
+    paths = build_report(args.source_dir, args.output_dir)
+    if args.pdf_only and paths.get("pdf"):
+        try:
+            Path(paths["html"]).unlink()
+        except FileNotFoundError:
+            pass
+    print(json.dumps(paths, indent=2))
+    return 0
+
+
+if __name__ == "__main__":
+    raise SystemExit(main())
diff --git a/unifi/run.sh b/unifi/run.sh
new file mode 100755
index 0000000..61ee1db
--- /dev/null
+++ b/unifi/run.sh
@@ -0,0 +1,181 @@
+#!/usr/bin/env bash
+# UniFi Network Report Suite — runner
+
+set -uo pipefail
+cd "$(dirname "$0")/.."
+
+usage() {
+    echo "Usage: ./unifi/run.sh [options]"
+    echo ""
+    echo "  --mode <auto|site-manager|network|both>"
+    echo "                     API collection mode. Default: auto"
+    echo "  --site-id <id>     Limit local Network Application collection to one site ID"
+    echo "  --report-only      Skip API collection; build report from unifi/backups/latest"
+    echo "  --backups-dir <dir>"
+    echo "                     Backup JSON directory. Default: unifi/backups/latest"
+    echo "  --reports-dir <dir>"
+    echo "                     Report output directory. Default: unifi/reports/latest"
+    echo "  --keep-html        Keep report.html alongside report.pdf"
+    echo "  --health-check     Validate local environment and exit"
+    echo "  --no-open          Do not open generated report after a successful run"
+    echo "  --help             Show this help"
+    echo ""
+    echo "  Env examples:"
+    echo "    UNIFI_SITE_MANAGER_API_KEY=... ./unifi/run.sh"
+    echo "    UNIFI_NETWORK_BASE_URL=https://192.168.1.1 UNIFI_NETWORK_API_KEY=... ./unifi/run.sh"
+    echo "    UNIFI_VERIFY_SSL=0 ./unifi/run.sh --mode network"
+}
+
+MODE="${UNIFI_COLLECTION_MODE:-auto}"
+REPORT_ONLY=0
+NO_OPEN=0
+HEALTH_CHECK=0
+KEEP_HTML=0
+SITE_ID="${UNIFI_SITE_ID:-}"
+BACKUPS_DIR="unifi/backups/latest"
+REPORTS_DIR="unifi/reports/latest"
+
+while [[ $# -gt 0 ]]; do
+    case "$1" in
+    --mode)
+        MODE="${2:-}"
+        if [[ -z "$MODE" || "$MODE" == --* ]]; then
+            echo "Missing value for $1" >&2
+            exit 2
+        fi
+        shift 2
+        ;;
+    --site-id)
+        SITE_ID="${2:-}"
+        if [[ -z "$SITE_ID" || "$SITE_ID" == --* ]]; then
+            echo "Missing value for $1" >&2
+            exit 2
+        fi
+        shift 2
+        ;;
+    --report-only)
+        REPORT_ONLY=1
+        shift
+        ;;
+    --backups-dir)
+        BACKUPS_DIR="${2:-}"
+        if [[ -z "$BACKUPS_DIR" || "$BACKUPS_DIR" == --* ]]; then
+            echo "Missing value for $1" >&2
+            exit 2
+        fi
+        shift 2
+        ;;
+    --reports-dir)
+        REPORTS_DIR="${2:-}"
+        if [[ -z "$REPORTS_DIR" || "$REPORTS_DIR" == --* ]]; then
+            echo "Missing value for $1" >&2
+            exit 2
+        fi
+        shift 2
+        ;;
+    --keep-html)
+        KEEP_HTML=1
+        shift
+        ;;
+    --health-check)
+        HEALTH_CHECK=1
+        shift
+        ;;
+    --no-open)
+        NO_OPEN=1
+        shift
+        ;;
+    --help|-h)
+        usage
+        exit 0
+        ;;
+    *)
+        echo "Unknown option: $1" >&2
+        usage >&2
+        exit 2
+        ;;
+    esac
+done
+
+if [[ -z "${PYTHON_BIN:-}" ]]; then
+    if [[ -x ".venv/bin/python" ]]; then
+        PYTHON_BIN=".venv/bin/python"
+    elif command -v python3 >/dev/null 2>&1; then
+        PYTHON_BIN="$(command -v python3)"
+    else
+        PYTHON_BIN="python3"
+    fi
+fi
+
+run_stage() {
+    local label="$1"
+    shift
+    echo ""
+    echo "==> $label"
+    "$@"
+}
+
+echo ""
+echo "UniFi Network Report Suite"
+echo "Mode: $MODE" +echo "Backups: $BACKUPS_DIR" +echo "Reports: $REPORTS_DIR" + +if (( HEALTH_CHECK == 1 )); then + health_args=(--backups-dir "$BACKUPS_DIR") + if (( REPORT_ONLY == 1 )); then + health_args+=(--report-only) + fi + "$PYTHON_BIN" -m unifi.health "${health_args[@]}" + exit $? +fi + +failures=0 +health_args=(--backups-dir "$BACKUPS_DIR") +if (( REPORT_ONLY == 1 )); then + health_args+=(--report-only) +fi +run_stage "Environment Validation" "$PYTHON_BIN" -m unifi.health "${health_args[@]}" || failures=$((failures + 1)) + +if (( failures == 0 )); then + if (( REPORT_ONLY == 0 )); then + collect_args=(--mode "$MODE" --output-dir "$BACKUPS_DIR") + if [[ -n "$SITE_ID" ]]; then + collect_args+=(--site-id "$SITE_ID") + fi + run_stage "Query UniFi API" "$PYTHON_BIN" -m unifi.collect "${collect_args[@]}" || failures=$((failures + 1)) + else + echo "" + echo "==> Query UniFi API" + echo "Skipped by --report-only" + fi +fi + +if (( failures == 0 )); then + report_args=(--source-dir "$BACKUPS_DIR" --output-dir "$REPORTS_DIR") + if (( KEEP_HTML == 0 )); then + report_args+=(--pdf-only) + fi + run_stage "Generate UniFi Report" "$PYTHON_BIN" -m unifi.report "${report_args[@]}" || failures=$((failures + 1)) +fi + +if (( failures == 0 )); then + run_stage "Report Inventory" "$PYTHON_BIN" -m unifi.inventory --backups-dir "$BACKUPS_DIR" --reports-dir "$REPORTS_DIR" || failures=$((failures + 1)) +fi + +if (( failures == 0 )); then + echo "" + echo "All UniFi stages passed." + if (( NO_OPEN == 0 )) && [[ -f "$REPORTS_DIR/report.pdf" ]]; then + if command -v open >/dev/null 2>&1; then + open "$REPORTS_DIR/report.pdf" + elif command -v xdg-open >/dev/null 2>&1; then + xdg-open "$REPORTS_DIR/report.pdf" + fi + fi +else + echo "" + echo "$failures UniFi stage(s) failed." 
+fi + +exit "$failures" From fc201c899ec50200032ba2877e7b3408951f26ac Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 23:13:48 -0400 Subject: [PATCH 18/47] Support UniFi remote connector mode --- unifi/.env.example | 6 ++++ unifi/README.md | 11 ++++-- unifi/collect.py | 88 ++++++++++++++++++++++++++++++++++++++-------- unifi/health.py | 11 ++++-- unifi/run.sh | 15 ++++++++ 5 files changed, 112 insertions(+), 19 deletions(-) diff --git a/unifi/.env.example b/unifi/.env.example index c284c02..f3206db 100644 --- a/unifi/.env.example +++ b/unifi/.env.example @@ -6,6 +6,12 @@ # UNIFI_NETWORK_API_KEY= # UNIFI_VERIFY_SSL=0 +# Remote UniFi Network connector via api.ui.com +# Usually requires a cloud/account API key with access to the console; a local +# Network Integrations key may return 401 against the remote connector. +# UNIFI_NETWORK_CONSOLE_ID=58D...:123 +# UNIFI_NETWORK_API_KEY= + # Optional # UNIFI_COLLECTION_MODE=auto # UNIFI_SITE_ID= diff --git a/unifi/README.md b/unifi/README.md index 8bbad5e..2c91bef 100644 --- a/unifi/README.md +++ b/unifi/README.md @@ -7,7 +7,8 @@ modify or call the Meraki runner. - `site-manager`: uses the official cloud Site Manager API at `https://api.ui.com/v1`. - `network`: uses the local UniFi Network Application Integration API under - `/proxy/network/integration/v1`. + `/proxy/network/integration/v1`, or the remote connector form under + `https://api.ui.com/v1/connector/consoles/{consoleId}/network/integration/v1`. - `both`: collects both surfaces. - `auto`: default. Uses the configured surface(s). @@ -23,6 +24,12 @@ UNIFI_SITE_MANAGER_API_KEY=... UNIFI_NETWORK_BASE_URL=https://192.168.1.1 UNIFI_NETWORK_API_KEY=... UNIFI_VERIFY_SSL=0 + +# Remote Network Application connector +# This usually requires an API key from the UniFi account/API key area with +# access to the console. A local Network Integrations key may return 401 here. +UNIFI_NETWORK_CONSOLE_ID=58D...:123 +UNIFI_NETWORK_API_KEY=... 
``` For the local Network Application API, create an API key in UniFi Network under @@ -36,6 +43,7 @@ available on a given controller. ```sh ./unifi/run.sh ./unifi/run.sh --mode network --no-open +./unifi/run.sh --mode network --console-id 58D...:123 --site-id default --no-open ./unifi/run.sh --report-only --keep-html --no-open ./unifi/run.sh --health-check ``` @@ -44,4 +52,3 @@ Outputs are written to: - `unifi/backups/latest/` for raw JSON backups - `unifi/reports/latest/` for `report.pdf`, `report.html`, and inventory data - diff --git a/unifi/collect.py b/unifi/collect.py index 4b1424a..8668596 100644 --- a/unifi/collect.py +++ b/unifi/collect.py @@ -12,7 +12,7 @@ ROOT = Path(__file__).resolve().parents[1] -NETWORK_PREFIX = "/proxy/network/integration/v1" +LOCAL_NETWORK_PREFIX = "/proxy/network/integration/v1" SOURCE_NOTES = [ { "name": "Official UniFi API overview", @@ -63,6 +63,21 @@ def _site_name(site: Dict[str, Any]) -> str: return str(site.get("name") or meta.get("name") or site.get("description") or _site_id(site) or "Default") +def _site_matches(site: Dict[str, Any], selector: str) -> bool: + if not selector: + return True + wanted = selector.strip().lower() + values = { + str(site.get("id") or ""), + str(site.get("siteId") or ""), + str(site.get("_id") or ""), + str(site.get("internalReference") or ""), + str(site.get("name") or ""), + _site_name(site), + } + return wanted in {value.strip().lower() for value in values if value} + + def _call_list(client: UniFiClient, path: str, *, style: str, label: str, errors: List[Dict[str, Any]]) -> List[Any]: try: return client.paged_get(path, style=style) @@ -101,22 +116,50 @@ def collect_site_manager(output: Path) -> Dict[str, Any]: return summary -def collect_network_application(output: Path, selected_site_id: str = "") -> Dict[str, Any]: +def _fatal_auth_errors(summary: Dict[str, Any]) -> List[Dict[str, Any]]: + fatal: List[Dict[str, Any]] = [] + for surface in ("siteManager", "networkApplication"): + payload = 
summary.get(surface) + if not isinstance(payload, dict) or not payload.get("enabled"): + continue + for error in payload.get("errors") or []: + if not isinstance(error, dict): + continue + if error.get("label") in {"site_manager_sites", "network_sites"} and error.get("status") in {401, 403}: + fatal.append({"surface": surface, **error}) + return fatal + + +def collect_network_application(output: Path, selected_site_id: str = "", console_id: str = "") -> Dict[str, Any]: api_key = os.getenv("UNIFI_NETWORK_API_KEY") or os.getenv("UNIFI_API_KEY") base_url = os.getenv("UNIFI_NETWORK_BASE_URL") or os.getenv("UNIFI_BASE_URL") - if not api_key or not base_url: - return {"enabled": False, "reason": "UNIFI_NETWORK_BASE_URL and UNIFI_NETWORK_API_KEY are not set"} + console_id = console_id or os.getenv("UNIFI_NETWORK_CONSOLE_ID", "") + if not api_key: + return {"enabled": False, "reason": "UNIFI_NETWORK_API_KEY is not set"} + if not base_url and not console_id: + return {"enabled": False, "reason": "Set UNIFI_NETWORK_BASE_URL for local access or UNIFI_NETWORK_CONSOLE_ID for remote connector access"} + + connection_type = "remote" if console_id and not base_url else "local" + if connection_type == "remote": + base_url = os.getenv("UNIFI_NETWORK_REMOTE_BASE_URL", "https://api.ui.com") + network_prefix = f"/v1/connector/consoles/{console_id}/network/integration/v1" + verify_ssl = True + else: + network_prefix = LOCAL_NETWORK_PREFIX + verify_ssl = _bool_env("UNIFI_VERIFY_SSL", False) client = UniFiClient( - base_url, + base_url or "", api_key, timeout=int(os.getenv("UNIFI_REQUEST_TIMEOUT", "30")), - verify_ssl=_bool_env("UNIFI_VERIFY_SSL", False), + verify_ssl=verify_ssl, ) errors: List[Dict[str, Any]] = [] summary: Dict[str, Any] = { "enabled": True, "baseUrl": client.base_url, + "connectionType": connection_type, + "consoleId": console_id or None, "verifySsl": client.verify_ssl, "files": {}, "counts": {}, @@ -124,16 +167,16 @@ def collect_network_application(output: Path, 
selected_site_id: str = "") -> Dic } try: - info = client.get_json(f"{NETWORK_PREFIX}/info") + info = client.get_json(f"{network_prefix}/info") except UniFiRequestError as exc: info = {"error": str(exc), "status": exc.status} - errors.append({"label": "network_info", "path": f"{NETWORK_PREFIX}/info", "status": exc.status, "error": str(exc)}) + errors.append({"label": "network_info", "path": f"{network_prefix}/info", "status": exc.status, "error": str(exc)}) _write_json(output / "network_info.json", info) summary["files"]["info"] = "network_info.json" - sites = _call_list(client, f"{NETWORK_PREFIX}/sites", style="offset", label="network_sites", errors=errors) + sites = _call_list(client, f"{network_prefix}/sites", style="offset", label="network_sites", errors=errors) if selected_site_id: - sites = [site for site in _items(sites) if _site_id(site) == selected_site_id] + sites = [site for site in _items(sites) if _site_matches(site, selected_site_id)] _write_json(output / "network_sites.json", sites) summary["files"]["sites"] = "network_sites.json" summary["counts"]["sites"] = len(sites) @@ -164,7 +207,7 @@ def collect_network_application(output: Path, selected_site_id: str = "") -> Dic safe = _safe_name(name or sid) site_summary: Dict[str, Any] = {"id": sid, "name": name, "files": {}, "counts": {}} for label, suffix in site_endpoints: - path = f"{NETWORK_PREFIX}/sites/{sid}/{suffix}" + path = f"{network_prefix}/sites/{sid}/{suffix}" data = _call_list(client, path, style="offset", label=f"{name}:{label}", errors=errors) rel = f"sites/{safe}/{label}.json" _write_json(output / rel, data) @@ -182,6 +225,7 @@ def main(argv: List[str] | None = None) -> int: parser = argparse.ArgumentParser(description="Collect UniFi Site Manager and Network Application data.") parser.add_argument("--mode", choices=["auto", "site-manager", "network", "both"], default=os.getenv("UNIFI_COLLECTION_MODE", "auto")) parser.add_argument("--site-id", default=os.getenv("UNIFI_SITE_ID", "")) + 
parser.add_argument("--console-id", default=os.getenv("UNIFI_NETWORK_CONSOLE_ID", "")) parser.add_argument("--output-dir", default=str(ROOT / "unifi" / "backups" / "latest")) args = parser.parse_args(argv) @@ -191,7 +235,15 @@ def main(argv: List[str] | None = None) -> int: mode = args.mode if mode == "auto": - has_network = bool((os.getenv("UNIFI_NETWORK_API_KEY") or os.getenv("UNIFI_API_KEY")) and (os.getenv("UNIFI_NETWORK_BASE_URL") or os.getenv("UNIFI_BASE_URL"))) + has_network = bool( + (os.getenv("UNIFI_NETWORK_API_KEY") or os.getenv("UNIFI_API_KEY")) + and ( + os.getenv("UNIFI_NETWORK_BASE_URL") + or os.getenv("UNIFI_BASE_URL") + or args.console_id + or os.getenv("UNIFI_NETWORK_CONSOLE_ID") + ) + ) has_site_manager = bool(os.getenv("UNIFI_SITE_MANAGER_API_KEY") or os.getenv("UNIFI_API_KEY")) if has_network and has_site_manager: mode = "both" @@ -201,7 +253,7 @@ def main(argv: List[str] | None = None) -> int: mode = "site-manager" else: print("Missing UniFi API configuration.", file=sys.stderr) - print("Set UNIFI_NETWORK_BASE_URL + UNIFI_NETWORK_API_KEY, or UNIFI_SITE_MANAGER_API_KEY.", file=sys.stderr) + print("Set UNIFI_NETWORK_BASE_URL or UNIFI_NETWORK_CONSOLE_ID with UNIFI_NETWORK_API_KEY, or set UNIFI_SITE_MANAGER_API_KEY.", file=sys.stderr) return 1 metadata: Dict[str, Any] = { @@ -210,19 +262,25 @@ def main(argv: List[str] | None = None) -> int: "effectiveMode": mode, "sourceNotes": SOURCE_NOTES, "siteIdFilter": args.site_id or None, + "consoleId": args.console_id or None, } summary: Dict[str, Any] = {"metadata": metadata} if mode in {"site-manager", "both"}: summary["siteManager"] = collect_site_manager(output) if mode in {"network", "both"}: - summary["networkApplication"] = collect_network_application(output, args.site_id) + summary["networkApplication"] = collect_network_application(output, args.site_id, args.console_id) _write_json(output / "collection_summary.json", summary) print(f"Collected UniFi data into {output}") print(json.dumps(summary, 
indent=2)) + fatal = _fatal_auth_errors(summary) + if fatal: + print("Fatal UniFi authorization failure on required site-discovery endpoint.", file=sys.stderr) + for err in fatal: + print(f"- {err.get('surface')} {err.get('label')}: HTTP {err.get('status')}", file=sys.stderr) + return 1 return 0 if __name__ == "__main__": raise SystemExit(main()) - diff --git a/unifi/health.py b/unifi/health.py index ed334b4..1914131 100644 --- a/unifi/health.py +++ b/unifi/health.py @@ -21,7 +21,14 @@ def main(argv: list[str] | None = None) -> int: print(f"Python: {sys.version.split()[0]}") site_manager = bool(os.getenv("UNIFI_SITE_MANAGER_API_KEY") or os.getenv("UNIFI_API_KEY")) - network = bool((os.getenv("UNIFI_NETWORK_API_KEY") or os.getenv("UNIFI_API_KEY")) and (os.getenv("UNIFI_NETWORK_BASE_URL") or os.getenv("UNIFI_BASE_URL"))) + network = bool( + (os.getenv("UNIFI_NETWORK_API_KEY") or os.getenv("UNIFI_API_KEY")) + and ( + os.getenv("UNIFI_NETWORK_BASE_URL") + or os.getenv("UNIFI_BASE_URL") + or os.getenv("UNIFI_NETWORK_CONSOLE_ID") + ) + ) print(f"Site Manager API config: {'ok' if site_manager else 'missing'}") print(f"Network Application API config: {'ok' if network else 'missing'}") @@ -34,7 +41,7 @@ def main(argv: list[str] | None = None) -> int: print(f"Existing UniFi backup: missing ({backup_summary})") elif not site_manager and not network: failures += 1 - print("Set either UNIFI_SITE_MANAGER_API_KEY or UNIFI_NETWORK_BASE_URL + UNIFI_NETWORK_API_KEY.") + print("Set either UNIFI_SITE_MANAGER_API_KEY, or UNIFI_NETWORK_API_KEY plus UNIFI_NETWORK_BASE_URL/UNIFI_NETWORK_CONSOLE_ID.") try: import weasyprint # noqa: F401 diff --git a/unifi/run.sh b/unifi/run.sh index 61ee1db..9b5e5c7 100755 --- a/unifi/run.sh +++ b/unifi/run.sh @@ -10,6 +10,8 @@ usage() { echo " --mode " echo " API collection mode. 
Default: auto"
   echo "  --site-id <id>     Limit local Network Application collection to one site ID"
+  echo "  --console-id <id>"
+  echo "                     Use api.ui.com remote connector for this console ID"
   echo "  --report-only      Skip API collection; build report from unifi/backups/latest"
   echo "  --backups-dir <dir>"
@@ -23,6 +25,7 @@ usage() {
   echo "  Env examples:"
   echo "    UNIFI_SITE_MANAGER_API_KEY=... ./unifi/run.sh"
   echo "    UNIFI_NETWORK_BASE_URL=https://192.168.1.1 UNIFI_NETWORK_API_KEY=... ./unifi/run.sh"
+  echo "    UNIFI_NETWORK_CONSOLE_ID=58D...:123 UNIFI_NETWORK_API_KEY=... ./unifi/run.sh"
   echo "    UNIFI_VERIFY_SSL=0 ./unifi/run.sh --mode network"
 }
@@ -32,6 +35,7 @@
 NO_OPEN=0
 HEALTH_CHECK=0
 KEEP_HTML=0
 SITE_ID="${UNIFI_SITE_ID:-}"
+CONSOLE_ID="${UNIFI_NETWORK_CONSOLE_ID:-}"
 BACKUPS_DIR="unifi/backups/latest"
 REPORTS_DIR="unifi/reports/latest"
@@ -53,6 +57,14 @@ while [[ $# -gt 0 ]]; do
         fi
         shift 2
         ;;
+    --console-id)
+        CONSOLE_ID="${2:-}"
+        if [[ -z "$CONSOLE_ID" || "$CONSOLE_ID" == --* ]]; then
+            echo "Missing value for $1" >&2
+            exit 2
+        fi
+        shift 2
+        ;;
     --report-only)
         REPORT_ONLY=1
         shift
@@ -143,6 +155,9 @@ if (( failures == 0 )); then
         if [[ -n "$SITE_ID" ]]; then
             collect_args+=(--site-id "$SITE_ID")
         fi
+        if [[ -n "$CONSOLE_ID" ]]; then
+            collect_args+=(--console-id "$CONSOLE_ID")
+        fi
         run_stage "Query UniFi API" "$PYTHON_BIN" -m unifi.collect "${collect_args[@]}" || failures=$((failures + 1))
     else
         echo ""
From c69c36854e413dd0ba5aecd2595117e850dbe944 Mon Sep 17 00:00:00 2001
From: "techmore.co"
Date: Tue, 5 May 2026 23:16:17 -0400
Subject: [PATCH 19/47] Load UniFi env before parsing defaults

---
 unifi/collect.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/unifi/collect.py b/unifi/collect.py
index 8668596..dfda534 100644
--- a/unifi/collect.py
+++ b/unifi/collect.py
@@ -222,6 +222,7 @@ def collect_network_application(output: Path, selected_site_id: str = "", consol
 def main(argv: List[str] | None = None) -> int:
+
load_env() parser = argparse.ArgumentParser(description="Collect UniFi Site Manager and Network Application data.") parser.add_argument("--mode", choices=["auto", "site-manager", "network", "both"], default=os.getenv("UNIFI_COLLECTION_MODE", "auto")) parser.add_argument("--site-id", default=os.getenv("UNIFI_SITE_ID", "")) @@ -229,7 +230,6 @@ def main(argv: List[str] | None = None) -> int: parser.add_argument("--output-dir", default=str(ROOT / "unifi" / "backups" / "latest")) args = parser.parse_args(argv) - load_env() output = Path(args.output_dir) output.mkdir(parents=True, exist_ok=True) From 3f5598d18853fa802bfc1d582bd325e9c58c0997 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 23:21:09 -0400 Subject: [PATCH 20/47] Add UniFi saved site profile runner --- tests/test_unifi_report.py | 41 +++++++++++ unifi/.env.example | 13 ++++ unifi/README.md | 21 ++++++ unifi/profiles.py | 73 ++++++++++++++++++ unifi/report.py | 22 +++++- unifi/run.sh | 33 +++++++++ unifi/run_sites.py | 147 +++++++++++++++++++++++++++++++++++++ 7 files changed, 349 insertions(+), 1 deletion(-) create mode 100644 unifi/profiles.py create mode 100644 unifi/run_sites.py diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 3ccc0aa..007d3a5 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -2,6 +2,7 @@ from pathlib import Path from unifi.report import build_report +from unifi.profiles import discover_site_profiles def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): @@ -58,3 +59,43 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "USW-48" in html assert "Staff WiFi" in html assert "Firewall Zones" in html + + +def test_unifi_profiles_discovers_numbered_site_profiles(monkeypatch): + monkeypatch.setenv("UNIFI_SITE1_NAME", "First Campus") + monkeypatch.setenv("UNIFI_SITE1_API_KEY", "secret-one") + monkeypatch.setenv("UNIFI_SITE1_CONSOLE_ID", "console-1") + 
monkeypatch.setenv("UNIFI_SITE1_SITE_ID", "default") + monkeypatch.setenv("UNIFI_SITE2_API_KEY", "secret-two") + monkeypatch.setenv("UNIFI_SITE2_BASE_URL", "https://10.0.0.1") + + profiles = discover_site_profiles(load_files=False) + + assert [profile.key for profile in profiles] == ["site1", "site2"] + assert profiles[0].safe_name == "First_Campus" + assert profiles[0].env_updates()["UNIFI_NETWORK_CONSOLE_ID"] == "console-1" + assert profiles[1].env_updates()["UNIFI_NETWORK_BASE_URL"] == "https://10.0.0.1" + + +def test_unifi_report_surfaces_remote_connector_auth_guidance(tmp_path: Path): + source = tmp_path / "backup" + source.mkdir() + (source / "collection_summary.json").write_text( + json.dumps( + { + "metadata": {"requestedMode": "network", "effectiveMode": "network"}, + "networkApplication": { + "enabled": True, + "connectionType": "remote", + "errors": [{"label": "network_sites", "status": 401, "path": "/remote", "error": "unauthorized"}], + }, + } + ), + encoding="utf-8", + ) + output = tmp_path / "report" + paths = build_report(str(source), str(output)) + html = Path(paths["html"]).read_text(encoding="utf-8") + + assert "Credential / Access Fix" in html + assert "cloud/account API key with console access" in html diff --git a/unifi/.env.example b/unifi/.env.example index f3206db..01789c8 100644 --- a/unifi/.env.example +++ b/unifi/.env.example @@ -16,3 +16,16 @@ # UNIFI_COLLECTION_MODE=auto # UNIFI_SITE_ID= # UNIFI_REQUEST_TIMEOUT=30 + +# Optional saved profiles for ./unifi/run.sh --all-sites +# UNIFI_SITE1_NAME=First Campus +# UNIFI_SITE1_CONSOLE_ID=58D...:123 +# UNIFI_SITE1_API_KEY= +# UNIFI_SITE1_SITE_ID=default +# UNIFI_SITE1_BASE_URL=https:// +# +# UNIFI_SITE2_NAME=Second Campus +# UNIFI_SITE2_CONSOLE_ID=58D...:456 +# UNIFI_SITE2_API_KEY= +# UNIFI_SITE2_SITE_ID=default +# UNIFI_SITE2_BASE_URL=https:// diff --git a/unifi/README.md b/unifi/README.md index 2c91bef..a8c3bb1 100644 --- a/unifi/README.md +++ b/unifi/README.md @@ -32,6 +32,20 @@ 
UNIFI_NETWORK_CONSOLE_ID=58D...:123 UNIFI_NETWORK_API_KEY=... ``` +For multiple saved customer/site entries, add numbered profile variables: + +```sh +UNIFI_SITE1_NAME=First Campus +UNIFI_SITE1_API_KEY=... +UNIFI_SITE1_CONSOLE_ID=58D...:123 +UNIFI_SITE1_SITE_ID=default + +UNIFI_SITE2_NAME=Second Campus +UNIFI_SITE2_API_KEY=... +UNIFI_SITE2_BASE_URL=https://192.168.10.1 +UNIFI_SITE2_SITE_ID=default +``` + For the local Network Application API, create an API key in UniFi Network under Settings > Control Plane > Integrations. Ubiquiti says the local Network API documentation is specific to the installed Network version, so the collector @@ -44,6 +58,8 @@ available on a given controller. ./unifi/run.sh ./unifi/run.sh --mode network --no-open ./unifi/run.sh --mode network --console-id 58D...:123 --site-id default --no-open +./unifi/run.sh --all-sites --no-open +./unifi/run.sh --all-sites --profile site1 --no-open ./unifi/run.sh --report-only --keep-html --no-open ./unifi/run.sh --health-check ``` @@ -52,3 +68,8 @@ Outputs are written to: - `unifi/backups/latest/` for raw JSON backups - `unifi/reports/latest/` for `report.pdf`, `report.html`, and inventory data + +When `--all-sites` is used, outputs are separated by saved profile: + +- `unifi/backups/sites/site1/` +- `unifi/reports/sites/site1/` diff --git a/unifi/profiles.py b/unifi/profiles.py new file mode 100644 index 0000000..6e1385a --- /dev/null +++ b/unifi/profiles.py @@ -0,0 +1,73 @@ +import os +import re +from dataclasses import dataclass +from typing import Dict, Iterable, List + +from .env import load_env + + +@dataclass(frozen=True) +class UniFiSiteProfile: + key: str + name: str + api_key: str + site_id: str = "default" + console_id: str = "" + base_url: str = "" + verify_ssl: str = "0" + + @property + def safe_name(self) -> str: + clean = "".join(ch if ch.isalnum() or ch in "-_" else "_" for ch in self.name.strip()) + return clean.strip("_") or self.key + + def env_updates(self) -> Dict[str, str]: + 
updates = { + "UNIFI_COLLECTION_MODE": "network", + "UNIFI_NETWORK_API_KEY": self.api_key, + "UNIFI_SITE_ID": self.site_id or "default", + "UNIFI_VERIFY_SSL": self.verify_ssl or "0", + } + if self.base_url: + updates["UNIFI_NETWORK_BASE_URL"] = self.base_url + if self.console_id: + updates["UNIFI_NETWORK_CONSOLE_ID"] = self.console_id + return updates + + +def discover_site_profiles(*, load_files: bool = True) -> List[UniFiSiteProfile]: + if load_files: + load_env() + + indexes = sorted( + {int(match.group(1)) for key in os.environ for match in [re.match(r"UNIFI_SITE(\d+)_", key)] if match} + ) + profiles: List[UniFiSiteProfile] = [] + for index in indexes: + prefix = f"UNIFI_SITE{index}_" + api_key = os.getenv(f"{prefix}API_KEY", "") + console_id = os.getenv(f"{prefix}CONSOLE_ID", "") + base_url = os.getenv(f"{prefix}BASE_URL", "") + if not api_key or not (console_id or base_url): + continue + profiles.append( + UniFiSiteProfile( + key=f"site{index}", + name=os.getenv(f"{prefix}NAME", f"site{index}"), + api_key=api_key, + site_id=os.getenv(f"{prefix}SITE_ID", "default"), + console_id=console_id, + base_url=base_url, + verify_ssl=os.getenv(f"{prefix}VERIFY_SSL", os.getenv("UNIFI_VERIFY_SSL", "0")), + ) + ) + return profiles + + +def profile_by_key(profiles: Iterable[UniFiSiteProfile], selector: str) -> UniFiSiteProfile | None: + wanted = selector.strip().lower() + for profile in profiles: + if wanted in {profile.key.lower(), profile.name.lower(), profile.safe_name.lower()}: + return profile + return None + diff --git a/unifi/report.py b/unifi/report.py index 05b810e..d0ce375 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -90,6 +90,23 @@ def _read_site_file(source: Path, site_summary: Dict[str, Any], key: str) -> Lis return _items(_load_json(source / rel, [])) if rel else [] +def _auth_guidance(sm: Dict[str, Any], net: Dict[str, Any]) -> List[str]: + guidance: List[str] = [] + for error in list(sm.get("errors") or []) + list(net.get("errors") or []): + if not 
isinstance(error, dict) or error.get("status") not in {401, 403}: + continue + label = str(error.get("label") or "") + if label == "network_sites" and net.get("connectionType") == "remote": + guidance.append( + "Remote connector returned authorization failure. Use a cloud/account API key with console access, or switch this profile to local Network Integration collection with UNIFI_NETWORK_BASE_URL." + ) + elif label == "network_sites": + guidance.append("Local Network Integration API returned authorization failure. Confirm the key was created in this UniFi Network application and has read access.") + elif label == "site_manager_sites": + guidance.append("Site Manager returned authorization failure. Use a Site Manager/API key from the UniFi account API area, not a local Network Integration key.") + return sorted(set(guidance)) + + def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: source = Path(source_dir) output = Path(output_dir) @@ -146,6 +163,10 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: error_rows = [[e.get("label", ""), e.get("status", ""), e.get("path", ""), e.get("error", "")[:180]] for e in errors] sections.append("
<h2>Endpoint Gaps / Errors</h2>") sections.append(_table(["Endpoint", "Status", "Path", "Error"], error_rows, "No endpoint errors captured.")) + auth_guidance = _auth_guidance(sm, net) + if auth_guidance: + sections.append("<h2>Credential / Access Fix</h2>") + sections.append("<ul>" + "".join(f"<li>{html.escape(item)}</li>" for item in auth_guidance) + "</ul>") sections.append("") sections.append("<h2>Device Inventory</h2>
    ") @@ -322,4 +343,3 @@ def main(argv: List[str] | None = None) -> int: if __name__ == "__main__": raise SystemExit(main()) - diff --git a/unifi/run.sh b/unifi/run.sh index 9b5e5c7..0a62f13 100755 --- a/unifi/run.sh +++ b/unifi/run.sh @@ -12,6 +12,8 @@ usage() { echo " --site-id Limit local Network Application collection to one site ID" echo " --console-id " echo " Use api.ui.com remote connector for this console ID" + echo " --all-sites Run every UNIFI_SITE_* profile from unifi/.env" + echo " --profile With --all-sites, run one saved profile, e.g. site1" echo " --report-only Skip API collection; build report from unifi/backups/latest" echo " --backups-dir " echo " Backup JSON directory. Default: unifi/backups/latest" @@ -36,6 +38,8 @@ HEALTH_CHECK=0 KEEP_HTML=0 SITE_ID="${UNIFI_SITE_ID:-}" CONSOLE_ID="${UNIFI_NETWORK_CONSOLE_ID:-}" +ALL_SITES=0 +PROFILE="" BACKUPS_DIR="unifi/backups/latest" REPORTS_DIR="unifi/reports/latest" @@ -65,6 +69,20 @@ while [[ $# -gt 0 ]]; do fi shift 2 ;; + --all-sites) + ALL_SITES=1 + BACKUPS_DIR="unifi/backups/sites" + REPORTS_DIR="unifi/reports/sites" + shift + ;; + --profile) + PROFILE="${2:-}" + if [[ -z "$PROFILE" || "$PROFILE" == --* ]]; then + echo "Missing value for $1" >&2 + exit 2 + fi + shift 2 + ;; --report-only) REPORT_ONLY=1 shift @@ -133,6 +151,21 @@ echo "Mode: $MODE" echo "Backups: $BACKUPS_DIR" echo "Reports: $REPORTS_DIR" +if (( ALL_SITES == 1 )); then + multi_args=(--mode network --backups-dir "$BACKUPS_DIR" --reports-dir "$REPORTS_DIR") + if [[ -n "$PROFILE" ]]; then + multi_args+=(--profile "$PROFILE") + fi + if (( REPORT_ONLY == 1 )); then + multi_args+=(--report-only) + fi + if (( KEEP_HTML == 0 )); then + multi_args+=(--pdf-only) + fi + "$PYTHON_BIN" -m unifi.run_sites "${multi_args[@]}" + exit $? 
+fi + if (( HEALTH_CHECK == 1 )); then health_args=(--backups-dir "$BACKUPS_DIR") if (( REPORT_ONLY == 1 )); then diff --git a/unifi/run_sites.py b/unifi/run_sites.py new file mode 100644 index 0000000..005c602 --- /dev/null +++ b/unifi/run_sites.py @@ -0,0 +1,147 @@ +#!/usr/bin/env python3 +import argparse +import json +import os +import subprocess +import sys +from contextlib import contextmanager +from pathlib import Path +from typing import Dict, Iterator, List + +from . import collect, report +from .profiles import UniFiSiteProfile, discover_site_profiles, profile_by_key + + +ROOT = Path(__file__).resolve().parents[1] + + +@contextmanager +def _profile_environment(profile: UniFiSiteProfile) -> Iterator[None]: + updates = profile.env_updates() + clears = [ + "UNIFI_NETWORK_BASE_URL", + "UNIFI_BASE_URL", + "UNIFI_NETWORK_CONSOLE_ID", + "UNIFI_SITE_MANAGER_API_KEY", + "UNIFI_API_KEY", + ] + previous: Dict[str, str | None] = {key: os.environ.get(key) for key in set(clears) | set(updates)} + try: + for key in clears: + os.environ.pop(key, None) + os.environ.update(updates) + yield + finally: + for key, value in previous.items(): + if value is None: + os.environ.pop(key, None) + else: + os.environ[key] = value + + +def _run_inventory(backups_dir: Path, reports_dir: Path) -> int: + return subprocess.run( + [ + sys.executable, + "-m", + "unifi.inventory", + "--backups-dir", + str(backups_dir), + "--reports-dir", + str(reports_dir), + ], + check=False, + ).returncode + + +def _run_one(profile: UniFiSiteProfile, args: argparse.Namespace) -> Dict[str, object]: + backups_dir = Path(args.backups_dir) / profile.safe_name + reports_dir = Path(args.reports_dir) / profile.safe_name + result: Dict[str, object] = { + "profile": profile.key, + "name": profile.name, + "safeName": profile.safe_name, + "backupsDir": str(backups_dir), + "reportsDir": str(reports_dir), + "collectionStatus": "skipped" if args.report_only else "pending", + "reportStatus": "pending", + } + + print("") + 
print(f"=== UniFi profile: {profile.name} ({profile.key}) ===") + print(f"Backups: {backups_dir}") + print(f"Reports: {reports_dir}") + + with _profile_environment(profile): + collect_status = 0 + if args.report_only: + print("Collection skipped by --report-only") + if not (backups_dir / "collection_summary.json").exists(): + result["reportStatus"] = "missing_backup" + print(f"Missing backup summary: {backups_dir / 'collection_summary.json'}") + return result + else: + collect_args = ["--mode", args.mode, "--site-id", profile.site_id, "--output-dir", str(backups_dir)] + if profile.console_id and not profile.base_url: + collect_args.extend(["--console-id", profile.console_id]) + collect_status = collect.main(collect_args) + result["collectionStatus"] = "ok" if collect_status == 0 else "failed" + + if (backups_dir / "collection_summary.json").exists(): + try: + paths = report.build_report(str(backups_dir), str(reports_dir)) + if args.pdf_only and paths.get("pdf"): + try: + Path(str(paths["html"])).unlink() + except FileNotFoundError: + pass + inventory_status = _run_inventory(backups_dir, reports_dir) + result["reportStatus"] = "ok" if inventory_status == 0 else "inventory_failed" + result["report"] = paths + except Exception as exc: + result["reportStatus"] = "failed" + result["error"] = str(exc) + else: + result["reportStatus"] = "missing_backup" + + if collect_status != 0: + result["failed"] = True + if result.get("reportStatus") != "ok": + result["failed"] = True + return result + + +def main(argv: List[str] | None = None) -> int: + parser = argparse.ArgumentParser(description="Run UniFi collection/reporting for saved site profiles.") + parser.add_argument("--mode", choices=["network"], default="network") + parser.add_argument("--profile", default="", help="Run one profile by key/name, for example site1") + parser.add_argument("--report-only", action="store_true") + parser.add_argument("--backups-dir", default=str(ROOT / "unifi" / "backups" / "sites")) + 
parser.add_argument("--reports-dir", default=str(ROOT / "unifi" / "reports" / "sites")) + parser.add_argument("--pdf-only", action="store_true") + args = parser.parse_args(argv) + + profiles = discover_site_profiles() + if args.profile: + selected = profile_by_key(profiles, args.profile) + profiles = [selected] if selected else [] + if not profiles: + print("No saved UniFi site profiles found. Add UNIFI_SITE1_API_KEY plus UNIFI_SITE1_BASE_URL or UNIFI_SITE1_CONSOLE_ID in unifi/.env.", file=sys.stderr) + return 1 + + results = [_run_one(profile, args) for profile in profiles] + manifest = { + "profiles": results, + "ok": not any(result.get("failed") for result in results), + } + reports_root = Path(args.reports_dir) + reports_root.mkdir(parents=True, exist_ok=True) + (reports_root / "site_run_manifest.json").write_text(json.dumps(manifest, indent=2), encoding="utf-8") + print("") + print(f"Site run manifest: {reports_root / 'site_run_manifest.json'}") + return 0 if manifest["ok"] else 1 + + +if __name__ == "__main__": + raise SystemExit(main()) + From d20ff4874e460346bfee40487ad29e1ddf5b95d7 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 23:25:19 -0400 Subject: [PATCH 21/47] Improve UniFi local report parsing --- tests/test_unifi_report.py | 19 ++++++++- unifi/collect.py | 10 ++--- unifi/report.py | 87 ++++++++++++++++++++++++++++++++------ 3 files changed, 97 insertions(+), 19 deletions(-) diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 007d3a5..6a29d9a 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -40,6 +40,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): json.dumps( [ {"name": "U7-Pro-1", "model": "U7-Pro", "type": "access point", "state": "ONLINE", "ipAddress": "10.1.1.10"}, + {"name": "IW HD", "model": "IW HD", "features": ["switching", "accessPoint"], "state": "ONLINE", "ipAddress": "10.1.1.11"}, {"name": "USW-48", "model": "USW-Pro-48-PoE", "type": 
"switch", "state": "ONLINE", "ipAddress": "10.1.1.20"}, ] ), @@ -47,7 +48,20 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): ) (site_dir / "clients.json").write_text(json.dumps([{"hostname": "client-1", "ipAddress": "10.10.0.50"}]), encoding="utf-8") (site_dir / "networks.json").write_text(json.dumps([{"name": "Staff", "vlanId": 100, "subnet": "10.100.0.0/16", "dhcpMode": "server"}]), encoding="utf-8") - (site_dir / "wifi.json").write_text(json.dumps([{"name": "Staff WiFi", "enabled": True, "security": "WPA3"}]), encoding="utf-8") + (site_dir / "wifi.json").write_text( + json.dumps( + [ + { + "name": "Staff WiFi", + "enabled": True, + "securityConfiguration": {"type": "WPA3"}, + "network": {"type": "NATIVE"}, + "broadcastingFrequenciesGHz": [2.4, 5], + } + ] + ), + encoding="utf-8", + ) (site_dir / "firewall_zones.json").write_text(json.dumps([{"name": "Internal", "id": "zone-1"}]), encoding="utf-8") output = tmp_path / "report" @@ -56,8 +70,11 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): html = Path(paths["html"]).read_text(encoding="utf-8") assert "TM UniFi Baseline" in html assert "U7-Pro-1" in html + assert "IW HD" in html assert "USW-48" in html assert "Staff WiFi" in html + assert "WPA3" in html + assert "NATIVE" in html assert "Firewall Zones" in html diff --git a/unifi/collect.py b/unifi/collect.py index dfda534..9b5c6f4 100644 --- a/unifi/collect.py +++ b/unifi/collect.py @@ -185,16 +185,16 @@ def collect_network_application(output: Path, selected_site_id: str = "", consol ("devices", "devices"), ("clients", "clients"), ("networks", "networks"), - ("wifi", "wifi"), + ("wifi", "wifi/broadcasts"), ("hotspot_vouchers", "hotspot/vouchers"), ("firewall_zones", "firewall/zones"), ("firewall_policies", "firewall/policies"), ("acl_rules", "acl-rules"), - ("traffic_lists", "traffic-lists"), + ("traffic_lists", "traffic-matching-lists"), ("wans", "wans"), - ("vpn_servers", "vpn-servers"), - 
("vpn_tunnels", "vpn-tunnels"), - ("radius", "radius"), + ("vpn_servers", "vpn/servers"), + ("vpn_tunnels", "vpn/tunnels"), + ("radius", "radius/profiles"), ("dns_policies", "dns/policies"), ) diff --git a/unifi/report.py b/unifi/report.py index d0ce375..709e0fd 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -46,13 +46,14 @@ def _nested(item: Dict[str, Any], path: Iterable[str], default: str = "") -> str def _device_role(device: Dict[str, Any]) -> str: + features = {str(feature).lower() for feature in device.get("features", []) if feature} raw = " ".join(str(device.get(k, "")) for k in ("type", "model", "modelName", "name", "displayName")).lower() - if any(token in raw for token in ("access point", "uap", "u7", "u6", "ap ")): + if "accesspoint" in features or any(token in raw for token in ("access point", "uap", "u7", "u6", "ap ", "ac pro", "iw hd")): return "Access Point" - if any(token in raw for token in ("switch", "usw")): - return "Switch" if any(token in raw for token in ("gateway", "udm", "uxg", "ucg", "router")): return "Gateway" + if "switching" in features or any(token in raw for token in ("switch", "usw")): + return "Switch" return _first(device, ("type", "productLine", "category"), "Device") @@ -90,6 +91,48 @@ def _read_site_file(source: Path, site_summary: Dict[str, Any], key: str) -> Lis return _items(_load_json(source / rel, [])) if rel else [] +def _action_label(policy: Dict[str, Any]) -> str: + action = policy.get("action") + if isinstance(action, dict): + label = str(action.get("type") or "") + if action.get("allowReturnTraffic") is True: + label = f"{label} (return allowed)" if label else "return allowed" + return label + return str(action or "") + + +def _zone_label(value: Any, zone_names: Dict[str, str]) -> str: + if isinstance(value, dict): + zone_id = str(value.get("zoneId") or "") + if zone_id: + return zone_names.get(zone_id, zone_id) + traffic = value.get("trafficFilter") + if isinstance(traffic, dict): + return 
str(traffic.get("type") or "traffic filter") + return str(value or "") + + +def _wifi_network_label(wlan: Dict[str, Any]) -> str: + network = wlan.get("network") + if isinstance(network, dict): + return str(network.get("name") or network.get("id") or network.get("type") or "") + return _first(wlan, ("networkId", "networkName", "vlanId")) + + +def _wifi_security_label(wlan: Dict[str, Any]) -> str: + security = wlan.get("securityConfiguration") + if isinstance(security, dict): + return str(security.get("type") or security.get("authenticationType") or "") + return _first(wlan, ("securityProtocol", "security", "authMode")) + + +def _wifi_band_label(wlan: Dict[str, Any]) -> str: + bands = wlan.get("broadcastingFrequenciesGHz") + if isinstance(bands, list): + return ", ".join(str(band) for band in bands) + return _first(wlan, ("band", "apGroupIds")) + + def _auth_guidance(sm: Dict[str, Any], net: Dict[str, Any]) -> List[str]: guidance: List[str] = [] for error in list(sm.get("errors") or []) + list(net.get("errors") or []): @@ -197,13 +240,13 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: for netw in networks: rows.append([ _first(netw, ("name", "displayName")), - _first(netw, ("purpose", "type")), _first(netw, ("vlanId", "vlan", "vlan_id")), - _first(netw, ("subnet", "ipSubnet", "networkGroup")), - _first(netw, ("gatewayIp", "gateway", "dhcpRelayServer")), - _first(netw, ("dhcpMode", "dhcpEnabled", "dhcpd_enabled")), + _first(netw, ("enabled",)), + _first(netw, ("default",)), + _first(netw, ("management",)), + _first(netw, ("zoneId",)), ]) - sections.append(_table(["Network", "Purpose", "VLAN", "Subnet", "Gateway", "DHCP"], rows, "No network/VLAN endpoint data captured for this site.")) + sections.append(_table(["Network", "VLAN", "Enabled", "Default", "Management", "Zone ID"], rows, "No network/VLAN endpoint data captured for this site.")) if not site_summaries: sections.append("
<p>No local Network Application site detail captured yet.</p>") sections.append("
    ") @@ -216,9 +259,9 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: rows.append([ _first(wlan, ("name", "ssid")), _first(wlan, ("enabled", "isEnabled")), - _first(wlan, ("securityProtocol", "security", "authMode")), - _first(wlan, ("networkId", "networkName", "vlanId")), - _first(wlan, ("band", "apGroupIds")), + _wifi_security_label(wlan), + _wifi_network_label(wlan), + _wifi_band_label(wlan), ]) sections.append(f"
<h3>{html.escape(str(site.get('name') or 'Site'))}</h3>") sections.append(_table(["SSID", "Enabled", "Security", "Network / VLAN", "Band / AP Groups"], rows, "No WiFi endpoint data captured for this site.")) @@ -239,6 +282,8 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append("
<h2>Firewall and Policy Backup</h2>") for site in site_summaries: sections.append(f"<h3>{html.escape(str(site.get('name') or 'Site'))}</h3>
    ") + zones = _read_site_file(source, site, "firewall_zones") + zone_names = {str(zone.get("id")): str(zone.get("name") or zone.get("id")) for zone in zones if zone.get("id")} for key, label in ( ("firewall_zones", "Firewall Zones"), ("firewall_policies", "Firewall Policies"), @@ -247,9 +292,25 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: ("dns_policies", "DNS Policies"), ): data = _read_site_file(source, site, key) - rows = [[_first(item, ("name", "description", "id")), _first(item, ("enabled", "action", "type")), _first(item, ("id", "_id"))] for item in data[:100]] + if key == "firewall_policies": + rows = [ + [ + _first(item, ("index",)), + _first(item, ("name", "description", "id")), + _first(item, ("enabled",)), + _action_label(item), + _zone_label(item.get("source"), zone_names), + _zone_label(item.get("destination"), zone_names), + _first(item, ("loggingEnabled",)), + ] + for item in data[:120] + ] + headers = ["Order", "Name", "Enabled", "Action", "Source", "Destination", "Logging"] + else: + rows = [[_first(item, ("name", "description", "id")), _first(item, ("enabled", "action", "type")), _first(item, ("id", "_id"))] for item in data[:100]] + headers = ["Name", "State / Action", "ID"] sections.append(f"
<h4>{html.escape(label)}</h4>") - sections.append(_table(["Name", "State / Action", "ID"], rows, f"No {label.lower()} endpoint data captured.")) + sections.append(_table(headers, rows, f"No {label.lower()} endpoint data captured.")) sections.append("
    ") sections.append("
<h2>Raw Backup Files</h2>
    ") From 0601a030e1b2acb4bc160d04e5d166b0b4175000 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 23:30:08 -0400 Subject: [PATCH 22/47] Improve UniFi endpoint and connectivity reporting --- tests/test_unifi_report.py | 25 +++++++++++++++++++++++++ unifi/collect.py | 20 ++++++++++++++++++++ unifi/report.py | 10 ++++++---- 3 files changed, 51 insertions(+), 4 deletions(-) diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 6a29d9a..add06a3 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -116,3 +116,28 @@ def test_unifi_report_surfaces_remote_connector_auth_guidance(tmp_path: Path): assert "Credential / Access Fix" in html assert "cloud/account API key with console access" in html + + +def test_unifi_report_surfaces_local_connectivity_guidance(tmp_path: Path): + source = tmp_path / "backup" + source.mkdir() + (source / "collection_summary.json").write_text( + json.dumps( + { + "metadata": {"requestedMode": "network", "effectiveMode": "network"}, + "networkApplication": { + "enabled": True, + "connectionType": "local", + "errors": [{"label": "network_sites", "status": None, "path": "/local", "error": "timed out"}], + }, + } + ), + encoding="utf-8", + ) + output = tmp_path / "report" + paths = build_report(str(source), str(output)) + html = Path(paths["html"]).read_text(encoding="utf-8") + + assert "Credential / Access Fix" in html + assert "Local UniFi console could not be reached" in html + assert "UNIFI_NETWORK_BASE_URL" in html diff --git a/unifi/collect.py b/unifi/collect.py index 9b5c6f4..edd7331 100644 --- a/unifi/collect.py +++ b/unifi/collect.py @@ -130,6 +130,20 @@ def _fatal_auth_errors(summary: Dict[str, Any]) -> List[Dict[str, Any]]: return fatal +def _fatal_connectivity_errors(summary: Dict[str, Any]) -> List[Dict[str, Any]]: + fatal: List[Dict[str, Any]] = [] + for surface in ("siteManager", "networkApplication"): + payload = summary.get(surface) + if not isinstance(payload, dict) or 
not payload.get("enabled"): + continue + for error in payload.get("errors") or []: + if not isinstance(error, dict): + continue + if error.get("label") in {"site_manager_sites", "network_sites"} and error.get("status") is None: + fatal.append({"surface": surface, **error}) + return fatal + + def collect_network_application(output: Path, selected_site_id: str = "", console_id: str = "") -> Dict[str, Any]: api_key = os.getenv("UNIFI_NETWORK_API_KEY") or os.getenv("UNIFI_API_KEY") base_url = os.getenv("UNIFI_NETWORK_BASE_URL") or os.getenv("UNIFI_BASE_URL") @@ -279,6 +293,12 @@ def main(argv: List[str] | None = None) -> int: for err in fatal: print(f"- {err.get('surface')} {err.get('label')}: HTTP {err.get('status')}", file=sys.stderr) return 1 + fatal = _fatal_connectivity_errors(summary) + if fatal: + print("Fatal UniFi connectivity failure on required site-discovery endpoint.", file=sys.stderr) + for err in fatal: + print(f"- {err.get('surface')} {err.get('label')}: {err.get('error')}", file=sys.stderr) + return 1 return 0 diff --git a/unifi/report.py b/unifi/report.py index 709e0fd..5f1d586 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -136,17 +136,19 @@ def _wifi_band_label(wlan: Dict[str, Any]) -> str: def _auth_guidance(sm: Dict[str, Any], net: Dict[str, Any]) -> List[str]: guidance: List[str] = [] for error in list(sm.get("errors") or []) + list(net.get("errors") or []): - if not isinstance(error, dict) or error.get("status") not in {401, 403}: + if not isinstance(error, dict): continue label = str(error.get("label") or "") - if label == "network_sites" and net.get("connectionType") == "remote": + if error.get("status") in {401, 403} and label == "network_sites" and net.get("connectionType") == "remote": guidance.append( "Remote connector returned authorization failure. Use a cloud/account API key with console access, or switch this profile to local Network Integration collection with UNIFI_NETWORK_BASE_URL." 
) - elif label == "network_sites": + elif error.get("status") in {401, 403} and label == "network_sites": guidance.append("Local Network Integration API returned authorization failure. Confirm the key was created in this UniFi Network application and has read access.") - elif label == "site_manager_sites": + elif error.get("status") in {401, 403} and label == "site_manager_sites": guidance.append("Site Manager returned authorization failure. Use a Site Manager/API key from the UniFi account API area, not a local Network Integration key.") + if error.get("status") is None and label == "network_sites": + guidance.append("Local UniFi console could not be reached. Verify VPN/LAN access to UNIFI_NETWORK_BASE_URL or use a cloud/account API key with remote connector access.") return sorted(set(guidance)) From 18aff29d3b2ff98c074e164370a357aa2a31d800 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 23:33:58 -0400 Subject: [PATCH 23/47] Classify unsupported UniFi optional endpoints --- tests/test_unifi_report.py | 58 ++++++++++++++++++++++++++++++++++++++ unifi/collect.py | 32 +++++++++++++++++++-- unifi/report.py | 5 ++++ 3 files changed, 92 insertions(+), 3 deletions(-) diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index add06a3..5d9a566 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -1,6 +1,8 @@ import json from pathlib import Path +from unifi.client import UniFiRequestError +from unifi.collect import _call_list from unifi.report import build_report from unifi.profiles import discover_site_profiles @@ -141,3 +143,59 @@ def test_unifi_report_surfaces_local_connectivity_guidance(tmp_path: Path): assert "Credential / Access Fix" in html assert "Local UniFi console could not be reached" in html assert "UNIFI_NETWORK_BASE_URL" in html + + +def test_unifi_report_lists_optional_unsupported_endpoints(tmp_path: Path): + source = tmp_path / "backup" + source.mkdir() + (source / 
"collection_summary.json").write_text( + json.dumps( + { + "metadata": {"requestedMode": "network", "effectiveMode": "network"}, + "networkApplication": { + "enabled": True, + "errors": [], + "unsupportedEndpoints": [ + { + "label": "Default:vpn_tunnels", + "status": 404, + "path": "/vpn/tunnels", + "note": "This UniFi Network version does not expose VPN tunnel listing.", + } + ], + }, + } + ), + encoding="utf-8", + ) + output = tmp_path / "report" + paths = build_report(str(source), str(output)) + html = Path(paths["html"]).read_text(encoding="utf-8") + + assert "Optional API Coverage Notes" in html + assert "Default:vpn_tunnels" in html + assert "does not expose VPN tunnel listing" in html + + +def test_unifi_collect_treats_optional_404_as_unsupported(): + class MissingEndpointClient: + def paged_get(self, path, *, style): + raise UniFiRequestError("HTTP 404", status=404) + + errors = [] + unsupported = [] + + result = _call_list( + MissingEndpointClient(), + "/vpn/tunnels", + style="offset", + label="Default:vpn_tunnels", + errors=errors, + unsupported=unsupported, + optional_404_note="Not exposed by this controller.", + ) + + assert result == [] + assert errors == [] + assert unsupported[0]["label"] == "Default:vpn_tunnels" + assert unsupported[0]["note"] == "Not exposed by this controller." 
diff --git a/unifi/collect.py b/unifi/collect.py
index edd7331..c628339 100644
--- a/unifi/collect.py
+++ b/unifi/collect.py
@@ -25,6 +25,9 @@
         "note": "Cloud API for high-level host, site, device, ISP, and SD-WAN visibility.",
     },
 ]
+OPTIONAL_404_SITE_ENDPOINTS = {
+    "vpn_tunnels": "This UniFi Network version does not expose VPN tunnel listing through the Network Integration API.",
+}


 def _write_json(path: Path, payload: Any) -> None:
@@ -78,11 +81,24 @@ def _site_matches(site: Dict[str, Any], selector: str) -> bool:
     return wanted in {value.strip().lower() for value in values if value}


-def _call_list(client: UniFiClient, path: str, *, style: str, label: str, errors: List[Dict[str, Any]]) -> List[Any]:
+def _call_list(
+    client: UniFiClient,
+    path: str,
+    *,
+    style: str,
+    label: str,
+    errors: List[Dict[str, Any]],
+    unsupported: List[Dict[str, Any]] | None = None,
+    optional_404_note: str = "",
+) -> List[Any]:
     try:
         return client.paged_get(path, style=style)
     except UniFiRequestError as exc:
-        errors.append({"label": label, "path": path, "status": exc.status, "error": str(exc)})
+        record = {"label": label, "path": path, "status": exc.status, "error": str(exc)}
+        if exc.status == 404 and unsupported is not None and optional_404_note:
+            unsupported.append({**record, "note": optional_404_note})
+        else:
+            errors.append(record)
     except Exception as exc:
         errors.append({"label": label, "path": path, "status": None, "error": str(exc)})
     return []
@@ -169,6 +185,7 @@ def collect_network_application(output: Path, selected_site_id: str = "", consol
         verify_ssl=verify_ssl,
     )
     errors: List[Dict[str, Any]] = []
+    unsupported: List[Dict[str, Any]] = []
     summary: Dict[str, Any] = {
         "enabled": True,
         "baseUrl": client.base_url,
@@ -178,6 +195,7 @@
         "files": {},
         "counts": {},
         "errors": errors,
+        "unsupportedEndpoints": unsupported,
     }

     try:
@@ -222,7 +240,15 @@ def collect_network_application(output: Path, selected_site_id: str = "", consol
         site_summary: Dict[str, Any] = {"id": sid, "name": name, "files": {}, "counts": {}}
         for label, suffix in site_endpoints:
             path = f"{network_prefix}/sites/{sid}/{suffix}"
-            data = _call_list(client, path, style="offset", label=f"{name}:{label}", errors=errors)
+            data = _call_list(
+                client,
+                path,
+                style="offset",
+                label=f"{name}:{label}",
+                errors=errors,
+                unsupported=unsupported,
+                optional_404_note=OPTIONAL_404_SITE_ENDPOINTS.get(label, ""),
+            )
             rel = f"sites/{safe}/{label}.json"
             _write_json(output / rel, data)
             site_summary["files"][label] = rel
diff --git a/unifi/report.py b/unifi/report.py
index 5f1d586..91c765c 100644
--- a/unifi/report.py
+++ b/unifi/report.py
@@ -205,9 +205,14 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     ]
     sections.append(_table(["Item", "Value"], rows))
     errors = list(sm.get("errors") or []) + list(net.get("errors") or [])
+    unsupported = list(sm.get("unsupportedEndpoints") or []) + list(net.get("unsupportedEndpoints") or [])
     error_rows = [[e.get("label", ""), e.get("status", ""), e.get("path", ""), e.get("error", "")[:180]] for e in errors]
     sections.append("<h2>Endpoint Gaps / Errors</h2>")
     sections.append(_table(["Endpoint", "Status", "Path", "Error"], error_rows, "No endpoint errors captured."))
+    unsupported_rows = [[e.get("label", ""), e.get("status", ""), e.get("path", ""), e.get("note", "")] for e in unsupported]
+    if unsupported_rows:
+        sections.append("<h2>Optional API Coverage Notes</h2>")
+        sections.append(_table(["Endpoint", "Status", "Path", "Note"], unsupported_rows))
     auth_guidance = _auth_guidance(sm, net)
     if auth_guidance:
         sections.append("<h2>Credential / Access Fix</h2>")

From 8716f7a9da6e5de0b0a0e22c7b264b461eb07839 Mon Sep 17 00:00:00 2001
From: "techmore.co"
Date: Tue, 5 May 2026 23:37:23 -0400
Subject: [PATCH 24/47] Add UniFi report inventory index

---
 ROADMAP.md                    |  23 ++++--
 tests/test_unifi_inventory.py |  43 ++++++++++
 unifi/README.md               |   3 +-
 unifi/inventory.py            | 150 ++++++++++++++++++++++++++++++----
 4 files changed, 196 insertions(+), 23 deletions(-)
 create mode 100644 tests/test_unifi_inventory.py

diff --git a/ROADMAP.md b/ROADMAP.md
index 98a1f43..c08022a 100644
--- a/ROADMAP.md
+++ b/ROADMAP.md
@@ -100,14 +100,21 @@ This project is currently functional as a Python reporting pipeline. The immedia

 ## Phase 6: UniFi / Ubiquiti Reporting - Started

-- Add a separate `./unifi/run.sh` runner so UniFi work does not regress the
-  Meraki pipeline.
-- Support both official Site Manager API collection and local UniFi Network
-  Application Integration API collection.
-- Save raw UniFi JSON backups separately under `unifi/backups/`.
-- Generate a first-pass UniFi baseline report under `unifi/reports/`.
-- Treat local Network Application endpoint gaps as reportable coverage findings
-  while we learn the exact controller version and API surface.
+- ~~Add a separate `./unifi/run.sh` runner so UniFi work does not regress the
+  Meraki pipeline.~~
+- ~~Support both official Site Manager API collection and local UniFi Network
+  Application Integration API collection.~~
+- ~~Save raw UniFi JSON backups separately under `unifi/backups/`.~~
+- ~~Generate a first-pass UniFi baseline report under `unifi/reports/`.~~
+- ~~Treat local Network Application endpoint gaps as reportable coverage
+  findings while we learn the exact controller version and API surface.~~
+- ~~Add saved site profiles in `unifi/.env` and `./unifi/run.sh --all-sites`
+  for multi-site runs.~~
+- ~~Write UniFi report inventory data and a static `index.html` for generated
+  outputs.~~
+- Improve UniFi executive summary language once more live sites are captured.
+- Add deeper UniFi switch/AP port and radio telemetry when the controller API
+  exposes it.

 ## Release Checklist

diff --git a/tests/test_unifi_inventory.py b/tests/test_unifi_inventory.py
new file mode 100644
index 0000000..7c60885
--- /dev/null
+++ b/tests/test_unifi_inventory.py
@@ -0,0 +1,43 @@
+import json
+from pathlib import Path
+
+from unifi import inventory
+
+
+def test_unifi_inventory_requires_pdf_and_writes_index(tmp_path: Path):
+    backups = tmp_path / "backups"
+    reports = tmp_path / "reports"
+    backups.mkdir()
+    reports.mkdir()
+    (backups / "collection_summary.json").write_text("{}", encoding="utf-8")
+    (reports / "report.pdf").write_bytes(b"%PDF-1.4\n")
+
+    assert inventory.main(["--backups-dir", str(backups), "--reports-dir", str(reports)]) == 0
+
+    manifest = json.loads((reports / "report_inventory.json").read_text(encoding="utf-8"))
+    index = (reports / "index.html").read_text(encoding="utf-8")
+
+    assert manifest["ok"] is True
+    assert {item["label"]: item["ok"] for item in manifest["items"]}["report_pdf"] is True
+    assert {item["label"]: item["required"] for item in manifest["items"]}["report_html"] is False
+    assert "TM UniFi Report Inventory" in index
+    assert "report.pdf" in index
+    assert "collection_summary.json" in index
+
+
+def test_unifi_inventory_fails_missing_pdf(tmp_path: Path):
+    backups = tmp_path / "backups"
+    reports = tmp_path / "reports"
+    backups.mkdir()
+    reports.mkdir()
+    (backups / "collection_summary.json").write_text("{}", encoding="utf-8")
+
+    assert inventory.main(["--backups-dir", str(backups), "--reports-dir", str(reports)]) == 1
+
+    manifest = json.loads((reports / "report_inventory.json").read_text(encoding="utf-8"))
+    items = {item["label"]: item for item in manifest["items"]}
+
+    assert manifest["ok"] is False
+    assert items["report_pdf"]["required"] is True
+    assert items["report_pdf"]["ok"] is False
+    assert (reports / "index.html").exists()
diff --git a/unifi/README.md b/unifi/README.md
index a8c3bb1..0acc4ca 100644
--- a/unifi/README.md
+++ b/unifi/README.md
@@ -67,7 +67,8 @@ available on a given controller.

 Outputs are written to:

 - `unifi/backups/latest/` for raw JSON backups
-- `unifi/reports/latest/` for `report.pdf`, `report.html`, and inventory data
+- `unifi/reports/latest/` for `report.pdf`, optional `report.html`,
+  `report_inventory.json`, and `index.html`

 When `--all-sites` is used, outputs are separated by saved profile:

diff --git a/unifi/inventory.py b/unifi/inventory.py
index cc2ceac..e023b8e 100644
--- a/unifi/inventory.py
+++ b/unifi/inventory.py
@@ -1,6 +1,9 @@
 #!/usr/bin/env python3
 import argparse
+import html
 import json
+import os
+from datetime import datetime, timezone
 from pathlib import Path
 from typing import Dict, List

@@ -8,38 +11,157 @@
 ROOT = Path(__file__).resolve().parents[1]


-def main() -> int:
-    parser = argparse.ArgumentParser(description="Validate UniFi report outputs.")
-    parser.add_argument("--reports-dir", default=str(ROOT / "unifi" / "reports" / "latest"))
-    parser.add_argument("--backups-dir", default=str(ROOT / "unifi" / "backups" / "latest"))
-    args = parser.parse_args()
+def _size(path: Path) -> int:
+    return path.stat().st_size if path.exists() else 0

-    reports = Path(args.reports_dir)
-    backups = Path(args.backups_dir)
+
+def _fmt_size(size: int) -> str:
+    if size >= 1024 * 1024:
+        return f"{size / (1024 * 1024):.1f} MB"
+    if size >= 1024:
+        return f"{size / 1024:.1f} KB"
+    return f"{size} B"
+
+
+def _relative_href(path: Path, base: Path) -> str:
+    try:
+        rel = os.path.relpath(path.resolve(), base.resolve())
+    except OSError:
+        rel = str(path)
+    return html.escape(Path(rel).as_posix(), quote=True)
+
+
+def build_manifest(backups: Path, reports: Path) -> Dict[str, object]:
     checks = [
         ("collection_summary", backups / "collection_summary.json", True),
+        ("report_pdf", reports / "report.pdf", True),
         ("report_html", reports / "report.html", False),
-        ("report_pdf", reports / "report.pdf", False),
     ]
     items: List[Dict[str, object]] = []
     failed = False
     for label, path, required in checks:
         exists = path.exists()
-        size = path.stat().st_size if exists else 0
+        size = _size(path)
         ok = exists and size > 0
         if required and not ok:
             failed = True
-        items.append({"label": label, "path": str(path), "exists": exists, "size": size, "required": required, "ok": ok})
+        items.append(
+            {
+                "label": label,
+                "path": str(path),
+                "exists": exists,
+                "size": size,
+                "required": required,
+                "ok": ok,
+            }
+        )
+
+    return {
+        "generatedAt": datetime.now(timezone.utc).isoformat(),
+        "backupsDir": str(backups),
+        "reportsDir": str(reports),
+        "items": items,
+        "ok": not failed,
+    }
+

-    manifest = {"items": items, "ok": not failed}
+def write_index(manifest: Dict[str, object], reports: Path) -> Path:
+    items = [item for item in manifest.get("items", []) if isinstance(item, dict)]
+    rows = []
+    for item in items:
+        path = Path(str(item.get("path") or ""))
+        ok = bool(item.get("ok"))
+        exists = bool(item.get("exists"))
+        required = bool(item.get("required"))
+        status = "OK" if ok else ("Missing" if required else "Optional")
+        status_class = "ok" if ok else ("missing" if required else "optional")
+        label = html.escape(str(item.get("label") or ""))
+        size = _fmt_size(int(item.get("size") or 0)) if exists else "-"
+        if exists:
+            link = f'<a href="{_relative_href(path, reports)}">{html.escape(path.name)}</a>'
+        else:
+            link = html.escape(path.name)
+        rows.append(
+            "<tr>"
+            f"<td>{label}</td>"
+            f'<td class="{status_class}">{html.escape(status)}</td>'
+            f"<td>{link}</td>"
+            f"<td>{html.escape(size)}</td>"
+            "</tr>"
+        )
+
+    status_text = "OK" if manifest.get("ok") else "Missing required output"
+    generated = html.escape(str(manifest.get("generatedAt") or ""))
+    manifest_link = '<a href="report_inventory.json">report_inventory.json</a>'
+    body = f"""<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="utf-8">
+<title>TM UniFi Report Inventory</title>
+</head>
+<body>
+<h1>TM UniFi Report Inventory</h1>
+<p>Status: {html.escape(status_text)} | Generated: {generated} | Manifest: {manifest_link}</p>
+<p>Generated UniFi backup and report deliverables for this run.</p>
+<table>
+<tr><th>Deliverable</th><th>Status</th><th>File</th><th>Size</th></tr>
+{''.join(rows)}
+</table>
+</body>
+</html>
+"""
+    target = reports / "index.html"
+    target.write_text(body, encoding="utf-8")
+    return target
+
+
+def main(argv: List[str] | None = None) -> int:
+    parser = argparse.ArgumentParser(description="Validate UniFi report outputs.")
+    parser.add_argument("--reports-dir", default=str(ROOT / "unifi" / "reports" / "latest"))
+    parser.add_argument("--backups-dir", default=str(ROOT / "unifi" / "backups" / "latest"))
+    args = parser.parse_args(argv)
+
+    reports = Path(args.reports_dir)
+    backups = Path(args.backups_dir)
     reports.mkdir(parents=True, exist_ok=True)
+    manifest = build_manifest(backups, reports)
     (reports / "report_inventory.json").write_text(json.dumps(manifest, indent=2), encoding="utf-8")
-    for item in items:
+    index_path = write_index(manifest, reports)
+    for item in manifest["items"]:
         status = "OK" if item["ok"] else ("MISS" if item["required"] else "optional")
         print(f"{status} {item['label']}: {item['path']}")
-    return 1 if failed else 0
+    print(f"Index: {index_path}")
+    return 0 if manifest["ok"] else 1


 if __name__ == "__main__":
     raise SystemExit(main())
-

From cfc42b34732a8311053a03343e965eb3f67b8b2c Mon Sep 17 00:00:00 2001
From: "techmore.co"
Date: Tue, 5 May 2026 23:44:02 -0400
Subject: [PATCH 25/47] Enhance UniFi executive summary

---
 ROADMAP.md                 |   2 +-
 tests/test_unifi_report.py |  22 +++-
 unifi/report.py            | 238 ++++++++++++++++++++++++++++++---
 3 files changed, 239 insertions(+), 23 deletions(-)

diff --git a/ROADMAP.md b/ROADMAP.md
index c08022a..49db3e0 100644
--- a/ROADMAP.md
+++ b/ROADMAP.md
@@ -112,7 +112,7 @@ This project is currently functional as a Python reporting pipeline. The immedia
   for multi-site runs.~~
 - ~~Write UniFi report inventory data and a static `index.html` for generated
   outputs.~~
-- Improve UniFi executive summary language once more live sites are captured.
+- ~~Improve UniFi executive summary language once more live sites are captured.~~
 - Add deeper UniFi switch/AP port and radio telemetry when the controller API
   exposes it.

diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py
index 5d9a566..6c84573 100644
--- a/tests/test_unifi_report.py
+++ b/tests/test_unifi_report.py
@@ -32,7 +32,9 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
                     "networks": "sites/Main/networks.json",
                     "wifi": "sites/Main/wifi.json",
                     "firewall_zones": "sites/Main/firewall_zones.json",
+                    "firewall_policies": "sites/Main/firewall_policies.json",
                 },
+                "counts": {"devices": 3, "clients": 1, "networks": 1, "wifi": 1, "firewall_zones": 1, "firewall_policies": 1},
             }
         ]
     ),
@@ -41,15 +43,21 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     (site_dir / "devices.json").write_text(
         json.dumps(
             [
-                {"name": "U7-Pro-1", "model": "U7-Pro", "type": "access point", "state": "ONLINE", "ipAddress": "10.1.1.10"},
+                {"id": "ap-1", "name": "U7-Pro-1", "model": "U7-Pro", "type": "access point", "state": "ONLINE", "ipAddress": "10.1.1.10"},
                 {"name": "IW HD", "model": "IW HD", "features": ["switching", "accessPoint"], "state": "ONLINE", "ipAddress": "10.1.1.11"},
                 {"name": "USW-48", "model": "USW-Pro-48-PoE", "type": "switch", "state": "ONLINE", "ipAddress": "10.1.1.20"},
             ]
         ),
         encoding="utf-8",
     )
-    (site_dir / "clients.json").write_text(json.dumps([{"hostname": "client-1", "ipAddress": "10.10.0.50"}]), encoding="utf-8")
-    (site_dir / "networks.json").write_text(json.dumps([{"name": "Staff", "vlanId": 100, "subnet": "10.100.0.0/16", "dhcpMode": "server"}]), encoding="utf-8")
+    (site_dir / "clients.json").write_text(
+        json.dumps([{"hostname": "client-1", "type": "WIRELESS", "ipAddress": "10.10.0.50", "uplinkDeviceId": "ap-1", "access": {"type": "DEFAULT"}}]),
+        encoding="utf-8",
+    )
+    (site_dir / "networks.json").write_text(
+        json.dumps([{"name": "Staff", "vlanId": 100, "subnet": "10.100.0.0/16", "dhcpMode": "server", "zoneId": "zone-1", "metadata": {"origin": "USER_DEFINED"}}]),
+        encoding="utf-8",
+    )
     (site_dir / "wifi.json").write_text(
         json.dumps(
             [
@@ -65,6 +73,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
         encoding="utf-8",
     )
     (site_dir / "firewall_zones.json").write_text(json.dumps([{"name": "Internal", "id": "zone-1"}]), encoding="utf-8")
+    (site_dir / "firewall_policies.json").write_text(json.dumps([{"name": "Allow Staff", "enabled": True, "action": {"type": "ALLOW"}}]), encoding="utf-8")

     output = tmp_path / "report"
     paths = build_report(str(source), str(output))
@@ -78,6 +87,13 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     assert "WPA3" in html
     assert "NATIVE" in html
     assert "Firewall Zones" in html
+    assert "Recommended Follow-Up" in html
+    assert "By Model" in html
+    assert "Client Load by Uplink" in html
+    assert "Firewall Policy Summary" in html
+    assert "Internal" in html
+    assert "U7-Pro-1 (U7-Pro)" in html
+    assert "Firmware" in html


 def test_unifi_profiles_discovers_numbered_site_profiles(monkeypatch):
diff --git a/unifi/report.py b/unifi/report.py
index 91c765c..06ed21f 100644
--- a/unifi/report.py
+++ b/unifi/report.py
@@ -45,6 +45,18 @@ def _nested(item: Dict[str, Any], path: Iterable[str], default: str = "") -> str
     return str(cur) if cur not in (None, "") else default


+def _as_bool(value: Any) -> bool:
+    if isinstance(value, bool):
+        return value
+    if isinstance(value, str):
+        return value.strip().lower() in {"1", "true", "yes", "on"}
+    return bool(value)
+
+
+def _yes_no(value: Any) -> str:
+    return "yes" if _as_bool(value) else "no"
+
+
 def _device_role(device: Dict[str, Any]) -> str:
     features = {str(feature).lower() for feature in device.get("features", []) if feature}
     raw = " ".join(str(device.get(k, "")) for k in ("type", "model", "modelName", "name", "displayName")).lower()
@@ -57,10 +69,23 @@
     return _first(device, ("type", "productLine", "category"), "Device")


+def _device_name(device: Dict[str, Any]) -> str:
+    return _first(device, ("name", "displayName", "hostname", "id"), _nested(device, ("meta", "name"), "Unknown device"))
+
+
+def _device_model(device: Dict[str, Any]) -> str:
+    uidb = device.get("uidb") if isinstance(device.get("uidb"), dict) else {}
+    return _first(device, ("model", "modelName"), _first(uidb, ("model", "name"), "Unknown model"))
+
+
 def _status(device: Dict[str, Any]) -> str:
     return _first(device, ("state", "status", "connectionState", "adoptionState"), "unknown")


+def _is_online(device: Dict[str, Any]) -> bool:
+    return _status(device).strip().lower() in {"online", "connected", "active", "up"}
+
+
 def _count_by(items: Iterable[Dict[str, Any]], fn) -> Dict[str, int]:
     counts: Dict[str, int] = {}
     for item in items:
@@ -69,6 +94,23 @@
     return dict(sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])))


+def _fmt_counts(counts: Dict[str, int]) -> str:
+    return ", ".join(f"{key}: {value}" for key, value in counts.items()) if counts else "none"
+
+
+def _plural(count: int, singular: str, plural: str | None = None) -> str:
+    word = singular if count == 1 else (plural or f"{singular}s")
+    return f"{count} {word}"
+
+
+def _model_rows(devices: Iterable[Dict[str, Any]]) -> List[List[Any]]:
+    counts: Dict[tuple[str, str], int] = {}
+    for device in devices:
+        key = (_device_model(device), _device_role(device))
+        counts[key] = counts.get(key, 0) + 1
+    return [[model, role, count] for (model, role), count in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0][0], kv[0][1]))]
+
+
 def _table(headers: List[str], rows: List[List[Any]], empty: str = "No data captured.") -> str:
     if not rows:
         return f"<p>{html.escape(empty)}</p>"
@@ -133,6 +175,35 @@ def _wifi_band_label(wlan: Dict[str, Any]) -> str:
     return _first(wlan, ("band", "apGroupIds"))


+def _access_label(client: Dict[str, Any]) -> str:
+    access = client.get("access")
+    if isinstance(access, dict):
+        return str(access.get("type") or "")
+    return str(access or "")
+
+
+def _build_device_name_map(devices: Iterable[Dict[str, Any]]) -> Dict[str, str]:
+    names: Dict[str, str] = {}
+    for device in devices:
+        label = f"{_device_name(device)} ({_device_model(device)})"
+        for key in ("id", "macAddress", "mac"):
+            value = device.get(key)
+            if value:
+                names[str(value)] = label
+    return names
+
+
+def _client_uplink_label(client: Dict[str, Any], device_names: Dict[str, str]) -> str:
+    uplink = _first(client, ("uplinkDeviceId", "uplinkDeviceMac", "uplinkDeviceName"))
+    return device_names.get(uplink, uplink)
+
+
+def _surface_state(surface: Dict[str, Any]) -> str:
+    if surface.get("enabled"):
+        return "enabled"
+    return f"not used: {surface.get('reason') or 'not configured'}"
+
+
 def _auth_guidance(sm: Dict[str, Any], net: Dict[str, Any]) -> List[str]:
     guidance: List[str] = []
     for error in list(sm.get("errors") or []) + list(net.get("errors") or []):
@@ -152,6 +223,61 @@
     return sorted(set(guidance))


+def _executive_followups(
+    *,
+    all_devices: List[Dict[str, Any]],
+    all_clients: List[Dict[str, Any]],
+    site_summaries: List[Dict[str, Any]],
+    errors: List[Dict[str, Any]],
+    unsupported: List[Dict[str, Any]],
+    role_counts: Dict[str, int],
+    client_counts: Dict[str, int],
+    firewall_policy_count: int,
+    enabled_firewall_policy_count: int,
+    network_count: int,
+    wifi_count: int,
+) -> List[str]:
+    followups: List[str] = []
+    offline = [_device_name(device) for device in all_devices if not _is_online(device)]
+    updatable = [_device_name(device) for device in all_devices if _as_bool(device.get("firmwareUpdatable"))]
+
+    if offline:
+        followups.append(f"Validate offline inventory before migration planning: {', '.join(offline[:6])}.")
+    else:
+        followups.append("All captured UniFi devices report online in the latest backup.")
+
+    if updatable:
+        followups.append(f"Review available firmware updates for: {', '.join(updatable[:6])}.")
+    else:
+        followups.append("No captured UniFi devices are currently flagged as firmware-updatable by the controller.")
+
+    if role_counts.get("Access Point", 0) and all_clients:
+        followups.append(f"Wireless client load is visible in this backup ({_fmt_counts(client_counts)}), giving an initial input for AP replacement and capacity planning.")
+    elif role_counts.get("Access Point", 0):
+        followups.append("AP inventory is captured, but client detail is missing; confirm client endpoint access before using the report for wireless capacity planning.")
+
+    if network_count:
+        followups.append(f"Network backup includes {_plural(network_count, 'VLAN/network definition')} and {_plural(wifi_count, 'WiFi broadcast definition')}.")
+    else:
+        followups.append("No VLAN/network endpoint data was captured; validate Network Application API permissions.")
+
+    if firewall_policy_count:
+        followups.append(f"Firewall backup includes {enabled_firewall_policy_count} enabled policies out of {_plural(firewall_policy_count, 'captured policy', 'captured policies')}.")
+    else:
+        followups.append("No firewall policies were captured; validate security policy endpoint access before treating this as a disaster-recovery backup.")
+
+    if errors:
+        followups.append(f"Resolve {_plural(len(errors), 'collection error')} listed in Collection Coverage.")
+    if unsupported:
+        if len(unsupported) == 1:
+            followups.append("1 optional endpoint is not exposed by this controller version; it is documented as a coverage note.")
+        else:
+            followups.append(f"{len(unsupported)} optional endpoints are not exposed by this controller version; they are documented as coverage notes.")
+    if not site_summaries:
+        followups.append("Only cloud-level data was captured; use local Network Application credentials for site-scoped configuration backup.")
+    return followups
+
+
 def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     source = Path(source_dir)
     output = Path(output_dir)
@@ -176,6 +302,18 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     role_counts = _count_by(all_devices, _device_role)
     status_counts = _count_by(all_devices, _status)
+    client_counts = _count_by(all_clients, lambda client: _first(client, ("type", "connectionType"), "Unknown"))
+    all_site_counts = [site.get("counts") for site in site_summaries if isinstance(site.get("counts"), dict)]
+    network_count = sum(int(counts.get("networks") or 0) for counts in all_site_counts)
+    wifi_count = sum(int(counts.get("wifi") or 0) for counts in all_site_counts)
+    firewall_zone_count = sum(int(counts.get("firewall_zones") or 0) for counts in all_site_counts)
+    firewall_policy_count = sum(int(counts.get("firewall_policies") or 0) for counts in all_site_counts)
+    enabled_firewall_policy_count = 0
+    for site in site_summaries:
+        enabled_firewall_policy_count += sum(1 for policy in _read_site_file(source, site, "firewall_policies") if _as_bool(policy.get("enabled")))
+    errors = list(sm.get("errors") or []) + list(net.get("errors") or [])
+    unsupported = list(sm.get("unsupportedEndpoints") or []) + list(net.get("unsupportedEndpoints") or [])
+    device_names = _build_device_name_map(all_devices)
     cards = [
         ("Sites", len(site_summaries) or len(sm_sites)),
         ("Devices", len(all_devices)),
@@ -183,29 +321,70 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
         ("Switches", role_counts.get("Switch", 0)),
         ("APs", role_counts.get("Access Point", 0)),
         ("Gateways", role_counts.get("Gateway", 0)),
+        ("Networks", network_count),
+        ("WiFi", wifi_count),
+        ("Firewall Policies", firewall_policy_count),
     ]

     sections: List[str] = []
     sections.append("<h2>Executive Summary</h2>")
     sections.append(_summary_cards(cards))
-    guidance = [
-        "This first UniFi report is intentionally coverage-oriented: it proves API access, preserves raw JSON backups, and surfaces what the controller exposes for inventory, clients, networks, WiFi, and security policy.",
-        "If local Network Application credentials are available, this report should become the primary disaster-recovery and migration source because it captures site-scoped configuration instead of only cloud-level status.",
-        "Endpoint failures are listed explicitly so we can refine the collector against the exact UniFi Network version without losing the data that was available.",
+    site_rows = []
+    for site in site_summaries:
+        counts = site.get("counts") if isinstance(site.get("counts"), dict) else {}
+        site_name = str(site.get("name") or site.get("id") or "Site")
+        site_coverage_notes = sum(1 for item in unsupported if str(item.get("label") or "").startswith(f"{site_name}:"))
+        if not site_coverage_notes and len(site_summaries) == 1:
+            site_coverage_notes = len(unsupported)
+        site_rows.append(
+            [
+                site_name,
+                counts.get("devices", 0),
+                counts.get("clients", 0),
+                counts.get("networks", 0),
+                counts.get("wifi", 0),
+                counts.get("firewall_policies", 0),
+                site_coverage_notes,
+            ]
+        )
+    sections.append("<h3>Site Capture Summary</h3>")
+    sections.append(_table(["Site", "Devices", "Clients", "Networks", "WiFi", "Firewall Policies", "Coverage Notes"], site_rows, "No local site detail captured."))
+    summary_rows = [
+        ["Inventory", f"{len(all_devices)} devices captured ({_fmt_counts(role_counts)})."],
+        ["Clients", f"{len(all_clients)} clients captured ({_fmt_counts(client_counts)})."],
+        [
+            "Configuration backup",
+            f"{_plural(network_count, 'network/VLAN', 'networks/VLANs')}, {_plural(wifi_count, 'WiFi broadcast')}, {_plural(firewall_zone_count, 'firewall zone')}, and {_plural(firewall_policy_count, 'firewall policy', 'firewall policies')} captured.",
+        ],
+        ["Collection coverage", f"{_plural(len(errors), 'hard endpoint error')}; {_plural(len(unsupported), 'optional endpoint coverage note')}."],
     ]
-    sections.append("<ul>" + "".join(f"<li>{html.escape(x)}</li>" for x in guidance) + "</ul>")
+    sections.append("<h3>What This Run Captured</h3>")
+    sections.append(_table(["Area", "Summary"], summary_rows))
+    followups = _executive_followups(
+        all_devices=all_devices,
+        all_clients=all_clients,
+        site_summaries=site_summaries,
+        errors=errors,
+        unsupported=unsupported,
+        role_counts=role_counts,
+        client_counts=client_counts,
+        firewall_policy_count=firewall_policy_count,
+        enabled_firewall_policy_count=enabled_firewall_policy_count,
+        network_count=network_count,
+        wifi_count=wifi_count,
+    )
+    sections.append("<h3>Recommended Follow-Up</h3>")
+    sections.append("<ul>" + "".join(f"<li>{html.escape(item)}</li>" for item in followups) + "</ul>")

     sections.append("<h2>Collection Coverage</h2>")
     rows = [
         ["Requested mode", metadata.get("requestedMode", "")],
         ["Effective mode", metadata.get("effectiveMode", "")],
         ["Collected at", metadata.get("collectedAt", "")],
-        ["Site Manager", "enabled" if sm.get("enabled") else f"not used: {sm.get('reason', '')}"],
-        ["Network Application", "enabled" if net.get("enabled") else f"not used: {net.get('reason', '')}"],
+        ["Site Manager", _surface_state(sm)],
+        ["Network Application", _surface_state(net)],
     ]
     sections.append(_table(["Item", "Value"], rows))
-    errors = list(sm.get("errors") or []) + list(net.get("errors") or [])
-    unsupported = list(sm.get("unsupportedEndpoints") or []) + list(net.get("unsupportedEndpoints") or [])
     error_rows = [[e.get("label", ""), e.get("status", ""), e.get("path", ""), e.get("error", "")[:180]] for e in errors]
     sections.append("<h2>Endpoint Gaps / Errors</h2>")
     sections.append(_table(["Endpoint", "Status", "Path", "Error"], error_rows, "No endpoint errors captured."))
@@ -224,36 +403,42 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     status_rows = [[k, v] for k, v in status_counts.items()]
     sections.append("<div><h3>By Role</h3>" + _table(["Role", "Count"], role_rows) + "</div>")
     sections.append("<div><h3>By Status</h3>" + _table(["Status", "Count"], status_rows) + "</div>")
+    sections.append("<h3>By Model</h3>")
+    sections.append(_table(["Model", "Role", "Count"], _model_rows(all_devices), "No device model data captured."))
     device_rows = []
     for dev in all_devices[:300]:
-        uidb = dev.get("uidb") if isinstance(dev.get("uidb"), dict) else {}
         device_rows.append([
-            _first(dev, ("name", "displayName", "hostname"), _nested(dev, ("meta", "name"), "")),
+            _device_name(dev),
             _device_role(dev),
-            _first(dev, ("model", "modelName"), _first(uidb, ("model", "name"), "")),
+            _device_model(dev),
             _status(dev),
+            _yes_no(dev.get("firmwareUpdatable")),
             _first(dev, ("ipAddress", "ip", "lastIp"), ""),
             _first(dev, ("macAddress", "mac", "id"), ""),
             _first(dev, ("version", "firmwareVersion"), ""),
         ])
-    sections.append(_table(["Name", "Role", "Model", "Status", "IP", "MAC / ID", "Firmware"], device_rows))
+    sections.append(_table(["Name", "Role", "Model", "Status", "Update", "IP", "MAC / ID", "Firmware"], device_rows))
     sections.append("</div>")

     sections.append("<h2>Sites, Networks, VLANs, and DHCP</h2>")
     for site in site_summaries:
         sections.append(f"<h3>{html.escape(str(site.get('name') or site.get('id') or 'Site'))}</h3>")
         networks = _read_site_file(source, site, "networks")
+        zones = _read_site_file(source, site, "firewall_zones")
+        zone_names = {str(zone.get("id")): str(zone.get("name") or zone.get("id")) for zone in zones if zone.get("id")}
         rows = []
         for netw in networks:
+            metadata_payload = netw.get("metadata") if isinstance(netw.get("metadata"), dict) else {}
             rows.append([
                 _first(netw, ("name", "displayName")),
                 _first(netw, ("vlanId", "vlan", "vlan_id")),
-                _first(netw, ("enabled",)),
-                _first(netw, ("default",)),
+                _yes_no(netw.get("enabled")),
+                _yes_no(netw.get("default")),
                 _first(netw, ("management",)),
-                _first(netw, ("zoneId",)),
+                zone_names.get(str(netw.get("zoneId") or ""), _first(netw, ("zoneId",))),
+                _first(metadata_payload, ("origin",)),
             ])
-        sections.append(_table(["Network", "VLAN", "Enabled", "Default", "Management", "Zone ID"], rows, "No network/VLAN endpoint data captured for this site."))
+        sections.append(_table(["Network", "VLAN", "Enabled", "Default", "Management", "Zone", "Origin"], rows, "No network/VLAN endpoint data captured for this site."))
     if not site_summaries:
         sections.append("<p>No local Network Application site detail captured yet.</p>")
     sections.append("</div>")
@@ -272,6 +457,10 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
         ])
         sections.append(f"<h3>{html.escape(str(site.get('name') or 'Site'))}</h3>")
         sections.append(_table(["SSID", "Enabled", "Security", "Network / VLAN", "Band / AP Groups"], rows, "No WiFi endpoint data captured for this site."))
+    uplink_rows = [[uplink, count] for uplink, count in _count_by(all_clients, lambda client: _client_uplink_label(client, device_names) or "Unknown").items()]
+    if uplink_rows:
+        sections.append("<h3>Client Load by Uplink</h3>")
+        sections.append(_table(["Uplink Device", "Clients"], uplink_rows))
     client_rows = []
     for client in all_clients[:300]:
         client_rows.append([
@@ -279,11 +468,12 @@
             _first(client, ("type", "connectionType")),
             _first(client, ("ipAddress", "ip")),
             _first(client, ("macAddress", "mac", "id")),
-            _first(client, ("networkName", "vlanId", "networkId")),
+            _client_uplink_label(client, device_names),
+            _access_label(client),
             _first(client, ("connectedAt", "lastSeen")),
         ])
     sections.append("<h3>Connected Clients</h3>")
-    sections.append(_table(["Name", "Type", "IP", "MAC / ID", "Network / VLAN", "Seen"], client_rows, "No client detail captured."))
+    sections.append(_table(["Name", "Type", "IP", "MAC / ID", "Uplink Device", "Access", "Seen"], client_rows, "No client detail captured."))
     sections.append("</div>")

     sections.append("<h2>Firewall and Policy Backup</h2>")
@@ -291,6 +481,16 @@
         sections.append(f"<h3>{html.escape(str(site.get('name') or 'Site'))}</h3>")
         zones = _read_site_file(source, site, "firewall_zones")
         zone_names = {str(zone.get("id")): str(zone.get("name") or zone.get("id")) for zone in zones if zone.get("id")}
+        policies = _read_site_file(source, site, "firewall_policies")
+        if policies:
+            policy_action_rows = [[action, count] for action, count in _count_by(policies, _action_label).items()]
+            policy_enabled_rows = [
+                ["Enabled", sum(1 for policy in policies if _as_bool(policy.get("enabled")))],
+                ["Disabled", sum(1 for policy in policies if not _as_bool(policy.get("enabled")))],
+            ]
+            sections.append("<h4>Firewall Policy Summary</h4>")
+            sections.append("<div>" + _table(["Action", "Count"], policy_action_rows) + "</div>")
+            sections.append("<div>" + _table(["State", "Count"], policy_enabled_rows) + "</div>")
         for key, label in (
             ("firewall_zones", "Firewall Zones"),
             ("firewall_policies", "Firewall Policies"),

From a9af45c5213dc6d11650186886e63aec5e90cdd7 Mon Sep 17 00:00:00 2001
From: "techmore.co"
Date: Tue, 5 May 2026 23:46:39 -0400
Subject: [PATCH 26/47] Document UniFi interface telemetry coverage

---
 ROADMAP.md                 |  2 ++
 tests/test_unifi_report.py | 14 ++++++++++----
 unifi/report.py            | 39 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/ROADMAP.md b/ROADMAP.md
index 49db3e0..575f974 100644
--- a/ROADMAP.md
+++ b/ROADMAP.md
@@ -113,6 +113,8 @@ This project is currently functional as a Python reporting pipeline. The immedia
 - ~~Write UniFi report inventory data and a static `index.html` for generated
   outputs.~~
 - ~~Improve UniFi executive summary language once more live sites are captured.~~
+- ~~Document UniFi interface telemetry coverage so reports distinguish advertised
+  port/radio capability flags from detailed per-port/per-radio metrics.~~
 - Add deeper UniFi switch/AP port and radio telemetry when the controller API
   exposes it.
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py
index 6c84573..0cb429f 100644
--- a/tests/test_unifi_report.py
+++ b/tests/test_unifi_report.py
@@ -15,11 +15,12 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
         json.dumps(
             {
                 "metadata": {"requestedMode": "network", "effectiveMode": "network", "collectedAt": "2026-05-05T12:00:00"},
-                "networkApplication": {"enabled": True, "files": {"site_summaries": "network_site_summaries.json"}, "errors": []},
+                "networkApplication": {"enabled": True, "files": {"site_summaries": "network_site_summaries.json", "info": "network_info.json"}, "errors": []},
             }
         ),
         encoding="utf-8",
     )
+    (source / "network_info.json").write_text(json.dumps({"applicationVersion": "10.3.58"}), encoding="utf-8")
     (source / "network_site_summaries.json").write_text(
         json.dumps(
             [
@@ -43,9 +44,9 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     (site_dir / "devices.json").write_text(
         json.dumps(
             [
-                {"id": "ap-1", "name": "U7-Pro-1", "model": "U7-Pro", "type": "access point", "state": "ONLINE", "ipAddress": "10.1.1.10"},
-                {"name": "IW HD", "model": "IW HD", "features": ["switching", "accessPoint"], "state": "ONLINE", "ipAddress": "10.1.1.11"},
-                {"name": "USW-48", "model": "USW-Pro-48-PoE", "type": "switch", "state": "ONLINE", "ipAddress": "10.1.1.20"},
+                {"id": "ap-1", "name": "U7-Pro-1", "model": "U7-Pro", "type": "access point", "state": "ONLINE", "ipAddress": "10.1.1.10", "interfaces": ["ports", "radios"], "features": ["accessPoint"]},
+                {"name": "IW HD", "model": "IW HD", "features": ["switching", "accessPoint"], "interfaces": ["ports", "radios"], "state": "ONLINE", "ipAddress": "10.1.1.11"},
+                {"name": "USW-48", "model": "USW-Pro-48-PoE", "type": "switch", "state": "ONLINE", "ipAddress": "10.1.1.20", "interfaces": ["ports"], "features": ["switching"]},
             ]
         ),
         encoding="utf-8",
@@ -94,6 +95,11 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     assert "Internal" in html
     assert "U7-Pro-1 (U7-Pro)" in html
     assert "Firmware" in html
+    assert "Network Application version" in html
+    assert "10.3.58" in html
+    assert "Interface Telemetry Coverage" in html
+    assert "ports, radios" in html
+    assert "capability flag only" in html
 
 
 def test_unifi_profiles_discovers_numbered_site_profiles(monkeypatch):
diff --git a/unifi/report.py b/unifi/report.py
index 06ed21f..0253e46 100644
--- a/unifi/report.py
+++ b/unifi/report.py
@@ -111,6 +111,36 @@ def _model_rows(devices: Iterable[Dict[str, Any]]) -> List[List[Any]]:
     return [[model, role, count] for (model, role), count in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0][0], kv[0][1]))]
 
 
+def _string_list(value: Any) -> List[str]:
+    if isinstance(value, list):
+        return [str(item) for item in value if item not in (None, "")]
+    if value not in (None, ""):
+        return [str(value)]
+    return []
+
+
+def _join_list(value: Any) -> str:
+    return ", ".join(_string_list(value))
+
+
+def _interface_summary_rows(devices: Iterable[Dict[str, Any]]) -> List[List[Any]]:
+    counts: Dict[str, int] = {}
+    for device in devices:
+        for interface in set(_string_list(device.get("interfaces"))):
+            counts[interface] = counts.get(interface, 0) + 1
+    return [[name, count] for name, count in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))]
+
+
+def _interface_device_rows(devices: Iterable[Dict[str, Any]]) -> List[List[Any]]:
+    rows: List[List[Any]] = []
+    for device in devices:
+        interfaces = _join_list(device.get("interfaces"))
+        features = _join_list(device.get("features"))
+        detail = "capability flag only" if interfaces else "not advertised"
+        rows.append([_device_name(device), _device_model(device), features, interfaces, detail])
+    return rows
+
+
 def _table(headers: List[str], rows: List[List[Any]], empty: str = "No data captured.") -> str:
     if not rows:
         return f"<p>{html.escape(empty)}</p>"
@@ -287,6 +317,9 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     sm = summary.get("siteManager") if isinstance(summary.get("siteManager"), dict) else {}
     net = summary.get("networkApplication") if isinstance(summary.get("networkApplication"), dict) else {}
     metadata = summary.get("metadata") if isinstance(summary.get("metadata"), dict) else {}
+    network_info = _load_json(source / str((net.get("files") or {}).get("info", "network_info.json")), {})
+    if not isinstance(network_info, dict):
+        network_info = {}
     sm_sites = _items(_load_json(source / str((sm.get("files") or {}).get("sites", "")), [])) if sm.get("files") else []
     sm_devices = _items(_load_json(source / str((sm.get("files") or {}).get("devices", "")), [])) if sm.get("files") else []
@@ -381,6 +414,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
         ["Requested mode", metadata.get("requestedMode", "")],
         ["Effective mode", metadata.get("effectiveMode", "")],
         ["Collected at", metadata.get("collectedAt", "")],
+        ["Network Application version", network_info.get("applicationVersion", "")],
         ["Site Manager", _surface_state(sm)],
         ["Network Application", _surface_state(net)],
     ]
@@ -405,6 +439,11 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     sections.append("<div><h3>By Status</h3>" + _table(["Status", "Count"], status_rows) + "</div>")
     sections.append("<h3>By Model</h3>")
     sections.append(_table(["Model", "Role", "Count"], _model_rows(all_devices), "No device model data captured."))
+    sections.append("<h3>Interface Telemetry Coverage</h3>")
+    sections.append("<p>UniFi Network reports interface capability flags in this backup. Per-port and per-radio utilization metrics are not present in the captured Network Integration payloads.</p>")
+    sections.append("<div><h4>Advertised Interfaces</h4>" + _table(["Interface", "Devices"], _interface_summary_rows(all_devices), "No interface capability flags captured.") + "</div>")
+    sections.append("<div><h4>Telemetry Status</h4>" + _table(["Metric", "Status"], [["Port detail", "not present in backup"], ["Radio detail", "not present in backup"], ["Client uplink mapping", "captured"]]) + "</div>")
+    sections.append(_table(["Device", "Model", "Features", "Interfaces", "Detail"], _interface_device_rows(all_devices), "No device interface coverage captured."))
     device_rows = []
     for dev in all_devices[:300]:
         device_rows.append([

From b9b5ac9b145c0442dda41e157e9a70b54502d336 Mon Sep 17 00:00:00 2001
From: "techmore.co"
Date: Tue, 5 May 2026 23:51:07 -0400
Subject: [PATCH 27/47] Probe UniFi telemetry endpoint coverage

---
 ROADMAP.md                 |   2 +
 tests/test_unifi_report.py |  39 +++++++++++++-
 unifi/README.md            |   2 +
 unifi/collect.py           | 103 +++++++++++++++++++++++++++++++++++++
 unifi/report.py            |  56 +++++++++++++++++++-
 5 files changed, 199 insertions(+), 3 deletions(-)

diff --git a/ROADMAP.md b/ROADMAP.md
index 575f974..661e9e5 100644
--- a/ROADMAP.md
+++ b/ROADMAP.md
@@ -115,6 +115,8 @@ This project is currently functional as a Python reporting pipeline. The immedia
 - ~~Improve UniFi executive summary language once more live sites are captured.~~
 - ~~Document UniFi interface telemetry coverage so reports distinguish advertised
   port/radio capability flags from detailed per-port/per-radio metrics.~~
+- ~~Probe likely UniFi port/radio telemetry endpoints during collection and save
+  structured coverage evidence in the backup/report.~~
 - Add deeper UniFi switch/AP port and radio telemetry when the controller API
   exposes it.
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py
index 0cb429f..8e1639e 100644
--- a/tests/test_unifi_report.py
+++ b/tests/test_unifi_report.py
@@ -2,7 +2,7 @@
 from pathlib import Path
 
 from unifi.client import UniFiRequestError
-from unifi.collect import _call_list
+from unifi.collect import _call_list, _collect_telemetry_probes
 from unifi.report import build_report
 from unifi.profiles import discover_site_profiles
 
@@ -34,6 +34,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
                     "wifi": "sites/Main/wifi.json",
                     "firewall_zones": "sites/Main/firewall_zones.json",
                     "firewall_policies": "sites/Main/firewall_policies.json",
+                    "telemetry_probe": "sites/Main/telemetry_probe.json",
                 },
                 "counts": {"devices": 3, "clients": 1, "networks": 1, "wifi": 1, "firewall_zones": 1, "firewall_policies": 1},
             }
@@ -75,6 +76,15 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     )
     (site_dir / "firewall_zones.json").write_text(json.dumps([{"name": "Internal", "id": "zone-1"}]), encoding="utf-8")
     (site_dir / "firewall_policies.json").write_text(json.dumps([{"name": "Allow Staff", "enabled": True, "action": {"type": "ALLOW"}}]), encoding="utf-8")
+    (site_dir / "telemetry_probe.json").write_text(
+        json.dumps(
+            [
+                {"label": "site_ports", "purpose": "Per-site switch port telemetry", "path": "/ports", "available": False, "status": 404, "itemCount": 0},
+                {"label": "wireless_radios", "purpose": "Wireless radio telemetry", "path": "/wireless/radios", "available": False, "status": 404, "itemCount": 0},
+            ]
+        ),
+        encoding="utf-8",
+    )
 
     output = tmp_path / "report"
     paths = build_report(str(source), str(output))
@@ -100,6 +110,9 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     assert "Interface Telemetry Coverage" in html
     assert "ports, radios" in html
     assert "capability flag only" in html
+    assert "API Telemetry Probe Results" in html
+    assert "site_ports" in html
+    assert "HTTP 404" in html
 
 
 def test_unifi_profiles_discovers_numbered_site_profiles(monkeypatch):
@@ -221,3 +234,27 @@ def paged_get(self, path, *, style):
     assert errors == []
     assert unsupported[0]["label"] == "Default:vpn_tunnels"
     assert unsupported[0]["note"] == "Not exposed by this controller."
+
+
+def test_unifi_collect_telemetry_probe_records_available_and_missing_paths(tmp_path: Path):
+    class ProbeClient:
+        def get_json(self, path, params=None):
+            if path.endswith("/sites/site-1/ports"):
+                return {"data": [{"port": 1}, {"port": 2}]}
+            raise UniFiRequestError("HTTP 404", status=404)
+
+    results = _collect_telemetry_probes(
+        ProbeClient(),
+        "/network",
+        "site-1",
+        "Main",
+        [{"id": "device-1", "interfaces": ["ports", "radios"]}],
+        tmp_path,
+    )
+    by_label = {result["label"]: result for result in results}
+
+    assert by_label["site_ports"]["available"] is True
+    assert by_label["site_ports"]["itemCount"] == 2
+    assert (tmp_path / by_label["site_ports"]["file"]).exists()
+    assert by_label["site_radios"]["status"] == 404
+    assert by_label["device_ports"]["path"].endswith("/devices/device-1/ports")
diff --git a/unifi/README.md b/unifi/README.md
index 0acc4ca..15b8733 100644
--- a/unifi/README.md
+++ b/unifi/README.md
@@ -67,6 +67,8 @@ available on a given controller.
 
 Outputs are written to:
 
 - `unifi/backups/latest/` for raw JSON backups
+- `unifi/backups/latest/sites//telemetry_probe.json` for non-fatal
+  port/radio endpoint coverage probes
 - `unifi/reports/latest/` for `report.pdf`, optional `report.html`,
   `report_inventory.json`, and `index.html`
diff --git a/unifi/collect.py b/unifi/collect.py
index c628339..64cfecb 100644
--- a/unifi/collect.py
+++ b/unifi/collect.py
@@ -28,6 +28,19 @@
 OPTIONAL_404_SITE_ENDPOINTS = {
     "vpn_tunnels": "This UniFi Network version does not expose VPN tunnel listing through the Network Integration API.",
 }
+TELEMETRY_PROBES: Tuple[Dict[str, str], ...] = (
+    {"label": "site_ports", "scope": "site", "suffix": "ports", "purpose": "Per-site switch port telemetry"},
+    {"label": "site_radios", "scope": "site", "suffix": "radios", "purpose": "Per-site AP radio telemetry"},
+    {"label": "site_interfaces", "scope": "site", "suffix": "interfaces", "purpose": "Per-site interface telemetry"},
+    {"label": "device_interfaces", "scope": "site", "suffix": "device-interfaces", "purpose": "Per-site device interface telemetry"},
+    {"label": "switch_ports", "scope": "site", "suffix": "switch/ports", "purpose": "Switch port telemetry"},
+    {"label": "wireless_radios", "scope": "site", "suffix": "wireless/radios", "purpose": "Wireless radio telemetry"},
+    {"label": "wifi_radio_settings", "scope": "site", "suffix": "wifi/radio-settings", "purpose": "WiFi radio settings"},
+    {"label": "wifi_rf_environments", "scope": "site", "suffix": "wifi/rf-environments", "purpose": "RF environment telemetry"},
+    {"label": "wifi_channel_plans", "scope": "site", "suffix": "wifi/channel-plans", "purpose": "Channel plan telemetry"},
+    {"label": "device_ports", "scope": "device", "interface": "ports", "suffix": "devices/{device_id}/ports", "purpose": "Per-device port telemetry"},
+    {"label": "device_radios", "scope": "device", "interface": "radios", "suffix": "devices/{device_id}/radios", "purpose": "Per-device radio telemetry"},
+)
 
 
 def _write_json(path: Path, payload: Any) -> None:
@@ -47,6 +60,11 @@ def _safe_name(value: str) -> str:
     return clean.strip("_") or "site"
 
 
+def _safe_label(value: str) -> str:
+    clean = "".join(ch if ch.isalnum() or ch in "-_" else "_" for ch in value.strip())
+    return clean.strip("_") or "item"
+
+
 def _items(payload: Any) -> List[Dict[str, Any]]:
     if isinstance(payload, list):
         return [x for x in payload if isinstance(x, dict)]
@@ -81,6 +99,31 @@ def _site_matches(site: Dict[str, Any], selector: str) -> bool:
     return wanted in {value.strip().lower() for value in values if value}
 
 
+def _device_with_interface(devices: Iterable[Dict[str, Any]], interface: str) -> Dict[str, Any] | None:
+    wanted = interface.strip().lower()
+    for device in devices:
+        interfaces = device.get("interfaces")
+        if not isinstance(interfaces, list):
+            continue
+        available = {str(item).strip().lower() for item in interfaces if item}
+        if wanted in available and (device.get("id") or device.get("_id")):
+            return device
+    return None
+
+
+def _payload_count(payload: Any) -> int:
+    items = _items(payload)
+    if items:
+        return len(items)
+    if payload in (None, ""):
+        return 0
+    if isinstance(payload, dict):
+        return 1
+    if isinstance(payload, list):
+        return len(payload)
+    return 1
+
+
 def _call_list(
     client: UniFiClient,
     path: str,
@@ -104,6 +147,58 @@ def _call_list(
     return []
 
 
+def _probe_telemetry_endpoint(client: UniFiClient, path: str, *, label: str, purpose: str, output: Path, safe: str) -> Dict[str, Any]:
+    record: Dict[str, Any] = {
+        "label": label,
+        "purpose": purpose,
+        "path": path,
+        "available": False,
+        "status": None,
+        "itemCount": 0,
+    }
+    try:
+        payload = client.get_json(path, {"limit": 10, "offset": 0})
+    except UniFiRequestError as exc:
+        record.update({"status": exc.status, "error": str(exc)})
+        return record
+    except Exception as exc:
+        record.update({"error": str(exc)})
+        return record
+
+    rel = f"sites/{safe}/telemetry/{_safe_label(label)}.json"
+    _write_json(output / rel, payload)
+    record.update({"available": True, "status": 200, "itemCount": _payload_count(payload), "file": rel})
+    return record
+
+
+def _collect_telemetry_probes(client: UniFiClient, network_prefix: str, site_id: str, safe: str, devices: Iterable[Dict[str, Any]], output: Path) -> List[Dict[str, Any]]:
+    results: List[Dict[str, Any]] = []
+    device_items = list(devices)
+    for probe in TELEMETRY_PROBES:
+        label = probe["label"]
+        suffix = probe["suffix"]
+        if probe.get("scope") == "device":
+            device = _device_with_interface(device_items, probe.get("interface", ""))
+            if not device:
+                results.append(
+                    {
+                        "label": label,
+                        "purpose": probe.get("purpose", ""),
+                        "path": "",
+                        "available": False,
+                        "status": None,
+                        "itemCount": 0,
+                        "note": f"No sampled device advertises {probe.get('interface')} interface capability.",
+                    }
+                )
+                continue
+            device_id = str(device.get("id") or device.get("_id"))
+            suffix = suffix.format(device_id=device_id)
+        path = f"{network_prefix}/sites/{site_id}/{suffix}"
+        results.append(_probe_telemetry_endpoint(client, path, label=label, purpose=probe.get("purpose", ""), output=output, safe=safe))
+    return results
+
+
 def collect_site_manager(output: Path) -> Dict[str, Any]:
     api_key = os.getenv("UNIFI_SITE_MANAGER_API_KEY") or os.getenv("UNIFI_API_KEY")
     if not api_key:
@@ -238,6 +333,7 @@ def collect_network_application(output: Path, selected_site_id: str = "", consol
         name = _site_name(site)
         safe = _safe_name(name or sid)
         site_summary: Dict[str, Any] = {"id": sid, "name": name, "files": {}, "counts": {}}
+        site_payloads: Dict[str, List[Any]] = {}
         for label, suffix in site_endpoints:
             path = f"{network_prefix}/sites/{sid}/{suffix}"
             data = _call_list(
@@ -253,6 +349,13 @@ def collect_network_application(output: Path, selected_site_id: str = "", consol
             _write_json(output / rel, data)
             site_summary["files"][label] = rel
             site_summary["counts"][label] = len(data)
+            site_payloads[label] = data
+        telemetry_probe = _collect_telemetry_probes(client, network_prefix, sid, safe, _items(site_payloads.get("devices", [])), output)
+        telemetry_rel = f"sites/{safe}/telemetry_probe.json"
+        _write_json(output / telemetry_rel, telemetry_probe)
+        site_summary["files"]["telemetry_probe"] = telemetry_rel
+        site_summary["counts"]["telemetry_probe_available"] = sum(1 for result in telemetry_probe if result.get("available"))
+        site_summary["counts"]["telemetry_probe_total"] = len(telemetry_probe)
         site_summaries.append(site_summary)
 
     _write_json(output / "network_site_summaries.json", site_summaries)
diff --git a/unifi/report.py b/unifi/report.py
index 0253e46..9746ee1 100644
--- a/unifi/report.py
+++ b/unifi/report.py
@@ -141,6 +141,45 @@ def _interface_device_rows(devices: Iterable[Dict[str, Any]])
     return rows
 
 
+def _probe_status_label(probe: Dict[str, Any]) -> str:
+    if probe.get("available"):
+        return "available"
+    status = probe.get("status")
+    if status:
+        return f"HTTP {status}"
+    return "not probed"
+
+
+def _probe_status_summary(probes: Iterable[Dict[str, Any]], terms: Iterable[str], fallback: str) -> str:
+    wanted = [term.lower() for term in terms]
+    relevant = [
+        probe
+        for probe in probes
+        if any(term in str(probe.get("label") or "").lower() or term in str(probe.get("purpose") or "").lower() for term in wanted)
+    ]
+    if not relevant:
+        return fallback
+    if any(probe.get("available") for probe in relevant):
+        return "captured by API probe"
+    statuses = sorted({_probe_status_label(probe) for probe in relevant})
+    return f"not exposed by probed endpoints ({', '.join(statuses)})"
+
+
+def _probe_rows(probes: Iterable[Dict[str, Any]]) -> List[List[Any]]:
+    rows: List[List[Any]] = []
+    for probe in probes:
+        rows.append(
+            [
+                probe.get("label", ""),
+                _probe_status_label(probe),
+                _yes_no(probe.get("available")),
+                probe.get("itemCount", 0),
+                probe.get("purpose") or probe.get("note") or "",
+            ]
+        )
+    return rows
+
+
 def _table(headers: List[str], rows: List[List[Any]], empty: str = "No data captured.") -> str:
     if not rows:
         return f"<p>{html.escape(empty)}</p>"
@@ -327,9 +366,11 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     all_devices: List[Dict[str, Any]] = []
     all_clients: List[Dict[str, Any]] = []
+    telemetry_probes: List[Dict[str, Any]] = []
     for site in site_summaries:
         all_devices.extend(_read_site_file(source, site, "devices"))
         all_clients.extend(_read_site_file(source, site, "clients"))
+        telemetry_probes.extend(_read_site_file(source, site, "telemetry_probe"))
     if not all_devices:
         all_devices = sm_devices
@@ -440,10 +481,21 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     sections.append("<h3>By Model</h3>")
     sections.append(_table(["Model", "Role", "Count"], _model_rows(all_devices), "No device model data captured."))
     sections.append("<h3>Interface Telemetry Coverage</h3>")
-    sections.append("<p>UniFi Network reports interface capability flags in this backup. Per-port and per-radio utilization metrics are not present in the captured Network Integration payloads.</p>")
+    if telemetry_probes:
+        sections.append("<p>UniFi Network reports interface capability flags in this backup. API probe results below document whether detailed per-port and per-radio endpoints were exposed by this controller.</p>")
+    else:
+        sections.append("<p>UniFi Network reports interface capability flags in this backup. Per-port and per-radio utilization metrics are not present in the captured Network Integration payloads.</p>")
     sections.append("<div><h4>Advertised Interfaces</h4>" + _table(["Interface", "Devices"], _interface_summary_rows(all_devices), "No interface capability flags captured.") + "</div>")
-    sections.append("<div><h4>Telemetry Status</h4>" + _table(["Metric", "Status"], [["Port detail", "not present in backup"], ["Radio detail", "not present in backup"], ["Client uplink mapping", "captured"]]) + "</div>")
+    telemetry_status_rows = [
+        ["Port detail", _probe_status_summary(telemetry_probes, ("port", "ports"), "not present in backup")],
+        ["Radio detail", _probe_status_summary(telemetry_probes, ("radio", "radios", "rf"), "not present in backup")],
+        ["Client uplink mapping", "captured" if all_clients else "not present in backup"],
+    ]
+    sections.append("<div><h4>Telemetry Status</h4>" + _table(["Metric", "Status"], telemetry_status_rows) + "</div>")
     sections.append(_table(["Device", "Model", "Features", "Interfaces", "Detail"], _interface_device_rows(all_devices), "No device interface coverage captured."))
+    if telemetry_probes:
+        sections.append("<h3>API Telemetry Probe Results</h3>")
+        sections.append(_table(["Probe", "Status", "Available", "Items", "Purpose"], _probe_rows(telemetry_probes), "No telemetry probes captured."))
     device_rows = []
     for dev in all_devices[:300]:
         device_rows.append([

From fb9292bdf75443c454abdb0497454d0fb11dab2b Mon Sep 17 00:00:00 2001
From: "techmore.co"
Date: Tue, 5 May 2026 23:54:13 -0400
Subject: [PATCH 28/47] Add UniFi backup completeness matrix

---
 ROADMAP.md                 |   2 +
 tests/test_unifi_report.py |  31 +++++++++-
 unifi/report.py            | 119 +++++++++++++++++++++++++++++++++++++
 3 files changed, 150 insertions(+), 2 deletions(-)

diff --git a/ROADMAP.md b/ROADMAP.md
index 661e9e5..770318b 100644
--- a/ROADMAP.md
+++ b/ROADMAP.md
@@ -117,6 +117,8 @@ This project is currently functional as a Python reporting pipeline. The immedia
   port/radio capability flags from detailed per-port/per-radio metrics.~~
 - ~~Probe likely UniFi port/radio telemetry endpoints during collection and save
   structured coverage evidence in the backup/report.~~
+- ~~Add a UniFi configuration backup completeness matrix showing captured,
+  captured-empty, and unsupported endpoint coverage.~~
 - Add deeper UniFi switch/AP port and radio telemetry when the controller API
   exposes it.
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py
index 8e1639e..60f2e90 100644
--- a/tests/test_unifi_report.py
+++ b/tests/test_unifi_report.py
@@ -15,7 +15,14 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
         json.dumps(
             {
                 "metadata": {"requestedMode": "network", "effectiveMode": "network", "collectedAt": "2026-05-05T12:00:00"},
-                "networkApplication": {"enabled": True, "files": {"site_summaries": "network_site_summaries.json", "info": "network_info.json"}, "errors": []},
+                "networkApplication": {
+                    "enabled": True,
+                    "files": {"site_summaries": "network_site_summaries.json", "info": "network_info.json"},
+                    "errors": [],
+                    "unsupportedEndpoints": [
+                        {"label": "Main:vpn_tunnels", "status": 404, "path": "/vpn/tunnels", "note": "Not exposed."}
+                    ],
+                },
             }
         ),
         encoding="utf-8",
@@ -34,9 +41,22 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
                     "wifi": "sites/Main/wifi.json",
                     "firewall_zones": "sites/Main/firewall_zones.json",
                     "firewall_policies": "sites/Main/firewall_policies.json",
+                    "dns_policies": "sites/Main/dns_policies.json",
+                    "vpn_tunnels": "sites/Main/vpn_tunnels.json",
                     "telemetry_probe": "sites/Main/telemetry_probe.json",
                 },
-                "counts": {"devices": 3, "clients": 1, "networks": 1, "wifi": 1, "firewall_zones": 1, "firewall_policies": 1},
+                "counts": {
+                    "devices": 3,
+                    "clients": 1,
+                    "networks": 1,
+                    "wifi": 1,
+                    "firewall_zones": 1,
+                    "firewall_policies": 1,
+                    "dns_policies": 0,
+                    "vpn_tunnels": 0,
+                    "telemetry_probe_available": 0,
+                    "telemetry_probe_total": 2,
+                },
             }
         ]
     ),
@@ -76,6 +96,8 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     )
     (site_dir / "firewall_zones.json").write_text(json.dumps([{"name": "Internal", "id": "zone-1"}]), encoding="utf-8")
     (site_dir / "firewall_policies.json").write_text(json.dumps([{"name": "Allow Staff", "enabled": True, "action": {"type": "ALLOW"}}]), encoding="utf-8")
+    (site_dir / "dns_policies.json").write_text(json.dumps([]), encoding="utf-8")
+    (site_dir / "vpn_tunnels.json").write_text(json.dumps([]), encoding="utf-8")
     (site_dir / "telemetry_probe.json").write_text(
         json.dumps(
             [
@@ -113,6 +135,11 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     assert "API Telemetry Probe Results" in html
     assert "site_ports" in html
     assert "HTTP 404" in html
+    assert "Configuration Backup Completeness" in html
+    assert "Networks / VLANs" in html
+    assert "0 / 2 available" in html
+    assert "captured empty" in html
+    assert "not exposed (HTTP 404)" in html
 
 
 def test_unifi_profiles_discovers_numbered_site_profiles(monkeypatch):
diff --git a/unifi/report.py b/unifi/report.py
index 9746ee1..2230e7e 100644
--- a/unifi/report.py
+++ b/unifi/report.py
@@ -11,6 +11,40 @@
 ROOT = Path(__file__).resolve().parents[1]
 
+SITE_ENDPOINT_ORDER = [
+    "devices",
+    "clients",
+    "networks",
+    "wifi",
+    "wans",
+    "firewall_zones",
+    "firewall_policies",
+    "acl_rules",
+    "traffic_lists",
+    "dns_policies",
+    "radius",
+    "hotspot_vouchers",
+    "vpn_servers",
+    "vpn_tunnels",
+    "telemetry_probe",
+]
+SITE_ENDPOINT_LABELS = {
+    "acl_rules": "ACL rules",
+    "clients": "Clients",
+    "devices": "Devices",
+    "dns_policies": "DNS policies",
+    "firewall_policies": "Firewall policies",
+    "firewall_zones": "Firewall zones",
+    "hotspot_vouchers": "Hotspot vouchers",
+    "networks": "Networks / VLANs",
+    "radius": "RADIUS profiles",
+    "telemetry_probe": "Telemetry probes",
+    "traffic_lists": "Traffic lists",
+    "vpn_servers": "VPN servers",
+    "vpn_tunnels": "VPN tunnels",
+    "wans": "WANs",
+    "wifi": "WiFi broadcasts",
+}
 
 
 def _load_json(path: Path, default: Any) -> Any:
@@ -180,6 +214,76 @@ def _probe_rows(probes: Iterable[Dict[str, Any]]) -> List[List[Any]]:
     return rows
 
 
+def _site_endpoint_key(label: str, site_name: str) -> str:
+    prefix = f"{site_name}:"
+    if label.startswith(prefix):
+        return label[len(prefix) :]
+    if ":" in label:
+        return label.split(":", 1)[1]
+    return label
+
+
+def _endpoint_issue_map(site_name: str, records: Iterable[Dict[str, Any]]) -> Dict[str, Dict[str, Any]]:
+    mapped: Dict[str, Dict[str, Any]] = {}
+    for record in records:
+        label = str(record.get("label") or "")
+        if ":" in label and not label.startswith(f"{site_name}:"):
+            continue
+        key = _site_endpoint_key(label, site_name)
+        if key:
+            mapped[key] = record
+    return mapped
+
+
+def _site_file_keys(files: Dict[str, Any]) -> List[str]:
+    known = [key for key in SITE_ENDPOINT_ORDER if key in files]
+    extra = sorted(key for key in files if key not in SITE_ENDPOINT_ORDER)
+    return known + extra
+
+
+def _backup_count_label(key: str, counts: Dict[str, Any]) -> str:
+    if key == "telemetry_probe":
+        total = counts.get("telemetry_probe_total")
+        available = counts.get("telemetry_probe_available")
+        if total is not None or available is not None:
+            return f"{available or 0} / {total or 0} available"
+    return str(counts.get(key, ""))
+
+
+def _backup_status_label(key: str, count_label: str, error: Dict[str, Any] | None, unsupported: Dict[str, Any] | None) -> str:
+    if unsupported:
+        return f"not exposed (HTTP {unsupported.get('status')})" if unsupported.get("status") else "not exposed"
+    if error:
+        return f"error (HTTP {error.get('status')})" if error.get("status") else "error"
+    if key == "telemetry_probe":
+        return "probed"
+    try:
+        count = int(count_label)
+    except (TypeError, ValueError):
+        return "captured" if count_label else "unknown"
+    return "captured" if count > 0 else "captured empty"
+
+
+def _backup_completeness_rows(site: Dict[str, Any], errors: Iterable[Dict[str, Any]], unsupported: Iterable[Dict[str, Any]]) -> List[List[Any]]:
+    files = site.get("files") if isinstance(site.get("files"), dict) else {}
+    counts = site.get("counts") if isinstance(site.get("counts"), dict) else {}
+    site_name = str(site.get("name") or site.get("id") or "Site")
+    error_map = _endpoint_issue_map(site_name, errors)
+    unsupported_map = _endpoint_issue_map(site_name, unsupported)
+    rows: List[List[Any]] = []
+    for key in _site_file_keys(files):
+        count_label = _backup_count_label(key, counts)
+        rows.append(
+            [
+                SITE_ENDPOINT_LABELS.get(key, key.replace("_", " ").title()),
+                count_label,
+                _backup_status_label(key, count_label, error_map.get(key), unsupported_map.get(key)),
+                files.get(key, ""),
+            ]
+        )
+    return rows
+
+
 def _table(headers: List[str], rows: List[List[Any]], empty: str = "No data captured.") -> str:
     if not rows:
         return f"<p>{html.escape(empty)}</p>"
@@ -473,6 +577,21 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     sections.append("<ul>" + "".join(f"<li>{html.escape(item)}</li>" for item in auth_guidance) + "</ul>")
     sections.append("")
 
+    sections.append("<h2>Configuration Backup Completeness</h2>")
+    if site_summaries:
+        for site in site_summaries:
+            sections.append(f"<h3>{html.escape(str(site.get('name') or site.get('id') or 'Site'))}</h3>")
+            sections.append(
+                _table(
+                    ["Area", "Items", "Status", "Backup JSON"],
+                    _backup_completeness_rows(site, errors, unsupported),
+                    "No site-scoped backup files were captured.",
+                )
+            )
+    else:
+        sections.append("<p>No local Network Application site backup detail captured.</p>")
+    sections.append("")
+
     sections.append("<h2>Device Inventory</h2>")
     role_rows = [[k, v] for k, v in role_counts.items()]
     status_rows = [[k, v] for k, v in status_counts.items()]

From cefd7a882031cfe0769443097d986adc67a15758 Mon Sep 17 00:00:00 2001
From: "techmore.co"
Date: Tue, 5 May 2026 23:56:31 -0400
Subject: [PATCH 29/47] Add UniFi multi-site report index

---
 ROADMAP.md                    |   1 +
 tests/test_unifi_run_sites.py |  58 +++++++++++++++++++
 unifi/README.md               |   2 +
 unifi/run_sites.py            | 103 +++++++++++++++++++++++++++++++++-
 4 files changed, 163 insertions(+), 1 deletion(-)
 create mode 100644 tests/test_unifi_run_sites.py

diff --git a/ROADMAP.md b/ROADMAP.md
index 770318b..d8df813 100644
--- a/ROADMAP.md
+++ b/ROADMAP.md
@@ -110,6 +110,7 @@ This project is currently functional as a Python reporting pipeline. The immedia
   findings while we learn the exact controller version and API surface.~~
 - ~~Add saved site profiles in `unifi/.env` and `./unifi/run.sh --all-sites`
   for multi-site runs.~~
+- ~~Write a top-level UniFi multi-site report index for saved profile runs.~~
 - ~~Write UniFi report inventory data and a static `index.html` for generated outputs.~~
 - ~~Improve UniFi executive summary language once more live sites are captured.~~
diff --git a/tests/test_unifi_run_sites.py b/tests/test_unifi_run_sites.py
new file mode 100644
index 0000000..8222798
--- /dev/null
+++ b/tests/test_unifi_run_sites.py
@@ -0,0 +1,58 @@
+from datetime import datetime, timezone
+from pathlib import Path
+
+from unifi.run_sites import build_site_index_html, write_site_index
+
+
+def test_unifi_site_index_links_profile_reports(tmp_path: Path):
+    reports_root = tmp_path / "reports"
+    site_dir = reports_root / "First_Campus"
+    site_dir.mkdir(parents=True)
+    (site_dir / "report.pdf").write_bytes(b"%PDF-1.4\n")
+    (site_dir / "index.html").write_text("", encoding="utf-8")
+    manifest = {
+        "ok": True,
+        "profiles": [
+            {
+                "profile": "site1",
+                "name": "First Campus",
+                "collectionStatus": "ok",
+                "reportStatus": "ok",
+                "reportsDir": str(site_dir),
+            }
+        ],
+    }
+ html = build_site_index_html(manifest, reports_root, datetime(2026, 5, 5, tzinfo=timezone.utc)) + + assert "TM UniFi Site Reports" in html + assert "First Campus" in html + assert 'href="First_Campus/report.pdf"' in html + assert 'href="First_Campus/index.html"' in html + assert "site_run_manifest.json" in html + + +def test_unifi_site_index_marks_failed_profiles(tmp_path: Path): + reports_root = tmp_path / "reports" + reports_root.mkdir() + manifest = { + "ok": False, + "profiles": [ + { + "profile": "site2", + "name": "Second Campus", + "collectionStatus": "failed", + "reportStatus": "missing_backup", + "reportsDir": str(reports_root / "Second_Campus"), + } + ], + } + + index_path = write_site_index(manifest, reports_root) + html = index_path.read_text(encoding="utf-8") + + assert index_path == reports_root / "index.html" + assert "Needs attention" in html + assert "Second Campus" in html + assert "failed" in html + assert "missing_backup" in html diff --git a/unifi/README.md b/unifi/README.md index 15b8733..a718a0c 100644 --- a/unifi/README.md +++ b/unifi/README.md @@ -76,3 +76,5 @@ When `--all-sites` is used, outputs are separated by saved profile: - `unifi/backups/sites/site1/` - `unifi/reports/sites/site1/` +- `unifi/reports/sites/site_run_manifest.json` +- `unifi/reports/sites/index.html` diff --git a/unifi/run_sites.py b/unifi/run_sites.py index 005c602..71e3606 100644 --- a/unifi/run_sites.py +++ b/unifi/run_sites.py @@ -1,10 +1,12 @@ #!/usr/bin/env python3 import argparse +import html import json import os import subprocess import sys from contextlib import contextmanager +from datetime import datetime, timezone from pathlib import Path from typing import Dict, Iterator, List @@ -15,6 +17,104 @@ ROOT = Path(__file__).resolve().parents[1] +def _relative_href(path: Path, base: Path) -> str: + try: + rel = os.path.relpath(path.resolve(), base.resolve()) + except OSError: + rel = str(path) + return html.escape(Path(rel).as_posix(), quote=True) + + +def 
_status_badge(value: object) -> str:
+    raw = str(value or "unknown")
+    css = "ok" if raw == "ok" else ("warn" if raw in {"skipped", "missing_backup"} else "bad")
+    return f'<span class="status {css}">{html.escape(raw)}</span>'
+
+
+def build_site_index_html(manifest: Dict[str, object], reports_root: Path, generated_at: datetime | None = None) -> str:
+    generated = generated_at or datetime.now(timezone.utc)
+    profiles = [profile for profile in manifest.get("profiles", []) if isinstance(profile, dict)]
+    rows = []
+    for profile in profiles:
+        reports_dir = Path(str(profile.get("reportsDir") or ""))
+        report_pdf = reports_dir / "report.pdf"
+        profile_index = reports_dir / "index.html"
+        if report_pdf.exists():
+            report_link = f'<a href="{_relative_href(report_pdf, reports_root)}">report.pdf</a>'
+        else:
+            report_link = "report.pdf"
+        if profile_index.exists():
+            inventory_link = f'<a href="{_relative_href(profile_index, reports_root)}">index.html</a>'
+        else:
+            inventory_link = "index.html"
+        rows.append(
+            "<tr>"
+            f"<td>{html.escape(str(profile.get('name') or profile.get('profile') or ''))}</td>"
+            f"<td>{html.escape(str(profile.get('profile') or ''))}</td>"
+            f"<td>{_status_badge(profile.get('collectionStatus'))}</td>"
+            f"<td>{_status_badge(profile.get('reportStatus'))}</td>"
+            f"<td>{report_link}</td>"
+            f"<td>{inventory_link}</td>"
+            "</tr>"
+        )
+
+    status = "OK" if manifest.get("ok") else "Needs attention"
+    status_class = "ok" if manifest.get("ok") else "bad"
+    manifest_link = '<a href="site_run_manifest.json">site_run_manifest.json</a>'
+    return f"""<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="utf-8">
+<title>TM UniFi Site Reports</title>
+</head>
+<body>
+<main>
+<header>
+<h1>TM UniFi Site Reports</h1>
+<div class="meta">
+<span>Status: <span class="status {status_class}">{html.escape(status)}</span></span>
+<span>Generated: {html.escape(generated.isoformat())}</span>
+<span>Manifest: {manifest_link}</span>
+</div>
+</header>
+<section>
+<p>Saved UniFi profile report outputs for this run.</p>
+<table>
+<thead><tr><th>Site</th><th>Profile</th><th>Collection</th><th>Report</th><th>PDF</th><th>Inventory</th></tr></thead>
+<tbody>
+{''.join(rows)}
+</tbody>
+</table>
+</section>
    + + +""" + + +def write_site_index(manifest: Dict[str, object], reports_root: Path) -> Path: + target = reports_root / "index.html" + target.write_text(build_site_index_html(manifest, reports_root), encoding="utf-8") + return target + + @contextmanager def _profile_environment(profile: UniFiSiteProfile) -> Iterator[None]: updates = profile.env_updates() @@ -137,11 +237,12 @@ def main(argv: List[str] | None = None) -> int: reports_root = Path(args.reports_dir) reports_root.mkdir(parents=True, exist_ok=True) (reports_root / "site_run_manifest.json").write_text(json.dumps(manifest, indent=2), encoding="utf-8") + index_path = write_site_index(manifest, reports_root) print("") print(f"Site run manifest: {reports_root / 'site_run_manifest.json'}") + print(f"Site report index: {index_path}") return 0 if manifest["ok"] else 1 if __name__ == "__main__": raise SystemExit(main()) - From ad832c4c8076722d654f2bee4530823d056c6fb0 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Tue, 5 May 2026 23:59:08 -0400 Subject: [PATCH 30/47] Add UniFi site index metrics --- ROADMAP.md | 2 + tests/test_unifi_run_sites.py | 62 +++++++++++++++++++++++- unifi/README.md | 4 ++ unifi/run_sites.py | 90 ++++++++++++++++++++++++++++++++++- 4 files changed, 156 insertions(+), 2 deletions(-) diff --git a/ROADMAP.md b/ROADMAP.md index d8df813..597622e 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -111,6 +111,8 @@ This project is currently functional as a Python reporting pipeline. 
The immedia - ~~Add saved site profiles in `unifi/.env` and `./unifi/run.sh --all-sites` for multi-site runs.~~ - ~~Write a top-level UniFi multi-site report index for saved profile runs.~~ +- ~~Add per-profile network size and coverage metrics to the UniFi multi-site + manifest/index.~~ - ~~Write UniFi report inventory data and a static `index.html` for generated outputs.~~ - ~~Improve UniFi executive summary language once more live sites are captured.~~ diff --git a/tests/test_unifi_run_sites.py b/tests/test_unifi_run_sites.py index 8222798..04f1145 100644 --- a/tests/test_unifi_run_sites.py +++ b/tests/test_unifi_run_sites.py @@ -1,7 +1,7 @@ from datetime import datetime, timezone from pathlib import Path -from unifi.run_sites import build_site_index_html, write_site_index +from unifi.run_sites import _profile_summary_metrics, build_site_index_html, write_site_index def test_unifi_site_index_links_profile_reports(tmp_path: Path): @@ -19,6 +19,17 @@ def test_unifi_site_index_links_profile_reports(tmp_path: Path): "collectionStatus": "ok", "reportStatus": "ok", "reportsDir": str(site_dir), + "summaryMetrics": { + "devices": 12, + "clients": 48, + "networks": 3, + "wifi": 2, + "firewallPolicies": 9, + "telemetryProbeAvailable": 1, + "telemetryProbeTotal": 4, + "endpointErrors": 0, + "unsupportedEndpoints": 1, + }, } ], } @@ -30,6 +41,11 @@ def test_unifi_site_index_links_profile_reports(tmp_path: Path): assert 'href="First_Campus/report.pdf"' in html assert 'href="First_Campus/index.html"' in html assert "site_run_manifest.json" in html + assert ">12<" in html + assert ">48<" in html + assert "3 net / 2 WiFi / 9 FW" in html + assert "1 / 4" in html + assert "0 errors / 1 notes" in html def test_unifi_site_index_marks_failed_profiles(tmp_path: Path): @@ -56,3 +72,47 @@ def test_unifi_site_index_marks_failed_profiles(tmp_path: Path): assert "Second Campus" in html assert "failed" in html assert "missing_backup" in html + + +def 
test_unifi_profile_summary_metrics_aggregates_collection_summary(tmp_path: Path): + backups = tmp_path / "backups" + backups.mkdir() + (backups / "network_info.json").write_text('{"applicationVersion":"10.3.58"}', encoding="utf-8") + (backups / "collection_summary.json").write_text( + """ + { + "siteManager": {"errors": [{"label": "site_manager_sites"}]}, + "networkApplication": { + "files": {"info": "network_info.json"}, + "counts": {"sites": 1}, + "errors": [], + "unsupportedEndpoints": [{"label": "Default:vpn_tunnels"}], + "siteSummaries": [ + { + "counts": { + "devices": 5, + "clients": 33, + "networks": 2, + "wifi": 1, + "firewall_policies": 63, + "firewall_zones": 6, + "telemetry_probe_available": 0, + "telemetry_probe_total": 11 + } + } + ] + } + } + """, + encoding="utf-8", + ) + + metrics = _profile_summary_metrics(backups) + + assert metrics["sites"] == 1 + assert metrics["devices"] == 5 + assert metrics["clients"] == 33 + assert metrics["firewallPolicies"] == 63 + assert metrics["endpointErrors"] == 1 + assert metrics["unsupportedEndpoints"] == 1 + assert metrics["networkVersion"] == "10.3.58" diff --git a/unifi/README.md b/unifi/README.md index a718a0c..2b6d6b6 100644 --- a/unifi/README.md +++ b/unifi/README.md @@ -78,3 +78,7 @@ When `--all-sites` is used, outputs are separated by saved profile: - `unifi/reports/sites/site1/` - `unifi/reports/sites/site_run_manifest.json` - `unifi/reports/sites/index.html` + +The multi-site index includes per-profile device/client counts, configuration +counts, telemetry probe availability, endpoint errors, and unsupported endpoint +notes when a collection summary is available. 
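The manifest fields exercised by these tests (`profiles`, `collectionStatus`, `reportStatus`) are enough for simple downstream automation. A minimal sketch of a consumer that flags profiles needing attention — `flag_profiles` is a hypothetical helper written for illustration, not part of this patch, and it assumes only the manifest shape the tests above exercise:

```python
import json
import tempfile
from pathlib import Path


def flag_profiles(manifest_path: Path) -> list[str]:
    """Return one line per profile whose collection or report did not finish cleanly."""
    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
    flagged = []
    for profile in manifest.get("profiles", []):
        if not isinstance(profile, dict):
            continue
        collection = str(profile.get("collectionStatus") or "unknown")
        report = str(profile.get("reportStatus") or "unknown")
        if collection != "ok" or report != "ok":
            name = profile.get("name") or profile.get("profile") or "?"
            flagged.append(f"{name}: collection={collection}, report={report}")
    return flagged


# Demo against the same manifest shape test_unifi_site_index_marks_failed_profiles uses.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "site_run_manifest.json"
    path.write_text(json.dumps({
        "ok": False,
        "profiles": [
            {"profile": "site1", "name": "First Campus",
             "collectionStatus": "ok", "reportStatus": "ok"},
            {"profile": "site2", "name": "Second Campus",
             "collectionStatus": "failed", "reportStatus": "missing_backup"},
        ],
    }), encoding="utf-8")
    flagged = flag_profiles(path)
    # flagged -> ["Second Campus: collection=failed, report=missing_backup"]
```

Such a consumer could also mirror `run_sites.py`'s own convention of exiting non-zero when `manifest["ok"]` is false.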
diff --git a/unifi/run_sites.py b/unifi/run_sites.py index 71e3606..4d59427 100644 --- a/unifi/run_sites.py +++ b/unifi/run_sites.py @@ -17,6 +17,13 @@ ROOT = Path(__file__).resolve().parents[1] +def _load_json(path: Path, default: object) -> object: + try: + return json.loads(path.read_text(encoding="utf-8")) + except Exception: + return default + + def _relative_href(path: Path, base: Path) -> str: try: rel = os.path.relpath(path.resolve(), base.resolve()) @@ -31,6 +38,81 @@ def _status_badge(value: object) -> str: return f'{html.escape(raw)}' +def _int(value: object) -> int: + try: + return int(value or 0) + except (TypeError, ValueError): + return 0 + + +def _profile_summary_metrics(backups_dir: Path) -> Dict[str, object]: + summary = _load_json(backups_dir / "collection_summary.json", {}) + if not isinstance(summary, dict): + return {} + net = summary.get("networkApplication") if isinstance(summary.get("networkApplication"), dict) else {} + sm = summary.get("siteManager") if isinstance(summary.get("siteManager"), dict) else {} + site_summaries = net.get("siteSummaries") if isinstance(net.get("siteSummaries"), list) else [] + aggregate = { + "sites": len(site_summaries) or _int((net.get("counts") or {}).get("sites") if isinstance(net.get("counts"), dict) else 0), + "devices": 0, + "clients": 0, + "networks": 0, + "wifi": 0, + "firewallPolicies": 0, + "firewallZones": 0, + "telemetryProbeAvailable": 0, + "telemetryProbeTotal": 0, + "endpointErrors": len(sm.get("errors") or []) + len(net.get("errors") or []), + "unsupportedEndpoints": len(sm.get("unsupportedEndpoints") or []) + len(net.get("unsupportedEndpoints") or []), + } + for site in site_summaries: + if not isinstance(site, dict) or not isinstance(site.get("counts"), dict): + continue + counts = site["counts"] + aggregate["devices"] += _int(counts.get("devices")) + aggregate["clients"] += _int(counts.get("clients")) + aggregate["networks"] += _int(counts.get("networks")) + aggregate["wifi"] += 
_int(counts.get("wifi")) + aggregate["firewallPolicies"] += _int(counts.get("firewall_policies")) + aggregate["firewallZones"] += _int(counts.get("firewall_zones")) + aggregate["telemetryProbeAvailable"] += _int(counts.get("telemetry_probe_available")) + aggregate["telemetryProbeTotal"] += _int(counts.get("telemetry_probe_total")) + + info_file = (net.get("files") or {}).get("info") if isinstance(net.get("files"), dict) else "" + if info_file: + info = _load_json(backups_dir / str(info_file), {}) + if isinstance(info, dict) and info.get("applicationVersion"): + aggregate["networkVersion"] = str(info.get("applicationVersion")) + return aggregate + + +def _metric_value(profile: Dict[str, object], key: str) -> str: + metrics = profile.get("summaryMetrics") if isinstance(profile.get("summaryMetrics"), dict) else {} + value = metrics.get(key) + return "" if value is None else str(value) + + +def _config_summary(profile: Dict[str, object]) -> str: + networks = _metric_value(profile, "networks") or "0" + wifi = _metric_value(profile, "wifi") or "0" + policies = _metric_value(profile, "firewallPolicies") or "0" + return f"{networks} net / {wifi} WiFi / {policies} FW" + + +def _telemetry_summary(profile: Dict[str, object]) -> str: + available = _metric_value(profile, "telemetryProbeAvailable") + total = _metric_value(profile, "telemetryProbeTotal") + if total: + return f"{available or '0'} / {total}" + return "" + + +def _coverage_summary(profile: Dict[str, object]) -> str: + errors = _metric_value(profile, "endpointErrors") or "0" + unsupported = _metric_value(profile, "unsupportedEndpoints") or "0" + return f"{errors} errors / {unsupported} notes" + + def build_site_index_html(manifest: Dict[str, object], reports_root: Path, generated_at: datetime | None = None) -> str: generated = generated_at or datetime.now(timezone.utc) profiles = [profile for profile in manifest.get("profiles", []) if isinstance(profile, dict)] @@ -53,6 +135,11 @@ def build_site_index_html(manifest: 
Dict[str, object], reports_root: Path, gener
             f"<td>{html.escape(str(profile.get('profile') or ''))}</td>"
             f"<td>{_status_badge(profile.get('collectionStatus'))}</td>"
             f"<td>{_status_badge(profile.get('reportStatus'))}</td>"
+            f"<td>{html.escape(_metric_value(profile, 'devices'))}</td>"
+            f"<td>{html.escape(_metric_value(profile, 'clients'))}</td>"
+            f"<td>{html.escape(_config_summary(profile))}</td>"
+            f"<td>{html.escape(_telemetry_summary(profile))}</td>"
+            f"<td>{html.escape(_coverage_summary(profile))}</td>"
             f"<td>{report_link}</td>"
             f"<td>{inventory_link}</td>"
             "</tr>"
@@ -99,7 +186,7 @@ def build_site_index_html(manifest: Dict[str, object], reports_root: Path, gener
 <section>
 <p>Saved UniFi profile report outputs for this run.</p>
 <table>
-<thead><tr><th>Site</th><th>Profile</th><th>Collection</th><th>Report</th><th>PDF</th><th>Inventory</th></tr></thead>
+<thead><tr><th>Site</th><th>Profile</th><th>Collection</th><th>Report</th><th>Devices</th><th>Clients</th><th>Config</th><th>Telemetry</th><th>Coverage</th><th>PDF</th><th>Inventory</th></tr></thead>
 <tbody>
 {''.join(rows)}
    @@ -188,6 +275,7 @@ def _run_one(profile: UniFiSiteProfile, args: argparse.Namespace) -> Dict[str, o result["collectionStatus"] = "ok" if collect_status == 0 else "failed" if (backups_dir / "collection_summary.json").exists(): + result["summaryMetrics"] = _profile_summary_metrics(backups_dir) try: paths = report.build_report(str(backups_dir), str(reports_dir)) if args.pdf_only and paths.get("pdf"): From a068efd33d805502ca016e010861f227792d32aa Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 00:04:32 -0400 Subject: [PATCH 31/47] Share UniFi index styling --- tests/test_unifi_inventory.py | 2 ++ tests/test_unifi_run_sites.py | 2 ++ unifi/inventory.py | 19 +++---------------- unifi/run_sites.py | 18 ++---------------- unifi/style.py | 19 +++++++++++++++++++ 5 files changed, 28 insertions(+), 32 deletions(-) create mode 100644 unifi/style.py diff --git a/tests/test_unifi_inventory.py b/tests/test_unifi_inventory.py index 7c60885..f10ece9 100644 --- a/tests/test_unifi_inventory.py +++ b/tests/test_unifi_inventory.py @@ -21,6 +21,8 @@ def test_unifi_inventory_requires_pdf_and_writes_index(tmp_path: Path): assert {item["label"]: item["ok"] for item in manifest["items"]}["report_pdf"] is True assert {item["label"]: item["required"] for item in manifest["items"]}["report_html"] is False assert "TM UniFi Report Inventory" in index + assert "max-width: 1180px" in index + assert "margin: 16px 0" in index assert "report.pdf" in index assert "collection_summary.json" in index diff --git a/tests/test_unifi_run_sites.py b/tests/test_unifi_run_sites.py index 04f1145..2874922 100644 --- a/tests/test_unifi_run_sites.py +++ b/tests/test_unifi_run_sites.py @@ -37,6 +37,8 @@ def test_unifi_site_index_links_profile_reports(tmp_path: Path): html = build_site_index_html(manifest, reports_root, datetime(2026, 5, 5, tzinfo=timezone.utc)) assert "TM UniFi Site Reports" in html + assert "max-width: 1180px" in html + assert "margin: 16px 0" in html assert "First Campus" 
in html assert 'href="First_Campus/report.pdf"' in html assert 'href="First_Campus/index.html"' in html diff --git a/unifi/inventory.py b/unifi/inventory.py index e023b8e..e62c3bf 100644 --- a/unifi/inventory.py +++ b/unifi/inventory.py @@ -7,6 +7,8 @@ from pathlib import Path from typing import Dict, List +from .style import index_css + ROOT = Path(__file__).resolve().parents[1] @@ -100,22 +102,7 @@ def write_index(manifest: Dict[str, object], reports: Path) -> Path: TM UniFi Report Inventory diff --git a/unifi/run_sites.py b/unifi/run_sites.py index 4d59427..5793f26 100644 --- a/unifi/run_sites.py +++ b/unifi/run_sites.py @@ -12,6 +12,7 @@ from . import collect, report from .profiles import UniFiSiteProfile, discover_site_profiles, profile_by_key +from .style import index_css ROOT = Path(__file__).resolve().parents[1] @@ -155,22 +156,7 @@ def build_site_index_html(manifest: Dict[str, object], reports_root: Path, gener TM UniFi Site Reports diff --git a/unifi/style.py b/unifi/style.py new file mode 100644 index 0000000..3b76e35 --- /dev/null +++ b/unifi/style.py @@ -0,0 +1,19 @@ +def index_css(max_width: int = 1180) -> str: + return """ :root { color-scheme: light; } + body { font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif; margin: 32px; color: #172033; background: #f7f8fb; } + main { max-width: __MAX_WIDTH__px; margin: 0 auto; } + header { margin-bottom: 24px; } + h1 { margin: 0 0 6px; font-size: 28px; } + h2 { margin: 0 0 4px; font-size: 20px; } + p { margin: 0 0 14px; color: #526071; } + section { background: #fff; border: 1px solid #d9dee8; border-radius: 8px; padding: 18px; margin: 16px 0; overflow-x: auto; } + table { width: 100%; border-collapse: collapse; font-size: 14px; } + th, td { border-bottom: 1px solid #e7ebf2; padding: 8px 10px; text-align: left; vertical-align: top; } + th { color: #526071; font-size: 12px; text-transform: uppercase; letter-spacing: .02em; } + a { color: #185abc; text-decoration: none; } + a:hover { 
text-decoration: underline; } + .meta { display: flex; gap: 14px; flex-wrap: wrap; font-size: 14px; color: #526071; } + .status { display: inline-block; border-radius: 999px; padding: 2px 8px; font-size: 12px; font-weight: 700; } + .ok { background: #e7f5ec; color: #176a35; } + .warn, .optional { background: #fff7db; color: #755600; } + .bad, .missing { background: #fde8e8; color: #a62121; }""".replace("__MAX_WIDTH__", str(max_width)) From 81aa8ecaf57959b078bcb4db08d5cbc0c9823592 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 06:49:28 -0400 Subject: [PATCH 32/47] Make UniFi report assessment oriented --- tests/test_unifi_report.py | 8 + unifi/report.py | 663 +++++++++++++++++++++++++++++++++++-- 2 files changed, 645 insertions(+), 26 deletions(-) diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 60f2e90..392fda9 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -121,6 +121,14 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "NATIVE" in html assert "Firewall Zones" in html assert "Recommended Follow-Up" in html + assert "Current State Assessment" in html + assert "Top Operational Risks" in html + assert "Recommended Priorities" in html + assert "Data Confidence Snapshot" in html + assert "Health at a Glance" in html + assert "How to Use This Report" in html + assert "Security Baseline" in html + assert "Port and radio diagnostics are low-confidence" in html assert "By Model" in html assert "Client Load by Uplink" in html assert "Firewall Policy Summary" in html diff --git a/unifi/report.py b/unifi/report.py index 2230e7e..d194f82 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -5,7 +5,7 @@ import os import shutil import subprocess -from datetime import datetime +from datetime import datetime, timezone from pathlib import Path from typing import Any, Dict, Iterable, List @@ -137,6 +137,33 @@ def _plural(count: int, singular: str, plural: str | None = 
None) -> str: return f"{count} {word}" +def _pct(part: int, total: int) -> str: + if total <= 0: + return "0%" + return f"{round((part / total) * 100)}%" + + +def _parse_datetime(value: Any) -> datetime | None: + if value in (None, ""): + return None + raw = str(value).strip() + if raw.endswith("Z"): + raw = raw[:-1] + "+00:00" + try: + parsed = datetime.fromisoformat(raw) + except ValueError: + return None + if parsed.tzinfo is None: + parsed = parsed.replace(tzinfo=timezone.utc) + return parsed + + +def _days_between(start: datetime | None, end: datetime) -> int | None: + if not start: + return None + return max(0, int((end - start).total_seconds() // 86400)) + + def _model_rows(devices: Iterable[Dict[str, Any]]) -> List[List[Any]]: counts: Dict[tuple[str, str], int] = {} for device in devices: @@ -301,6 +328,26 @@ def _summary_cards(cards: List[tuple[str, Any]]) -> str: ) + "
    " +def _health_cards(cards: List[tuple[str, str, str, str]]) -> str: + return "
    " + "".join( + ( + f"
    " + f"
    {html.escape(domain)}
    " + f"
    {html.escape(stat)}
    " + f"
    {html.escape(detail)}
    " + "
    " + ) + for status, domain, stat, detail in cards + ) + "
    " + + +def _html_list(items: List[str], *, ordered: bool = False) -> str: + if not items: + return "

    No findings generated.

    " + tag = "ol" if ordered else "ul" + return f"<{tag}>" + "".join(f"
  • {html.escape(item)}
  • " for item in items) + f"" + + def _read_site_file(source: Path, site_summary: Dict[str, Any], key: str) -> List[Dict[str, Any]]: rel = (site_summary.get("files") or {}).get(key) return _items(_load_json(source / rel, [])) if rel else [] @@ -396,6 +443,204 @@ def _auth_guidance(sm: Dict[str, Any], net: Dict[str, Any]) -> List[str]: return sorted(set(guidance)) +def _site_health_rows(site_summaries: List[Dict[str, Any]], all_devices: List[Dict[str, Any]]) -> List[List[Any]]: + if not site_summaries: + online = sum(1 for device in all_devices if _is_online(device)) + total = len(all_devices) + return [["Cloud / account", total, f"{online} / {total} ({_pct(online, total)})", total - online, _fmt_counts(_count_by(all_devices, _device_role))]] + + rows: List[List[Any]] = [] + for site in site_summaries: + counts = site.get("counts") if isinstance(site.get("counts"), dict) else {} + site_devices = int(counts.get("devices") or 0) + rows.append( + [ + site.get("name") or site.get("id") or "Site", + site_devices, + str(counts.get("clients") or 0), + str(counts.get("networks") or 0), + str(counts.get("wifi") or 0), + str(counts.get("firewall_policies") or 0), + ] + ) + return rows + + +def _infrastructure_rows(role_counts: Dict[str, int]) -> List[List[Any]]: + return [ + [ + "WAN / Edge", + "UniFi Gateway", + role_counts.get("Gateway", 0), + "Internet gateway, routing, firewall policy enforcement, VPN termination, and network services where enabled.", + ], + [ + "Distribution / Access", + "UniFi Switch", + role_counts.get("Switch", 0), + "Wired LAN switching, VLAN attachment, uplinks, and PoE edge connectivity. Port-level telemetry depends on API availability.", + ], + [ + "Wireless", + "UniFi Access Point", + role_counts.get("Access Point", 0), + "WiFi client access, SSID broadcast, roaming behavior, and RF capacity. 
Radio/channel telemetry depends on API availability.", + ], + ] + + +def _telemetry_gap_summary(telemetry_probes: List[Dict[str, Any]]) -> str: + if not telemetry_probes: + return "No detailed port/radio telemetry probes were captured." + available = sum(1 for probe in telemetry_probes if probe.get("available")) + total = len(telemetry_probes) + if available: + return f"{available} of {total} telemetry probe endpoint(s) returned data." + statuses = sorted({_probe_status_label(probe) for probe in telemetry_probes}) + return f"0 of {total} telemetry probe endpoint(s) returned data; observed statuses: {', '.join(statuses)}." + + +def _wifi_security_weak(wifi: Iterable[Dict[str, Any]]) -> List[str]: + weak: List[str] = [] + for wlan in wifi: + security = _wifi_security_label(wlan).upper() + if "OPEN" in security or "NONE" in security: + weak.append(f"{_first(wlan, ('name', 'ssid'), 'Unnamed SSID')} uses open/no-auth wireless security.") + elif "WPA2_PERSONAL" in security or security in {"WPA2", "PSK"}: + weak.append(f"{_first(wlan, ('name', 'ssid'), 'Unnamed SSID')} uses WPA2 Personal; consider WPA3, private pre-shared keys, or 802.1X where appropriate.") + return weak + + +def _legacy_ap_models(devices: Iterable[Dict[str, Any]]) -> List[str]: + legacy: List[str] = [] + for device in devices: + if _device_role(device) != "Access Point": + continue + model = _device_model(device) + model_l = model.lower() + if any(token in model_l for token in ("ac ", "ac-", "ac pro", "iw hd", "nano", "hd")) and not any(token in model_l for token in ("u6", "u7")): + legacy.append(f"{_device_name(device)} ({model})") + return legacy + + +def _client_age_buckets(clients: Iterable[Dict[str, Any]], now: datetime) -> Dict[str, int]: + buckets = {"0-7 days": 0, "8-30 days": 0, "31+ days": 0, "unknown": 0} + for client in clients: + seen = _parse_datetime(_first(client, ("connectedAt", "lastSeen"))) + days = _days_between(seen, now) + if days is None: + buckets["unknown"] += 1 + elif days <= 
7: + buckets["0-7 days"] += 1 + elif days <= 30: + buckets["8-30 days"] += 1 + else: + buckets["31+ days"] += 1 + return buckets + + +def _top_risks( + *, + all_devices: List[Dict[str, Any]], + all_clients: List[Dict[str, Any]], + all_wifi: List[Dict[str, Any]], + all_firewall_policies: List[Dict[str, Any]], + all_dns_policies: List[Dict[str, Any]], + telemetry_probes: List[Dict[str, Any]], + errors: List[Dict[str, Any]], +) -> List[str]: + risks: List[str] = [] + offline = [_device_name(device) for device in all_devices if not _is_online(device)] + if offline: + verb = "reports" if len(offline) == 1 else "report" + risks.append(f"Device availability requires attention - {_plural(len(offline), 'device')} {verb} offline or inactive: {', '.join(offline[:6])}.") + + if telemetry_probes and not any(probe.get("available") for probe in telemetry_probes): + risks.append("Port and radio diagnostics are low-confidence - this controller/API path did not expose switch-port or AP-radio telemetry, so PoE draw, RF interference, channel utilization, and port speed cannot be validated from this backup alone.") + + risks.extend(_wifi_security_weak(all_wifi)[:3]) + + if all_firewall_policies: + logging_disabled = sum(1 for policy in all_firewall_policies if not _as_bool(policy.get("loggingEnabled"))) + if logging_disabled: + risks.append(f"Firewall visibility may be limited - {_plural(logging_disabled, 'captured firewall policy', 'captured firewall policies')} have logging disabled.") + else: + risks.append("No firewall policies were captured; do not treat this run as a complete security backup until policy endpoint access is validated.") + + if not all_dns_policies: + risks.append("No DNS policies were captured; confirm whether DNS filtering is intentionally unused or unavailable from this API surface.") + + if errors: + risks.append(f"Collection has {_plural(len(errors), 'hard endpoint error')} that should be resolved before using this as a final documentation package.") + + if 
not all_clients: + risks.append("Client visibility is absent, limiting capacity planning and migration sizing.") + return risks or ["No high-priority risks were generated from the captured UniFi data."] + + +def _recommended_priorities( + *, + all_devices: List[Dict[str, Any]], + all_wifi: List[Dict[str, Any]], + telemetry_probes: List[Dict[str, Any]], + all_firewall_policies: List[Dict[str, Any]], + all_dns_policies: List[Dict[str, Any]], +) -> List[str]: + priorities: List[str] = [] + if any(not _is_online(device) for device in all_devices): + priorities.append("Immediate (0-2 weeks): Validate offline UniFi devices against physical inventory, power, uplinks, and controller adoption state.") + if telemetry_probes and not any(probe.get("available") for probe in telemetry_probes): + priorities.append("Immediate (0-2 weeks): Decide whether deeper diagnostics require Site Manager metrics, UniFi system log/SIEM export, SSH/local controller export, or manual screenshots because the Integration API did not expose port/radio telemetry.") + if _wifi_security_weak(all_wifi): + priorities.append("Short-term (2-6 weeks): Review SSID security and migrate appropriate production WLANs toward WPA3, private PSK, or 802.1X instead of shared WPA2 Personal.") + if all_firewall_policies and any(not _as_bool(policy.get("loggingEnabled")) for policy in all_firewall_policies): + priorities.append("Short-term (2-6 weeks): Enable logging on security-relevant block/allow policies where event volume is acceptable.") + if not all_dns_policies: + priorities.append("Medium-term (6-12 weeks): Confirm DNS/security filtering requirements and document whether UniFi DNS policies, upstream filtering, or a separate security stack owns that control.") + priorities.append("Long-term (3-6 months): Build a refresh plan from active devices only, separating replacement candidates from offline/retired inventory.") + return priorities + + +def _data_confidence_rows( + *, + all_devices: List[Dict[str, Any]], + 
all_clients: List[Dict[str, Any]], + network_count: int, + firewall_policy_count: int, + telemetry_probes: List[Dict[str, Any]], + all_wans: List[Dict[str, Any]], +) -> List[List[Any]]: + telemetry_available = sum(1 for probe in telemetry_probes if probe.get("available")) + return [ + ["Inventory and device status", "High" if all_devices else "Low", f"{_plural(len(all_devices), 'device record')} captured with controller state."], + ["Client attachment detail", "High" if all_clients else "Low", f"{_plural(len(all_clients), 'client record')} captured with uplink mapping where present."], + ["VLAN/network definitions", "Medium" if network_count else "Low", f"{_plural(network_count, 'network/VLAN definition')} captured; subnet/DHCP detail depends on API fields exposed by this controller."], + ["Firewall policy backup", "High" if firewall_policy_count else "Low", f"{_plural(firewall_policy_count, 'policy', 'policies')} captured."], + ["WAN detail", "Low" if all_wans else "Not captured", f"{_plural(len(all_wans), 'WAN record')} captured; current endpoint only exposed labels in this run."], + ["Port and radio telemetry", "Low" if telemetry_available == 0 else "Medium", _telemetry_gap_summary(telemetry_probes)], + ] + + +def _security_baseline_rows( + *, + all_wifi: List[Dict[str, Any]], + all_firewall_policies: List[Dict[str, Any]], + all_dns_policies: List[Dict[str, Any]], + all_radius: List[Dict[str, Any]], + network_count: int, +) -> List[List[Any]]: + weak_wifi = _wifi_security_weak(all_wifi) + logging_enabled = sum(1 for policy in all_firewall_policies if _as_bool(policy.get("loggingEnabled"))) + return [ + ["Network segmentation", "Review" if network_count <= 2 else "Present", f"{_plural(network_count, 'network/VLAN definition')} captured."], + ["Wireless authentication", "Review" if weak_wifi else ("Present" if all_wifi else "Missing"), "; ".join(weak_wifi[:2]) if weak_wifi else f"{_plural(len(all_wifi), 'SSID')} captured."], + ["Firewall rules", "Present" if 
all_firewall_policies else "Missing", f"{_plural(len(all_firewall_policies), 'policy', 'policies')} captured."], + ["Firewall logging", "Review" if all_firewall_policies and logging_enabled < len(all_firewall_policies) else "Present", f"{logging_enabled} of {len(all_firewall_policies)} policies have logging enabled."], + ["DNS filtering policy", "Missing" if not all_dns_policies else "Present", f"{_plural(len(all_dns_policies), 'DNS policy', 'DNS policies')} captured."], + ["RADIUS / identity", "Present" if all_radius else "Not captured", f"{_plural(len(all_radius), 'RADIUS profile')} captured."], + ] + + def _executive_followups( *, all_devices: List[Dict[str, Any]], @@ -486,12 +731,36 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: wifi_count = sum(int(counts.get("wifi") or 0) for counts in all_site_counts) firewall_zone_count = sum(int(counts.get("firewall_zones") or 0) for counts in all_site_counts) firewall_policy_count = sum(int(counts.get("firewall_policies") or 0) for counts in all_site_counts) + site_payloads: Dict[str, List[Dict[str, Any]]] = { + key: [] + for key in ( + "networks", + "wifi", + "wans", + "firewall_zones", + "firewall_policies", + "acl_rules", + "traffic_lists", + "dns_policies", + "radius", + ) + } + for site in site_summaries: + for key in site_payloads: + site_payloads[key].extend(_read_site_file(source, site, key)) enabled_firewall_policy_count = 0 for site in site_summaries: enabled_firewall_policy_count += sum(1 for policy in _read_site_file(source, site, "firewall_policies") if _as_bool(policy.get("enabled"))) errors = list(sm.get("errors") or []) + list(net.get("errors") or []) unsupported = list(sm.get("unsupportedEndpoints") or []) + list(net.get("unsupportedEndpoints") or []) device_names = _build_device_name_map(all_devices) + collected_at = _parse_datetime(metadata.get("collectedAt")) or datetime.now(timezone.utc) + online_devices = sum(1 for device in all_devices if _is_online(device)) + 
offline_devices = len(all_devices) - online_devices + updatable_devices = sum(1 for device in all_devices if _as_bool(device.get("firmwareUpdatable"))) + telemetry_available = sum(1 for probe in telemetry_probes if probe.get("available")) + legacy_aps = _legacy_ap_models(all_devices) + client_age = _client_age_buckets(all_clients, collected_at) cards = [ ("Sites", len(site_summaries) or len(sm_sites)), ("Devices", len(all_devices)), @@ -505,8 +774,41 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: ] sections: List[str] = [] - sections.append("

<section><h2>Executive Summary</h2>") + sections.append("<section><h2>1. Executive Summary</h2>
    ") + current_state = ( + f"This UniFi assessment covers {len(site_summaries) or len(sm_sites) or 1} site(s) with " + f"{_plural(len(all_devices), 'captured UniFi device')} and {_plural(len(all_clients), 'client record')}. " + f"{online_devices} of {len(all_devices)} devices ({_pct(online_devices, len(all_devices))}) report online. " + "The report emphasizes actionable configuration and client visibility, while explicitly calling out telemetry gaps where the UniFi API did not expose switch-port or AP-radio metrics." + ) + sections.append(f"
<h3>Current State Assessment</h3><p>{html.escape(current_state)}</p>
    ") sections.append(_summary_cards(cards)) + top_risks = _top_risks( + all_devices=all_devices, + all_clients=all_clients, + all_wifi=site_payloads["wifi"], + all_firewall_policies=site_payloads["firewall_policies"], + all_dns_policies=site_payloads["dns_policies"], + telemetry_probes=telemetry_probes, + errors=errors, + ) + sections.append("

<h3>Top Operational Risks</h3>
    ") + sections.append(_html_list(top_risks)) + sections.append("

<h3>Recommended Priorities</h3>
    ") + sections.append( + _html_list( + _recommended_priorities( + all_devices=all_devices, + all_wifi=site_payloads["wifi"], + telemetry_probes=telemetry_probes, + all_firewall_policies=site_payloads["firewall_policies"], + all_dns_policies=site_payloads["dns_policies"], + ), + ordered=True, + ) + ) + sections.append("

<h3>Infrastructure Inventory</h3>
    ") + sections.append(_table(["Layer", "Device Type", "Count", "Role in Network"], _infrastructure_rows(role_counts))) site_rows = [] for site in site_summaries: counts = site.get("counts") if isinstance(site.get("counts"), dict) else {} @@ -552,9 +854,50 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: wifi_count=wifi_count, ) sections.append("

<h3>Recommended Follow-Up</h3>
    ") - sections.append("
      " + "".join(f"
    • {html.escape(item)}
    • " for item in followups) + "
    ") + sections.append(_html_list(followups)) + sections.append("

<h3>Data Confidence Snapshot</h3>
    ") + sections.append( + _table( + ["Data Area", "Confidence", "Interpretation"], + _data_confidence_rows( + all_devices=all_devices, + all_clients=all_clients, + network_count=network_count, + firewall_policy_count=firewall_policy_count, + telemetry_probes=telemetry_probes, + all_wans=site_payloads["wans"], + ), + ) + ) + health_cards = [ + ("crit" if offline_devices else "good", "Availability", f"{_pct(online_devices, len(all_devices))} online", f"{online_devices} online / {offline_devices} offline"), + ("warn" if legacy_aps else "good", "Wireless", f"{role_counts.get('Access Point', 0)} APs", f"{len(legacy_aps)} legacy candidate(s)"), + ("warn" if telemetry_available == 0 and telemetry_probes else "good", "Port / RF Telemetry", f"{telemetry_available}/{len(telemetry_probes)} probes", "Port/radio detail availability"), + ("good" if site_payloads["firewall_policies"] else "warn", "Firewall Backup", f"{enabled_firewall_policy_count} enabled", f"{firewall_policy_count} captured policies"), + ("warn" if _wifi_security_weak(site_payloads["wifi"]) else "good", "WiFi Security", f"{wifi_count} SSID", "Authentication posture"), + ("info", "Clients", str(len(all_clients)), f"{client_age['31+ days']} stale over 30 days"), + ("warn" if updatable_devices else "good", "Firmware", f"{updatable_devices} updates", "Controller update flag"), + ("warn" if not site_payloads["dns_policies"] else "good", "DNS Policy", str(len(site_payloads["dns_policies"])), "No DNS policies captured" if not site_payloads["dns_policies"] else "Captured DNS controls"), + ] + sections.append("

<h3>Health at a Glance</h3>
    ") + sections.append(_health_cards(health_cards)) + sections.append("
    ") + + sections.append("

<section><h2>Guide. How to Use This Report</h2>
    ") + sections.append( + _table( + ["Reader", "Start Here", "Why"], + [ + ["Leadership / Finance", "Executive Summary and Recommended Priorities", "Shows the largest risks, follow-up actions, and where current data is strong or weak."], + ["IT Operations", "Device Inventory, Client Visibility, and Sites / VLANs", "Connects inventory, clients, VLANs, and operational symptoms without relying on unavailable telemetry."], + ["Security / Compliance", "Security Baseline and Firewall Policy Backup", "Documents firewall, DNS, RADIUS, SSID, and policy evidence captured by the UniFi API."], + ["Implementation Team", "Configuration Backup Completeness and Raw Backup Files", "Shows which JSON files can support disaster recovery or migration planning."], + ], + ) + ) + sections.append("
    ") - sections.append("

<section><h2>Collection Coverage</h2>") + sections.append("<section><h2>2. Collection Coverage</h2>
    ") rows = [ ["Requested mode", metadata.get("requestedMode", "")], ["Effective mode", metadata.get("effectiveMode", "")], @@ -577,7 +920,12 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append("
      " + "".join(f"
    • {html.escape(item)}
    • " for item in auth_guidance) + "
    ") sections.append("
    ") - sections.append("

<section><h2>Configuration Backup Completeness</h2>") + sections.append("<section><h2>3. Network Overview</h2>
    ") + sections.append("

<p>This section gives the operations view of captured sites before the lower-level backup tables. Client and configuration counts are useful for migration planning even when detailed switch-port and AP-radio telemetry is unavailable.</p>
    ") + sections.append(_table(["Site", "Devices", "Clients", "Networks", "WiFi", "Firewall Policies"], _site_health_rows(site_summaries, all_devices), "No site summary captured.")) + sections.append("
    ") + + sections.append("

<section><h2>4. Configuration Backup Completeness</h2>
    ") if site_summaries: for site in site_summaries: sections.append(f"

<h3>{html.escape(str(site.get('name') or site.get('id') or 'Site'))}</h3>
    ") @@ -592,7 +940,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append("

<p>No local Network Application site backup detail captured.</p>
    ") sections.append("
    ") - sections.append("

<section><h2>Device Inventory</h2>") + sections.append("<section><h2>5. Device Health &amp; Inventory</h2>
    ") role_rows = [[k, v] for k, v in role_counts.items()] status_rows = [[k, v] for k, v in status_counts.items()] sections.append("

<h3>By Role</h3>
    " + _table(["Role", "Count"], role_rows) + "
    ") @@ -630,7 +978,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(["Name", "Role", "Model", "Status", "Update", "IP", "MAC / ID", "Firmware"], device_rows)) sections.append("
    ") - sections.append("

<section><h2>Sites, Networks, VLANs, and DHCP</h2>") + sections.append("<section><h2>6. Sites, Networks, VLANs, and DHCP</h2>
    ") for site in site_summaries: sections.append(f"

<h3>{html.escape(str(site.get('name') or site.get('id') or 'Site'))}</h3>
    ") networks = _read_site_file(source, site, "networks") @@ -653,7 +1001,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append("

<p>No local Network Application site detail captured yet.</p>
    ") sections.append("
    ") - sections.append("

<section><h2>WiFi and Client Visibility</h2>") + sections.append("<section><h2>7. WiFi and Client Visibility</h2>
    ") for site in site_summaries: wifi = _read_site_file(source, site, "wifi") rows = [] @@ -686,7 +1034,23 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(["Name", "Type", "IP", "MAC / ID", "Uplink Device", "Access", "Seen"], client_rows, "No client detail captured.")) sections.append("
    ") - sections.append("

<section><h2>Firewall and Policy Backup</h2>") + sections.append("<section><h2>8. Security Baseline</h2>
    ") + sections.append( + _table( + ["Control Area", "Status", "Evidence / Interpretation"], + _security_baseline_rows( + all_wifi=site_payloads["wifi"], + all_firewall_policies=site_payloads["firewall_policies"], + all_dns_policies=site_payloads["dns_policies"], + all_radius=site_payloads["radius"], + network_count=network_count, + ), + ) + ) + sections.append("

<p>Security baseline rows are assessment cues from captured configuration, not a substitute for a full policy review. Missing rows may mean the control is implemented outside UniFi or not exposed by this API path.</p>
    ") + sections.append("
    ") + + sections.append("

<section><h2>9. Firewall and Policy Backup</h2>
    ") for site in site_summaries: sections.append(f"

<h3>{html.escape(str(site.get('name') or 'Site'))}</h3>
    ") zones = _read_site_file(source, site, "firewall_zones") @@ -730,7 +1094,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(headers, rows, f"No {label.lower()} endpoint data captured.")) sections.append("
    ") - sections.append("

<section><h2>Raw Backup Files</h2>") + sections.append("<section><h2>10. Raw Backup Files</h2>
    ") files = sorted(str(p.relative_to(source)) for p in source.rglob("*.json")) sections.append(_table(["JSON backup"], [[f] for f in files], "No JSON backup files found.")) sections.append("
    ") @@ -752,37 +1116,284 @@ def _html_shell(title: str, body: str, metadata: Dict[str, Any]) -> str: {html.escape(title)}
-    <div>TM UniFi Baseline</div>
-    <h1>UniFi Network Report</h1>
-    <p>Inventory, configuration backup coverage, client visibility, and migration planning inputs.</p>
-    <p>Collected: {html.escape(str(collected))}</p>
+    <header>
+      <div>
+        <div>Techmore</div>
+        <h1>UniFi Network Health &amp; Backup Report</h1>
+        <p>TM UniFi Baseline</p>
+        <p>Collected: {html.escape(str(collected))}</p>
+      </div>
+      <div><span>Confidential</span> <span>Release {release}</span></div>
+    </header>
+    <nav>
+      <h2>Table of Contents</h2>
+      <ol>
+        <li><span>1</span>Executive Summary</li>
+        <li><span>Guide</span>How to Use This Report</li>
+        <li><span>2</span>Collection Coverage</li>
+        <li><span>3</span>Network Overview</li>
+        <li><span>4</span>Configuration Backup Completeness</li>
+        <li><span>5</span>Device Health &amp; Inventory</li>
+        <li><span>6</span>Sites, Networks, VLANs, and DHCP</li>
+        <li><span>7</span>WiFi and Client Visibility</li>
+        <li><span>8</span>Security Baseline</li>
+        <li><span>9</span>Firewall and Policy Backup</li>
+        <li><span>10</span>Raw Backup Files</li>
+      </ol>
+    </nav>
    {body} From 3f56f5ad910923fe5157c3e9d16332dee084f8a7 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 06:53:08 -0400 Subject: [PATCH 33/47] Add UniFi companion report outputs --- tests/test_unifi_inventory.py | 10 +++ tests/test_unifi_report.py | 9 +++ tests/test_unifi_run_sites.py | 4 + unifi/inventory.py | 4 + unifi/report.py | 136 +++++++++++++++++++++++++++++----- unifi/run_sites.py | 14 +++- 6 files changed, 156 insertions(+), 21 deletions(-) diff --git a/tests/test_unifi_inventory.py b/tests/test_unifi_inventory.py index f10ece9..7805c6d 100644 --- a/tests/test_unifi_inventory.py +++ b/tests/test_unifi_inventory.py @@ -11,6 +11,8 @@ def test_unifi_inventory_requires_pdf_and_writes_index(tmp_path: Path): reports.mkdir() (backups / "collection_summary.json").write_text("{}", encoding="utf-8") (reports / "report.pdf").write_bytes(b"%PDF-1.4\n") + (reports / "report_exec_summary.pdf").write_bytes(b"%PDF-1.4\n") + (reports / "report_backup_settings.pdf").write_bytes(b"%PDF-1.4\n") assert inventory.main(["--backups-dir", str(backups), "--reports-dir", str(reports)]) == 0 @@ -19,11 +21,15 @@ def test_unifi_inventory_requires_pdf_and_writes_index(tmp_path: Path): assert manifest["ok"] is True assert {item["label"]: item["ok"] for item in manifest["items"]}["report_pdf"] is True + assert {item["label"]: item["ok"] for item in manifest["items"]}["report_exec_summary_pdf"] is True + assert {item["label"]: item["ok"] for item in manifest["items"]}["report_backup_settings_pdf"] is True assert {item["label"]: item["required"] for item in manifest["items"]}["report_html"] is False assert "TM UniFi Report Inventory" in index assert "max-width: 1180px" in index assert "margin: 16px 0" in index assert "report.pdf" in index + assert "report_exec_summary.pdf" in index + assert "report_backup_settings.pdf" in index assert "collection_summary.json" in index @@ -42,4 +48,8 @@ def test_unifi_inventory_fails_missing_pdf(tmp_path: Path): assert manifest["ok"] is 
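The required-versus-optional artifact pattern exercised by these inventory tests can be sketched standalone. This is a simplified stand-in for `unifi/inventory.build_manifest`, not the shipped function; the labels and fixture paths mirror the tests above, and the exec/backup PDF checks are omitted for brevity:

```python
import json
import tempfile
from pathlib import Path
from typing import Dict, List

def build_manifest(backups: Path, reports: Path) -> Dict[str, object]:
    # Each check is (label, path, required); the manifest fails only when a
    # required artifact is missing, so optional HTML can be cleaned up freely.
    checks = [
        ("collection_summary", backups / "collection_summary.json", True),
        ("report_pdf", reports / "report.pdf", True),
        ("report_html", reports / "report.html", False),
    ]
    items: List[Dict[str, object]] = []
    ok = True
    for label, path, required in checks:
        exists = path.is_file()
        if required and not exists:
            ok = False
        items.append({"label": label, "required": required, "ok": exists})
    return {"ok": ok, "items": items}

tmp = Path(tempfile.mkdtemp())
backups, reports = tmp / "backups", tmp / "reports"
backups.mkdir()
reports.mkdir()
(backups / "collection_summary.json").write_text("{}", encoding="utf-8")
(reports / "report.pdf").write_bytes(b"%PDF-1.4\n")  # report.html deliberately absent
manifest = build_manifest(backups, reports)
print(json.dumps(manifest, indent=2))
```

Because `required` is carried per item, the index page can still list optional artifacts (greyed out) without flipping the run to failed.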
False assert items["report_pdf"]["required"] is True assert items["report_pdf"]["ok"] is False + assert items["report_exec_summary_pdf"]["required"] is True + assert items["report_exec_summary_pdf"]["ok"] is False + assert items["report_backup_settings_pdf"]["required"] is True + assert items["report_backup_settings_pdf"]["ok"] is False assert (reports / "index.html").exists() diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 392fda9..f0b4d0f 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -112,6 +112,8 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): paths = build_report(str(source), str(output)) html = Path(paths["html"]).read_text(encoding="utf-8") + exec_html = Path(paths["exec_html"]).read_text(encoding="utf-8") + backup_html = Path(paths["backup_html"]).read_text(encoding="utf-8") assert "TM UniFi Baseline" in html assert "U7-Pro-1" in html assert "IW HD" in html @@ -148,6 +150,13 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "0 / 2 available" in html assert "captured empty" in html assert "not exposed (HTTP 404)" in html + assert "UniFi Executive Summary" in exec_html + assert "Top Operational Risks" in exec_html + assert "Firewall and Policy Backup" not in exec_html + assert "UniFi Backup Settings Report" in backup_html + assert "Configuration Backup Completeness" in backup_html + assert "Firewall and Policy Backup" in backup_html + assert "Connected Clients" not in backup_html def test_unifi_profiles_discovers_numbered_site_profiles(monkeypatch): diff --git a/tests/test_unifi_run_sites.py b/tests/test_unifi_run_sites.py index 2874922..dc897b6 100644 --- a/tests/test_unifi_run_sites.py +++ b/tests/test_unifi_run_sites.py @@ -9,6 +9,8 @@ def test_unifi_site_index_links_profile_reports(tmp_path: Path): site_dir = reports_root / "First_Campus" site_dir.mkdir(parents=True) (site_dir / "report.pdf").write_bytes(b"%PDF-1.4\n") + (site_dir 
/ "report_exec_summary.pdf").write_bytes(b"%PDF-1.4\n") + (site_dir / "report_backup_settings.pdf").write_bytes(b"%PDF-1.4\n") (site_dir / "index.html").write_text("", encoding="utf-8") manifest = { "ok": True, @@ -41,6 +43,8 @@ def test_unifi_site_index_links_profile_reports(tmp_path: Path): assert "margin: 16px 0" in html assert "First Campus" in html assert 'href="First_Campus/report.pdf"' in html + assert 'href="First_Campus/report_exec_summary.pdf"' in html + assert 'href="First_Campus/report_backup_settings.pdf"' in html assert 'href="First_Campus/index.html"' in html assert "site_run_manifest.json" in html assert ">12<" in html diff --git a/unifi/inventory.py b/unifi/inventory.py index e62c3bf..9d89f63 100644 --- a/unifi/inventory.py +++ b/unifi/inventory.py @@ -37,7 +37,11 @@ def build_manifest(backups: Path, reports: Path) -> Dict[str, object]: checks = [ ("collection_summary", backups / "collection_summary.json", True), ("report_pdf", reports / "report.pdf", True), + ("report_exec_summary_pdf", reports / "report_exec_summary.pdf", True), + ("report_backup_settings_pdf", reports / "report_backup_settings.pdf", True), ("report_html", reports / "report.html", False), + ("report_exec_summary_html", reports / "report_exec_summary.html", False), + ("report_backup_settings_html", reports / "report_backup_settings.html", False), ] items: List[Dict[str, object]] = [] failed = False diff --git a/unifi/report.py b/unifi/report.py index d194f82..4be919c 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -3,6 +3,7 @@ import html import json import os +import re import shutil import subprocess from datetime import datetime, timezone @@ -348,6 +349,34 @@ def _html_list(items: List[str], *, ordered: bool = False) -> str: return f"<{tag}>" + "".join(f"
<li>{html.escape(item)}</li>" for item in items) + f"</{tag}>" +def _section_title(block: str) -> str: + match = re.search(r"<h2>(.*?)</h2>
    ", block, re.DOTALL) + if not match: + return "" + return re.sub(r"<[^>]+>", "", match.group(1)).strip() + + +def _select_sections(body: str, wanted_prefixes: Iterable[str]) -> str: + prefixes = tuple(wanted_prefixes) + blocks = re.findall(r"
<section>.*?</section>
    ", body, re.DOTALL) + selected = [block for block in blocks if _section_title(block).startswith(prefixes)] + return "\n".join(selected) + + +def _toc_items(section_body: str) -> List[tuple[str, str]]: + items: List[tuple[str, str]] = [] + for block in re.findall(r"
<section>.*?</section>
    ", section_body, re.DOTALL): + title = _section_title(block) + if not title: + continue + if ". " in title: + number, label = title.split(". ", 1) + else: + number, label = "Guide", title.replace("Guide. ", "") + items.append((number, label)) + return items + + def _read_site_file(source: Path, site_summary: Dict[str, Any], key: str) -> List[Dict[str, Any]]: rel = (site_summary.get("files") or {}).get(key) return _items(_load_json(source / rel, [])) if rel else [] @@ -1099,17 +1128,93 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(["JSON backup"], [[f] for f in files], "No JSON backup files found.")) sections.append("
    ") - html_doc = _html_shell("TM UniFi Baseline", "\n".join(sections), metadata) + complete_body = "\n".join(sections) + exec_body = _select_sections(complete_body, ("1. Executive Summary", "Guide. How to Use This Report")) + backup_body = _select_sections( + complete_body, + ( + "2. Collection Coverage", + "4. Configuration Backup Completeness", + "6. Sites, Networks, VLANs, and DHCP", + "8. Security Baseline", + "9. Firewall and Policy Backup", + "10. Raw Backup Files", + ), + ) + + html_doc = _html_shell( + "TM UniFi Baseline", + complete_body, + metadata, + report_title="UniFi Network Health & Backup Report", + report_subtitle="Complete assessment, configuration evidence, and client visibility.", + ) + exec_doc = _html_shell( + "TM UniFi Executive Summary", + exec_body, + metadata, + report_title="UniFi Executive Summary", + report_subtitle="Leadership-ready risks, priorities, and data confidence.", + toc_items=_toc_items(exec_body), + ) + backup_doc = _html_shell( + "TM UniFi Backup Settings", + backup_body, + metadata, + report_title="UniFi Backup Settings Report", + report_subtitle="Configuration backup coverage, security policy evidence, and raw JSON index.", + toc_items=_toc_items(backup_body), + ) html_path = output / "report.html" pdf_path = output / "report.pdf" + exec_html_path = output / "report_exec_summary.html" + exec_pdf_path = output / "report_exec_summary.pdf" + backup_html_path = output / "report_backup_settings.html" + backup_pdf_path = output / "report_backup_settings.pdf" html_path.write_text(html_doc, encoding="utf-8") + exec_html_path.write_text(exec_doc, encoding="utf-8") + backup_html_path.write_text(backup_doc, encoding="utf-8") rendered = _render_pdf(html_path, pdf_path) - return {"html": str(html_path), "pdf": str(pdf_path) if rendered else ""} + exec_rendered = _render_pdf(exec_html_path, exec_pdf_path) + backup_rendered = _render_pdf(backup_html_path, backup_pdf_path) + return { + "html": str(html_path), + "pdf": str(pdf_path) if 
rendered else "", + "exec_html": str(exec_html_path), + "exec_pdf": str(exec_pdf_path) if exec_rendered else "", + "backup_html": str(backup_html_path), + "backup_pdf": str(backup_pdf_path) if backup_rendered else "", + } -def _html_shell(title: str, body: str, metadata: Dict[str, Any]) -> str: +def _html_shell( + title: str, + body: str, + metadata: Dict[str, Any], + *, + report_title: str = "UniFi Network Health & Backup Report", + report_subtitle: str = "TM UniFi Baseline", + toc_items: List[tuple[str, str]] | None = None, +) -> str: release = datetime.now().strftime("%Y_%m_%d") collected = metadata.get("collectedAt") or "not captured" + toc_items = toc_items or [ + ("1", "Executive Summary"), + ("Guide", "How to Use This Report"), + ("2", "Collection Coverage"), + ("3", "Network Overview"), + ("4", "Configuration Backup Completeness"), + ("5", "Device Health & Inventory"), + ("6", "Sites, Networks, VLANs, and DHCP"), + ("7", "WiFi and Client Visibility"), + ("8", "Security Baseline"), + ("9", "Firewall and Policy Backup"), + ("10", "Raw Backup Files"), + ] + toc_html = "".join( + f'
<li><span>{html.escape(str(number))}</span>{html.escape(str(label))}</li>' + for number, label in toc_items + ) return f""" @@ -1369,8 +1474,8 @@ def _html_shell(title: str, body: str, metadata: Dict[str, Any]) -> str:
        <div>Techmore</div>
-        <h1>UniFi Network Health &amp; Backup Report</h1>
-        <p>TM UniFi Baseline</p>
+        <h1>{html.escape(report_title)}</h1>
+        <p>{html.escape(report_subtitle)}</p>
         <p>Collected: {html.escape(str(collected))}</p>
    @@ -1382,17 +1487,7 @@ def _html_shell(title: str, body: str, metadata: Dict[str, Any]) -> str:
      <h2>Table of Contents</h2>
      <ol>
-        <li><span>1</span>Executive Summary</li>
-        <li><span>Guide</span>How to Use This Report</li>
-        <li><span>2</span>Collection Coverage</li>
-        <li><span>3</span>Network Overview</li>
-        <li><span>4</span>Configuration Backup Completeness</li>
-        <li><span>5</span>Device Health &amp; Inventory</li>
-        <li><span>6</span>Sites, Networks, VLANs, and DHCP</li>
-        <li><span>7</span>WiFi and Client Visibility</li>
-        <li><span>8</span>Security Baseline</li>
-        <li><span>9</span>Firewall and Policy Backup</li>
-        <li><span>10</span>Raw Backup Files</li>
+        {toc_html}
      </ol>
    {body} @@ -1422,10 +1517,11 @@ def main(argv: List[str] | None = None) -> int: args = parser.parse_args(argv) paths = build_report(args.source_dir, args.output_dir) if args.pdf_only and paths.get("pdf"): - try: - Path(paths["html"]).unlink() - except FileNotFoundError: - pass + for key in ("html", "exec_html", "backup_html"): + try: + Path(paths[key]).unlink() + except (KeyError, FileNotFoundError): + pass print(json.dumps(paths, indent=2)) return 0 diff --git a/unifi/run_sites.py b/unifi/run_sites.py index 5793f26..5b97e99 100644 --- a/unifi/run_sites.py +++ b/unifi/run_sites.py @@ -121,11 +121,21 @@ def build_site_index_html(manifest: Dict[str, object], reports_root: Path, gener for profile in profiles: reports_dir = Path(str(profile.get("reportsDir") or "")) report_pdf = reports_dir / "report.pdf" + exec_pdf = reports_dir / "report_exec_summary.pdf" + backup_pdf = reports_dir / "report_backup_settings.pdf" profile_index = reports_dir / "index.html" if report_pdf.exists(): report_link = f'report.pdf' else: report_link = "report.pdf" + if exec_pdf.exists(): + exec_link = f'exec' + else: + exec_link = "exec" + if backup_pdf.exists(): + backup_link = f'backup' + else: + backup_link = "backup" if profile_index.exists(): inventory_link = f'index.html' else: @@ -142,6 +152,8 @@ def build_site_index_html(manifest: Dict[str, object], reports_root: Path, gener f"{html.escape(_telemetry_summary(profile))}" f"{html.escape(_coverage_summary(profile))}" f"{report_link}" + f"{exec_link}" + f"{backup_link}" f"{inventory_link}" "" ) @@ -172,7 +184,7 @@ def build_site_index_html(manifest: Dict[str, object], reports_root: Path, gener

    <p>Saved UniFi profile report outputs for this run.</p>
    <table>
-      <tr><th>Site</th><th>Profile</th><th>Collection</th><th>Report</th><th>Devices</th><th>Clients</th><th>Config</th><th>Telemetry</th><th>Coverage</th><th>PDF</th><th>Inventory</th></tr>
+      <tr><th>Site</th><th>Profile</th><th>Collection</th><th>Report</th><th>Devices</th><th>Clients</th><th>Config</th><th>Telemetry</th><th>Coverage</th><th>Complete</th><th>Exec</th><th>Backup</th><th>Inventory</th></tr>
      {''.join(rows)}
    </table>
    From a81a15e135cf533b5559f1ce255f365753b3cdfd Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 06:58:02 -0400 Subject: [PATCH 34/47] Add UniFi client analysis recommendations --- tests/test_unifi_report.py | 6 ++ unifi/report.py | 111 ++++++++++++++++++++++++++++++++++--- 2 files changed, 109 insertions(+), 8 deletions(-) diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index f0b4d0f..0053d78 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -131,6 +131,11 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "How to Use This Report" in html assert "Security Baseline" in html assert "Port and radio diagnostics are low-confidence" in html + assert "Client Analysis" in html + assert "Client Overview Summary" in html + assert "Client Concentration by Uplink" in html + assert "Recommendations & Implementation Plan" in html + assert "Choose a deeper diagnostics source" in html assert "By Model" in html assert "Client Load by Uplink" in html assert "Firewall Policy Summary" in html @@ -152,6 +157,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "not exposed (HTTP 404)" in html assert "UniFi Executive Summary" in exec_html assert "Top Operational Risks" in exec_html + assert "Recommendations & Implementation Plan" in exec_html assert "Firewall and Policy Backup" not in exec_html assert "UniFi Backup Settings Report" in backup_html assert "Configuration Backup Completeness" in backup_html diff --git a/unifi/report.py b/unifi/report.py index 4be919c..67204a0 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -353,7 +353,7 @@ def _section_title(block: str) -> str: match = re.search(r"

<h2>(.*?)</h2>
    ", block, re.DOTALL) if not match: return "" - return re.sub(r"<[^>]+>", "", match.group(1)).strip() + return html.unescape(re.sub(r"<[^>]+>", "", match.group(1))).strip() def _select_sections(body: str, wanted_prefixes: Iterable[str]) -> str: @@ -670,6 +670,73 @@ def _security_baseline_rows( ] +def _client_overview_rows(all_clients: List[Dict[str, Any]], now: datetime) -> List[List[Any]]: + type_counts = _count_by(all_clients, lambda client: _first(client, ("type", "connectionType"), "Unknown")) + access_counts = _count_by(all_clients, _access_label) + age_counts = _client_age_buckets(all_clients, now) + unknown_uplinks = sum(1 for client in all_clients if not _first(client, ("uplinkDeviceId", "uplinkDeviceMac", "uplinkDeviceName"))) + return [ + ["Connection mix", _fmt_counts(type_counts), "Use this for AP/switch migration sizing and wired-versus-wireless planning."], + ["Access policy mix", _fmt_counts(access_counts), "Confirm whether DEFAULT access for every client is intended or whether guest/IoT/staff policies should be separated."], + ["Client recency", _fmt_counts(age_counts), "Treat 31+ day clients as possible stale inventory before quoting replacements or capacity needs."], + ["Uplink mapping gaps", str(unknown_uplinks), "Clients without uplink mapping reduce confidence in AP/switch load conclusions."], + ] + + +def _client_uplink_analysis_rows(all_clients: List[Dict[str, Any]], device_names: Dict[str, str]) -> List[List[Any]]: + total = len(all_clients) + counts = _count_by(all_clients, lambda client: _client_uplink_label(client, device_names) or "Unknown") + rows: List[List[Any]] = [] + for uplink, count in counts.items(): + share = _pct(count, total) + try: + pct_value = int(share.rstrip("%")) + except ValueError: + pct_value = 0 + if pct_value >= 50: + note = "High concentration; validate coverage, capacity, and whether this AP/switch is a single point of client dependency." 
+ elif pct_value >= 25: + note = "Moderate concentration; review during refresh or placement planning." + else: + note = "Normal concentration from captured client sample." + rows.append([uplink, count, share, note]) + return rows + + +def _implementation_plan_rows( + *, + all_devices: List[Dict[str, Any]], + all_wifi: List[Dict[str, Any]], + all_firewall_policies: List[Dict[str, Any]], + all_dns_policies: List[Dict[str, Any]], + telemetry_probes: List[Dict[str, Any]], + legacy_aps: List[str], + client_age: Dict[str, int], +) -> List[List[Any]]: + rows: List[List[Any]] = [] + offline = [_device_name(device) for device in all_devices if not _is_online(device)] + weak_wifi = _wifi_security_weak(all_wifi) + logging_disabled = sum(1 for policy in all_firewall_policies if not _as_bool(policy.get("loggingEnabled"))) + + if offline: + rows.append(["Immediate", "0-2 weeks", "Validate offline inventory", f"{', '.join(offline[:6])}", "IT operations"]) + if telemetry_probes and not any(probe.get("available") for probe in telemetry_probes): + rows.append(["Immediate", "0-2 weeks", "Choose a deeper diagnostics source", "Integration API did not expose port/radio telemetry", "Network engineering"]) + if weak_wifi: + rows.append(["Short-term", "2-6 weeks", "Review SSID security posture", "; ".join(weak_wifi[:2]), "Security / network engineering"]) + if logging_disabled: + rows.append(["Short-term", "2-6 weeks", "Enable useful firewall policy logging", f"{logging_disabled} captured policy records have logging disabled", "Security operations"]) + if not all_dns_policies: + rows.append(["Medium-term", "6-12 weeks", "Document DNS filtering ownership", "No UniFi DNS policies were captured", "Security / systems"]) + if legacy_aps: + rows.append(["Medium-term", "6-12 weeks", "Plan wireless refresh candidates", f"{len(legacy_aps)} legacy AP candidate(s): {', '.join(legacy_aps[:4])}", "IT leadership"]) + if client_age.get("31+ days", 0): + rows.append(["Medium-term", "6-12 weeks", "Clean 
stale client inventory", f"{client_age['31+ days']} client record(s) last seen more than 30 days before collection", "IT operations"]) + + rows.append(["Long-term", "3-6 months", "Build active-device migration scope", "Quote active/validated devices separately from offline or stale inventory", "IT leadership"]) + return rows + + def _executive_followups( *, all_devices: List[Dict[str, Any]], @@ -1063,6 +1130,14 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(["Name", "Type", "IP", "MAC / ID", "Uplink Device", "Access", "Seen"], client_rows, "No client detail captured.")) sections.append("
    ") + sections.append("

<section><h2>7A. Client Analysis</h2>
    ") + sections.append("

<p>Client analysis uses the connected-client records exposed by the UniFi Network API. It should be treated as a useful planning sample, not a full historical accounting, unless longer-term logs are also exported.</p>
    ") + sections.append("

<h3>Client Overview Summary</h3>
    ") + sections.append(_table(["Area", "Observed", "Planning Use"], _client_overview_rows(all_clients, collected_at), "No client detail captured.")) + sections.append("

<h3>Client Concentration by Uplink</h3>
    ") + sections.append(_table(["Uplink Device", "Clients", "Share", "Interpretation"], _client_uplink_analysis_rows(all_clients, device_names), "No client uplink mapping captured.")) + sections.append("
    ") + sections.append("

<section><h2>8. Security Baseline</h2>
    ") sections.append( _table( @@ -1079,7 +1154,25 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append("

<p>Security baseline rows are assessment cues from captured configuration, not a substitute for a full policy review. Missing rows may mean the control is implemented outside UniFi or not exposed by this API path.</p>
    ") sections.append("
    ") - sections.append("

<section><h2>9. Firewall and Policy Backup</h2>") + sections.append("<section><h2>9. Recommendations &amp; Implementation Plan</h2>
    ") + sections.append("

<p>The actions below are generated from captured UniFi inventory, client, WiFi, security, and API coverage evidence. Items avoid recommendations that require missing switch-port or AP-radio telemetry.</p>
    ") + sections.append( + _table( + ["Priority", "Window", "Action", "Evidence", "Owner"], + _implementation_plan_rows( + all_devices=all_devices, + all_wifi=site_payloads["wifi"], + all_firewall_policies=site_payloads["firewall_policies"], + all_dns_policies=site_payloads["dns_policies"], + telemetry_probes=telemetry_probes, + legacy_aps=legacy_aps, + client_age=client_age, + ), + ) + ) + sections.append("
    ") + + sections.append("

    10. Firewall and Policy Backup

    ") for site in site_summaries: sections.append(f"

    {html.escape(str(site.get('name') or 'Site'))}

    ") zones = _read_site_file(source, site, "firewall_zones") @@ -1123,13 +1216,13 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(headers, rows, f"No {label.lower()} endpoint data captured.")) sections.append("
    ") - sections.append("

    10. Raw Backup Files

    ") + sections.append("

    11. Raw Backup Files

    ") files = sorted(str(p.relative_to(source)) for p in source.rglob("*.json")) sections.append(_table(["JSON backup"], [[f] for f in files], "No JSON backup files found.")) sections.append("
    ") complete_body = "\n".join(sections) - exec_body = _select_sections(complete_body, ("1. Executive Summary", "Guide. How to Use This Report")) + exec_body = _select_sections(complete_body, ("1. Executive Summary", "Guide. How to Use This Report", "9. Recommendations & Implementation Plan")) backup_body = _select_sections( complete_body, ( @@ -1137,8 +1230,8 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: "4. Configuration Backup Completeness", "6. Sites, Networks, VLANs, and DHCP", "8. Security Baseline", - "9. Firewall and Policy Backup", - "10. Raw Backup Files", + "10. Firewall and Policy Backup", + "11. Raw Backup Files", ), ) @@ -1207,9 +1300,11 @@ def _html_shell( ("5", "Device Health & Inventory"), ("6", "Sites, Networks, VLANs, and DHCP"), ("7", "WiFi and Client Visibility"), + ("7A", "Client Analysis"), ("8", "Security Baseline"), - ("9", "Firewall and Policy Backup"), - ("10", "Raw Backup Files"), + ("9", "Recommendations & Implementation Plan"), + ("10", "Firewall and Policy Backup"), + ("11", "Raw Backup Files"), ] toc_html = "".join( f'
  • {html.escape(str(number))}{html.escape(str(label))}
  • ' From 8cb04892bbaaa2aee799841e9e6c02dab3a609a6 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 07:02:39 -0400 Subject: [PATCH 35/47] Add UniFi hardware refresh planning --- tests/test_unifi_report.py | 6 ++ unifi/report.py | 197 +++++++++++++++++++++++++++++++++++-- 2 files changed, 196 insertions(+), 7 deletions(-) diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 0053d78..ec3ec96 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -136,6 +136,10 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "Client Concentration by Uplink" in html assert "Recommendations & Implementation Plan" in html assert "Choose a deeper diagnostics source" in html + assert "Hardware Refresh & Budget Planning" in html + assert "Model-Level Refresh Planning" in html + assert "U7 Pro" in html + assert "Pro 48 PoE" in html assert "By Model" in html assert "Client Load by Uplink" in html assert "Firewall Policy Summary" in html @@ -158,10 +162,12 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "UniFi Executive Summary" in exec_html assert "Top Operational Risks" in exec_html assert "Recommendations & Implementation Plan" in exec_html + assert "Hardware Refresh & Budget Planning" in exec_html assert "Firewall and Policy Backup" not in exec_html assert "UniFi Backup Settings Report" in backup_html assert "Configuration Backup Completeness" in backup_html assert "Firewall and Policy Backup" in backup_html + assert "Hardware Refresh & Budget Planning" not in backup_html assert "Connected Clients" not in backup_html diff --git a/unifi/report.py b/unifi/report.py index 67204a0..4f30a93 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -12,6 +12,7 @@ ROOT = Path(__file__).resolve().parents[1] +PRICING_REFERENCE = ROOT / "reporting" / "reference" / "pricing_reference.json" SITE_ENDPOINT_ORDER = [ "devices", "clients", @@ -144,6 +145,12 @@ def 
_pct(part: int, total: int) -> str: return f"{round((part / total) * 100)}%" +def _money(value: int | float | None) -> str: + if not isinstance(value, (int, float)): + return "Pricing needed" + return f"${value:,.0f}" if float(value).is_integer() else f"${value:,.2f}" + + def _parse_datetime(value: Any) -> datetime | None: if value in (None, ""): return None @@ -737,6 +744,166 @@ def _implementation_plan_rows( return rows +def _pricing_payload() -> Dict[str, Any]: + payload = _load_json(PRICING_REFERENCE, {}) + return payload if isinstance(payload, dict) else {} + + +def _pricing_product(payload: Dict[str, Any], key: str) -> Dict[str, Any]: + products = payload.get("products") if isinstance(payload.get("products"), dict) else {} + product = products.get(key) + return product if isinstance(product, dict) else {} + + +def _product_name(product: Dict[str, Any], fallback: str) -> str: + return str(product.get("name") or product.get("sku") or fallback) + + +def _product_unit(product: Dict[str, Any]) -> int | float | None: + value = product.get("unit_cost") + return value if isinstance(value, (int, float)) else None + + +def _product_care(product: Dict[str, Any]) -> int | float | None: + value = product.get("ui_care_5yr_unit_cost") + return value if isinstance(value, (int, float)) else None + + +def _refresh_product_key(device: Dict[str, Any], legacy_aps: List[str]) -> tuple[str, str, str]: + role = _device_role(device) + model = _device_model(device) + model_l = model.lower() + name_model = f"{_device_name(device)} ({model})" + if role == "Access Point": + if name_model in legacy_aps: + note = "Legacy AP refresh candidate; validate mounting form factor and RF design before ordering." + return "U7-Pro", "Refresh candidate", note + return "", "Retain / monitor", "Current AP family did not match the legacy refresh heuristic." 
+ if role == "Switch": + if "xg" in model_l and "48" in model_l: + return "USW-Pro-XG-48-PoE", "Refresh candidate", "High-speed 48-port switch reference; validate PoE, optics, and uplink design." + if "xg" in model_l and "24" in model_l: + return "USW-Pro-XG-24-PoE", "Refresh candidate", "High-speed 24-port switch reference; validate PoE, optics, and uplink design." + if "48" in model_l: + return "USW-Pro-48-POE", "Refresh candidate", "48-port access switch reference; validate PoE budget and uplinks." + if "24" in model_l: + return "USW-Pro-24-POE", "Refresh candidate", "24-port access switch reference; validate PoE budget and uplinks." + return "", "Pricing needed", "Small switch or special form factor; add exact replacement SKU to pricing reference before quoting." + if role == "Gateway": + if any(token in model_l for token in ("ucg", "udm", "cloud gateway")): + return "", "Retain / monitor", "Current gateway family appears active; replace only if capacity, HA, or security requirements change." + return "UDM-Pro-Max", "Refresh candidate", "Gateway planning reference; validate firewall, VPN, IDS/IPS, logging, and HA requirements." + return "", "Review manually", "Device role did not map to a maintained replacement class." + + +def _hardware_refresh_rows( + devices: List[Dict[str, Any]], + legacy_aps: List[str], + pricing: Dict[str, Any], +) -> tuple[List[List[Any]], Dict[str, Any]]: + grouped: Dict[tuple[str, str, str, str, str], Dict[str, Any]] = {} + for device in devices: + role = _device_role(device) + model = _device_model(device) + online = _is_online(device) + product_key, action, note = _refresh_product_key(device, legacy_aps) + if not online: + action = "Excluded pending validation" + note = "Offline/inactive in controller; validate physical inventory before quoting replacement." 
+ product_key = "" + product = _pricing_product(pricing, product_key) + product_label = _product_name(product, product_key) if product_key else "Pricing needed" + key = (model, role, product_key, action, note) + row = grouped.setdefault( + key, + { + "model": model, + "role": role, + "inventory": 0, + "active": 0, + "excluded": 0, + "product": product_label, + "unit": _product_unit(product), + "care": _product_care(product), + "action": action, + "note": note, + }, + ) + row["inventory"] += 1 + if online: + row["active"] += 1 + else: + row["excluded"] += 1 + + rows: List[List[Any]] = [] + totals = {"hardware": 0.0, "care": 0.0, "priced_active": 0, "unpriced_active": 0, "excluded": 0} + for row in sorted(grouped.values(), key=lambda item: (str(item["role"]), str(item["model"]))): + active = int(row["active"]) + excluded = int(row["excluded"]) + unit = row["unit"] + care = row["care"] + hardware_total = unit * active if isinstance(unit, (int, float)) and active else None + care_total = care * active if isinstance(care, (int, float)) and active else None + if hardware_total is not None: + totals["hardware"] += hardware_total + totals["priced_active"] += active + elif active: + totals["unpriced_active"] += active + if care_total is not None: + totals["care"] += care_total + totals["excluded"] += excluded + rows.append( + [ + row["model"], + row["role"], + row["inventory"], + active, + excluded, + row["product"], + row["action"], + _money(unit), + _money(care), + _money(hardware_total), + row["note"], + ] + ) + return rows, totals + + +def _hardware_summary_rows(pricing: Dict[str, Any], totals: Dict[str, Any]) -> List[List[Any]]: + meta = pricing.get("meta") if isinstance(pricing.get("meta"), dict) else {} + notes = meta.get("notes") if isinstance(meta.get("notes"), list) else [] + return [ + ["Reference catalog", str(meta.get("name") or "pricing_reference.json"), f"Updated {meta.get('updated') or 'unknown'}; currency {meta.get('currency') or 'USD'}."], + ["Priced 
active devices", str(int(totals.get("priced_active") or 0)), "Only online devices with maintained product mappings are included in the subtotal."], + ["Unpriced active devices", str(int(totals.get("unpriced_active") or 0)), "These need exact SKU mapping before client-facing budget use."], + ["Excluded devices", str(int(totals.get("excluded") or 0)), "Offline/inactive devices are excluded until field-validated."], + ["Hardware subtotal", _money(totals.get("hardware")), "Public-reference hardware subtotal for priced active mapped devices."], + ["Optional UI Care 5-year", _money(totals.get("care")), "Shown separately from hardware so support decisions stay explicit."], + ["Pricing caution", "Planning only", str(notes[0]) if notes else "Validate all pricing before procurement decisions."], + ] + + +def _catalog_reference_rows(pricing: Dict[str, Any]) -> List[List[Any]]: + products = pricing.get("products") if isinstance(pricing.get("products"), dict) else {} + wanted = {"access_point", "switch", "gateway", "aggregation"} + rows: List[List[Any]] = [] + for key, product in sorted(products.items(), key=lambda item: (str((item[1] or {}).get("category")), str((item[1] or {}).get("name")))): + if not isinstance(product, dict) or product.get("category") not in wanted: + continue + rows.append( + [ + product.get("category", ""), + product.get("name") or key, + product.get("sku") or key, + _money(_product_unit(product)), + _money(_product_care(product)), + product.get("description", ""), + ] + ) + return rows + + def _executive_followups( *, all_devices: List[Dict[str, Any]], @@ -857,6 +1024,8 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: telemetry_available = sum(1 for probe in telemetry_probes if probe.get("available")) legacy_aps = _legacy_ap_models(all_devices) client_age = _client_age_buckets(all_clients, collected_at) + pricing = _pricing_payload() + hardware_rows, hardware_totals = _hardware_refresh_rows(all_devices, legacy_aps, pricing) cards 
= [ ("Sites", len(site_summaries) or len(sm_sites)), ("Devices", len(all_devices)), @@ -1172,7 +1341,20 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: ) sections.append("
    ") - sections.append("

    10. Firewall and Policy Backup

    ") + sections.append("

    10. Hardware Refresh & Budget Planning

    ") + sections.append("

    This section uses the maintained pricing reference to build a planning-only hardware refresh view. Only online devices that map cleanly to a maintained product reference are priced; offline devices and special form factors remain excluded, or are flagged as needing pricing, until they can be field-validated.

    ") + sections.append("

    Planning Summary

    ") + sections.append(_table(["Area", "Value", "Interpretation"], _hardware_summary_rows(pricing, hardware_totals))) + sections.append("

    Model-Level Refresh Planning

    ") + sections.append(_table(["Current Model", "Role", "Inventory", "Active", "Excluded", "Reference Product", "Action", "Unit", "UI Care / Unit", "Hardware Total", "Notes"], hardware_rows, "No device inventory was available for hardware planning.")) + catalog_rows = _catalog_reference_rows(pricing) + if catalog_rows: + sections.append("

    Maintained UniFi Reference Catalog

    ") + sections.append(_table(["Category", "Product", "SKU", "Unit", "UI Care / Unit", "Planning Notes"], catalog_rows)) + sections.append("

    Not included: tax, freight, optics/transceivers, cabling, installation labor, configuration labor, licensing/subscription changes, contingency, or reseller/E-rate discounts.
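The subtotal rows above follow the aggregation in `_hardware_refresh_rows`: only active devices with a numeric unit price enter the hardware subtotal, while unpriced active devices are counted separately so the gap stays visible. A condensed sketch of that bookkeeping (the `(active, unit)` pair shape is a simplification of the grouped rows in the patch):

```python
def hardware_subtotals(rows):
    """Aggregate (active_count, unit_cost_or_None) pairs into planning totals.

    Mirrors the subtotal logic in _hardware_refresh_rows: rows without a
    numeric unit price never contribute dollars, only a visible count.
    """
    totals = {"hardware": 0.0, "priced_active": 0, "unpriced_active": 0}
    for active, unit in rows:
        if isinstance(unit, (int, float)) and active:
            totals["hardware"] += unit * active
            totals["priced_active"] += active
        elif active:
            totals["unpriced_active"] += active  # needs SKU mapping before quoting
    return totals
```

Separating priced from unpriced counts is what keeps the subtotal honest: a large "unpriced active" number signals the subtotal understates the real budget.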

    ") + sections.append("
    ") + + sections.append("

    11. Firewall and Policy Backup

    ") for site in site_summaries: sections.append(f"

    {html.escape(str(site.get('name') or 'Site'))}

    ") zones = _read_site_file(source, site, "firewall_zones") @@ -1216,13 +1398,13 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(headers, rows, f"No {label.lower()} endpoint data captured.")) sections.append("
    ") - sections.append("

    11. Raw Backup Files

    ") + sections.append("

    12. Raw Backup Files

    ") files = sorted(str(p.relative_to(source)) for p in source.rglob("*.json")) sections.append(_table(["JSON backup"], [[f] for f in files], "No JSON backup files found.")) sections.append("
    ") complete_body = "\n".join(sections) - exec_body = _select_sections(complete_body, ("1. Executive Summary", "Guide. How to Use This Report", "9. Recommendations & Implementation Plan")) + exec_body = _select_sections(complete_body, ("1. Executive Summary", "Guide. How to Use This Report", "9. Recommendations & Implementation Plan", "10. Hardware Refresh & Budget Planning")) backup_body = _select_sections( complete_body, ( @@ -1230,8 +1412,8 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: "4. Configuration Backup Completeness", "6. Sites, Networks, VLANs, and DHCP", "8. Security Baseline", - "10. Firewall and Policy Backup", - "11. Raw Backup Files", + "11. Firewall and Policy Backup", + "12. Raw Backup Files", ), ) @@ -1303,8 +1485,9 @@ def _html_shell( ("7A", "Client Analysis"), ("8", "Security Baseline"), ("9", "Recommendations & Implementation Plan"), - ("10", "Firewall and Policy Backup"), - ("11", "Raw Backup Files"), + ("10", "Hardware Refresh & Budget Planning"), + ("11", "Firewall and Policy Backup"), + ("12", "Raw Backup Files"), ] toc_html = "".join( f'
  • {html.escape(str(number))}{html.escape(str(label))}
  • ' From 23898f921189e3b4ff4ddb139201c040068a3364 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 07:07:30 -0400 Subject: [PATCH 36/47] Expand UniFi firewall policy reporting --- tests/test_unifi_report.py | 32 ++++++++++- unifi/report.py | 108 +++++++++++++++++++++++++++++++++++-- 2 files changed, 136 insertions(+), 4 deletions(-) diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index ec3ec96..5f98c16 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -95,7 +95,33 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): encoding="utf-8", ) (site_dir / "firewall_zones.json").write_text(json.dumps([{"name": "Internal", "id": "zone-1"}]), encoding="utf-8") - (site_dir / "firewall_policies.json").write_text(json.dumps([{"name": "Allow Staff", "enabled": True, "action": {"type": "ALLOW"}}]), encoding="utf-8") + (site_dir / "firewall_policies.json").write_text( + json.dumps( + [ + {"name": "Allow Staff", "enabled": True, "action": {"type": "ALLOW"}}, + { + "name": "Allow mDNS", + "enabled": True, + "action": {"type": "ALLOW", "allowReturnTraffic": True}, + "source": {"zoneId": "zone-1", "trafficFilter": {"portFilter": {"items": [{"type": "PORT_NUMBER", "value": 5353}], "type": "PORTS"}}}, + "destination": {"trafficFilter": {"ipAddressFilter": {"items": [{"type": "SUBNET", "value": "224.0.0.0/24"}], "type": "IP_ADDRESSES"}}}, + "ipProtocolScope": {"ipVersion": "IPV4_AND_IPV6", "protocolFilter": {"protocol": {"name": "UDP"}, "type": "NAMED_PROTOCOL"}}, + "loggingEnabled": False, + "metadata": {"origin": "USER_DEFINED"}, + }, + { + "name": "Allow All Traffic", + "enabled": True, + "action": {"type": "ALLOW", "allowReturnTraffic": True}, + "source": {"zoneId": "zone-1"}, + "destination": {"zoneId": "zone-1"}, + "loggingEnabled": False, + "metadata": {"origin": "SYSTEM_DEFINED"}, + }, + ] + ), + encoding="utf-8", + ) (site_dir / "dns_policies.json").write_text(json.dumps([]), 
encoding="utf-8") (site_dir / "vpn_tunnels.json").write_text(json.dumps([]), encoding="utf-8") (site_dir / "telemetry_probe.json").write_text( @@ -144,6 +170,10 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "Client Load by Uplink" in html assert "Firewall Policy Summary" in html assert "Internal" in html + assert "Port: 5353" in html + assert "IP: 224.0.0.0/24" in html + assert "IPv4 and IPv6; UDP" in html + assert "Broad allow policies" in html assert "U7-Pro-1 (U7-Pro)" in html assert "Firmware" in html assert "Network Application version" in html diff --git a/unifi/report.py b/unifi/report.py index 4f30a93..74caee2 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -410,6 +410,103 @@ def _zone_label(value: Any, zone_names: Dict[str, str]) -> str: return str(value or "") +def _filter_values(filter_payload: Any) -> str: + if not isinstance(filter_payload, dict): + return "" + values: List[str] = [] + for item in filter_payload.get("items") or []: + if not isinstance(item, dict): + continue + label = str(item.get("value") or item.get("name") or item.get("id") or "") + item_type = str(item.get("type") or "") + if label and item_type and item_type not in {"IP_ADDRESS", "SUBNET", "PORT_NUMBER", "PORT_RANGE"}: + item_type = item_type.replace("_", " ").title() + values.append(f"{item_type} {label}") + elif label: + values.append(label) + if not values: + values.append(str(filter_payload.get("type") or "")) + values = [value for value in values if value] + if not values: + return "" + prefix = "not " if filter_payload.get("matchOpposite") else "" + return prefix + ", ".join(values) + + +def _traffic_filter_label(value: Any, zone_names: Dict[str, str]) -> str: + if not isinstance(value, dict): + return str(value or "") + parts: List[str] = [] + zone_id = str(value.get("zoneId") or "") + if zone_id: + parts.append(zone_names.get(zone_id, zone_id)) + traffic = value.get("trafficFilter") + if isinstance(traffic, dict): + details = 
[] + ip_detail = _filter_values(traffic.get("ipAddressFilter")) + port_detail = _filter_values(traffic.get("portFilter")) + if ip_detail: + details.append(f"IP: {ip_detail}") + if port_detail: + details.append(f"Port: {port_detail}") + traffic_type = str(traffic.get("type") or "") + if traffic_type and not details: + details.append(traffic_type.replace("_", " ").title()) + if details: + parts.append("; ".join(details)) + return " | ".join(part for part in parts if part) or _zone_label(value, zone_names) + + +def _ip_protocol_label(policy: Dict[str, Any]) -> str: + scope = policy.get("ipProtocolScope") + if not isinstance(scope, dict): + return "" + parts = [] + ip_version = str(scope.get("ipVersion") or "") + if ip_version: + version_labels = { + "IPV4": "IPv4", + "IPV6": "IPv6", + "IPV4_AND_IPV6": "IPv4 and IPv6", + } + parts.append(version_labels.get(ip_version.upper(), ip_version.replace("_", " "))) + protocol_filter = scope.get("protocolFilter") + if isinstance(protocol_filter, dict): + protocol = protocol_filter.get("protocol") + if isinstance(protocol, dict): + label = str(protocol.get("name") or protocol.get("protocol") or "") + else: + label = str(protocol or "") + if label: + if protocol_filter.get("matchOpposite"): + label = f"not {label}" + parts.append(label) + return "; ".join(parts) + + +def _connection_state_label(policy: Dict[str, Any]) -> str: + states = policy.get("connectionStateFilter") + if isinstance(states, list): + return ", ".join(str(state) for state in states if state) + return str(states or "") + + +def _policy_origin_label(policy: Dict[str, Any]) -> str: + metadata_payload = policy.get("metadata") + if isinstance(metadata_payload, dict): + return str(metadata_payload.get("origin") or "") + return "" + + +def _broad_allow_policy_count(policies: Iterable[Dict[str, Any]]) -> int: + count = 0 + for policy in policies: + name = _first(policy, ("name", "description", "id")).strip().lower() + if _as_bool(policy.get("enabled")) and 
_action_label(policy).upper().startswith("ALLOW") and "allow all" in name: + count += 1 + return count + + def _wifi_network_label(wlan: Dict[str, Any]) -> str: network = wlan.get("network") if isinstance(network, dict): @@ -667,11 +764,13 @@ def _security_baseline_rows( ) -> List[List[Any]]: weak_wifi = _wifi_security_weak(all_wifi) logging_enabled = sum(1 for policy in all_firewall_policies if _as_bool(policy.get("loggingEnabled"))) + broad_allow = _broad_allow_policy_count(all_firewall_policies) return [ ["Network segmentation", "Review" if network_count <= 2 else "Present", f"{_plural(network_count, 'network/VLAN definition')} captured."], ["Wireless authentication", "Review" if weak_wifi else ("Present" if all_wifi else "Missing"), "; ".join(weak_wifi[:2]) if weak_wifi else f"{_plural(len(all_wifi), 'SSID')} captured."], ["Firewall rules", "Present" if all_firewall_policies else "Missing", f"{_plural(len(all_firewall_policies), 'policy', 'policies')} captured."], ["Firewall logging", "Review" if all_firewall_policies and logging_enabled < len(all_firewall_policies) else "Present", f"{logging_enabled} of {len(all_firewall_policies)} policies have logging enabled."], + ["Broad allow policies", "Review" if broad_allow else "Not detected", f"{_plural(broad_allow, 'enabled broad allow policy', 'enabled broad allow policies')} detected by policy name/action."], ["DNS filtering policy", "Missing" if not all_dns_policies else "Present", f"{_plural(len(all_dns_policies), 'DNS policy', 'DNS policies')} captured."], ["RADIUS / identity", "Present" if all_radius else "Not captured", f"{_plural(len(all_radius), 'RADIUS profile')} captured."], ] @@ -1384,13 +1483,16 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: _first(item, ("name", "description", "id")), _first(item, ("enabled",)), _action_label(item), - _zone_label(item.get("source"), zone_names), - _zone_label(item.get("destination"), zone_names), + _traffic_filter_label(item.get("source"), 
zone_names), + _traffic_filter_label(item.get("destination"), zone_names), + _ip_protocol_label(item), + _connection_state_label(item), _first(item, ("loggingEnabled",)), + _policy_origin_label(item), ] for item in data[:120] ] - headers = ["Order", "Name", "Enabled", "Action", "Source", "Destination", "Logging"] + headers = ["Order", "Name", "Enabled", "Action", "Source", "Destination", "Protocol", "State", "Logging", "Origin"] else: rows = [[_first(item, ("name", "description", "id")), _first(item, ("enabled", "action", "type")), _first(item, ("id", "_id"))] for item in data[:100]] headers = ["Name", "State / Action", "ID"] From 1fef5fe57807b7a31dce4532c05c2a16d9d43b75 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 07:10:46 -0400 Subject: [PATCH 37/47] Render UniFi network service backups --- tests/test_unifi_report.py | 18 ++++ unifi/report.py | 163 ++++++++++++++++++++++++++++++++++++- 2 files changed, 178 insertions(+), 3 deletions(-) diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 5f98c16..0b096dd 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -42,6 +42,10 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): "firewall_zones": "sites/Main/firewall_zones.json", "firewall_policies": "sites/Main/firewall_policies.json", "dns_policies": "sites/Main/dns_policies.json", + "wans": "sites/Main/wans.json", + "vpn_servers": "sites/Main/vpn_servers.json", + "radius": "sites/Main/radius.json", + "hotspot_vouchers": "sites/Main/hotspot_vouchers.json", "vpn_tunnels": "sites/Main/vpn_tunnels.json", "telemetry_probe": "sites/Main/telemetry_probe.json", }, @@ -53,6 +57,10 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): "firewall_zones": 1, "firewall_policies": 1, "dns_policies": 0, + "wans": 1, + "vpn_servers": 1, + "radius": 1, + "hotspot_vouchers": 1, "vpn_tunnels": 0, "telemetry_probe_available": 0, "telemetry_probe_total": 2, @@ -123,6 
+131,10 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): encoding="utf-8", ) (site_dir / "dns_policies.json").write_text(json.dumps([]), encoding="utf-8") + (site_dir / "wans.json").write_text(json.dumps([{"name": "Internet 1", "id": "wan-1", "addressingType": "DHCP", "dnsServers": ["1.1.1.1"]}]), encoding="utf-8") + (site_dir / "vpn_servers.json").write_text(json.dumps([{"name": "Corp VPN", "enabled": True, "type": "wireguard", "metadata": {"origin": "USER_DEFINED"}}]), encoding="utf-8") + (site_dir / "radius.json").write_text(json.dumps([{"name": "Default RADIUS", "host": "10.10.0.5", "authPort": 1812, "metadata": {"origin": "USER_DEFINED"}}]), encoding="utf-8") + (site_dir / "hotspot_vouchers.json").write_text(json.dumps([{"code": "guest-123", "status": "active", "durationMinutes": 60}]), encoding="utf-8") (site_dir / "vpn_tunnels.json").write_text(json.dumps([]), encoding="utf-8") (site_dir / "telemetry_probe.json").write_text( json.dumps( @@ -174,6 +186,11 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "IP: 224.0.0.0/24" in html assert "IPv4 and IPv6; UDP" in html assert "Broad allow policies" in html + assert "Network Services Backup" in html + assert "Internet 1" in html + assert "Corp VPN" in html + assert "Default RADIUS" in html + assert "guest-123" in html assert "U7-Pro-1 (U7-Pro)" in html assert "Firmware" in html assert "Network Application version" in html @@ -197,6 +214,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "UniFi Backup Settings Report" in backup_html assert "Configuration Backup Completeness" in backup_html assert "Firewall and Policy Backup" in backup_html + assert "Network Services Backup" in backup_html assert "Hardware Refresh & Budget Planning" not in backup_html assert "Connected Clients" not in backup_html diff --git a/unifi/report.py b/unifi/report.py index 74caee2..240d153 100644 --- a/unifi/report.py +++ 
b/unifi/report.py @@ -507,6 +507,139 @@ def _broad_allow_policy_count(policies: Iterable[Dict[str, Any]]) -> int: return count +def _item_origin_label(item: Dict[str, Any]) -> str: + metadata_payload = item.get("metadata") + if isinstance(metadata_payload, dict): + return str(metadata_payload.get("origin") or "") + return _first(item, ("origin", "source")) + + +def _compact_value(value: Any) -> str: + if isinstance(value, list): + return ", ".join(_compact_value(item) for item in value if item not in (None, "")) + if isinstance(value, dict): + return ", ".join( + str(value.get(key)) + for key in ("name", "value", "id", "type") + if value.get(key) not in (None, "") + ) + return str(value) if value not in (None, "") else "" + + +def _service_endpoint_state(items: List[Dict[str, Any]]) -> str: + if items: + return f"{_plural(len(items), 'record')} captured" + return "captured empty" + + +def _wan_rows(wans: List[Dict[str, Any]]) -> List[List[Any]]: + rows: List[List[Any]] = [] + for wan in wans[:100]: + ip_gateway = " / ".join( + value + for value in ( + _first(wan, ("ipAddress", "ip", "address")), + _first(wan, ("gateway", "gatewayIp", "gatewayAddress")), + ) + if value + ) + rows.append( + [ + _first(wan, ("name", "displayName", "id")), + _first(wan, ("enabled", "state", "status"), "captured"), + _first(wan, ("type", "wanType", "purpose")), + _first(wan, ("addressingType", "connectionType", "ipv4ConnectionType", "mode")), + ip_gateway, + _compact_value(wan.get("dnsServers") or wan.get("dns") or wan.get("nameservers")), + _first(wan, ("id", "_id")), + ] + ) + return rows + + +def _vpn_rows(items: List[Dict[str, Any]]) -> List[List[Any]]: + rows: List[List[Any]] = [] + for item in items[:100]: + remote = _compact_value(item.get("remote") or item.get("peer") or item.get("peers") or item.get("remoteAddress") or item.get("remoteNetwork")) + rows.append( + [ + _first(item, ("name", "displayName", "id")), + _first(item, ("enabled", "state", "status"), "captured"), + 
_first(item, ("type", "vpnType", "protocol")), + remote, + _compact_value(item.get("network") or item.get("networks") or item.get("routes")), + _item_origin_label(item), + _first(item, ("id", "_id")), + ] + ) + return rows + + +def _radius_rows(items: List[Dict[str, Any]]) -> List[List[Any]]: + rows: List[List[Any]] = [] + for item in items[:100]: + rows.append( + [ + _first(item, ("name", "displayName", "id")), + _first(item, ("enabled", "state", "status"), "captured"), + _first(item, ("host", "server", "serverAddress", "ipAddress")), + _first(item, ("authPort", "authenticationPort", "port")), + _first(item, ("accountingPort", "acctPort")), + _item_origin_label(item), + _first(item, ("id", "_id")), + ] + ) + return rows + + +def _hotspot_rows(items: List[Dict[str, Any]]) -> List[List[Any]]: + rows: List[List[Any]] = [] + for item in items[:100]: + rows.append( + [ + _first(item, ("name", "code", "id")), + _first(item, ("enabled", "state", "status"), "captured"), + _first(item, ("uses", "used", "usageCount")), + _first(item, ("duration", "durationMinutes", "validity")), + _first(item, ("expiresAt", "expiration", "validUntil")), + _item_origin_label(item), + _first(item, ("id", "_id")), + ] + ) + return rows + + +def _dns_policy_rows(items: List[Dict[str, Any]]) -> List[List[Any]]: + rows: List[List[Any]] = [] + for item in items[:100]: + rows.append( + [ + _first(item, ("name", "displayName", "id")), + _first(item, ("enabled", "state", "status"), "captured"), + _first(item, ("action", "type", "policyType", "mode")), + _compact_value(item.get("network") or item.get("networks") or item.get("networkIds")), + _compact_value(item.get("categories") or item.get("domains") or item.get("rules")), + _item_origin_label(item), + _first(item, ("id", "_id")), + ] + ) + return rows + + +def _network_service_summary_rows(site: Dict[str, Any], source: Path) -> List[List[Any]]: + rows: List[List[Any]] = [] + for key, label in ( + ("wans", "WAN interfaces"), + ("vpn_servers", "VPN 
servers"), + ("vpn_tunnels", "VPN tunnels"), + ("radius", "RADIUS profiles"), + ("hotspot_vouchers", "Hotspot vouchers"), + ("dns_policies", "DNS policies"), + ): + rows.append([label, _service_endpoint_state(_read_site_file(source, site, key))]) + return rows + + def _wifi_network_label(wlan: Dict[str, Any]) -> str: network = wlan.get("network") if isinstance(network, dict): @@ -1500,7 +1633,29 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(headers, rows, f"No {label.lower()} endpoint data captured.")) sections.append("
    ") - sections.append("

    12. Raw Backup Files

    ") + sections.append("

    12. Network Services Backup

    ") + sections.append("

    This section renders service-oriented configuration that is already saved in the raw UniFi JSON backup. Empty tables are still useful because they document that the endpoint was captured and currently returned no configured records.
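The capture-state column in the summary table distinguishes endpoints that returned records from endpoints captured empty. A standalone sketch of the `_service_endpoint_state` logic, with the shared `_plural` helper inlined (the inlined pluralizer is an assumption about that helper's behavior):

```python
def service_endpoint_state(items):
    """Describe an endpoint capture; an empty list still proves capture."""
    if items:
        noun = "record" if len(items) == 1 else "records"
        return f"{len(items)} {noun} captured"
    return "captured empty"
```

The deliberate "captured empty" wording is what lets an empty table document API coverage instead of reading as a collection failure.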

    ") + for site in site_summaries: + sections.append(f"

    {html.escape(str(site.get('name') or 'Site'))}

    ") + sections.append("

    Service Endpoint Summary

    ") + sections.append(_table(["Area", "Capture State"], _network_service_summary_rows(site, source))) + sections.append("

    WAN Interfaces

    ") + sections.append(_table(["Name", "State", "Type", "Addressing", "IP / Gateway", "DNS", "ID"], _wan_rows(_read_site_file(source, site, "wans")), "No WAN endpoint data captured.")) + sections.append("

    VPN Servers

    ") + sections.append(_table(["Name", "State", "Type", "Remote / Peer", "Network / Routes", "Origin", "ID"], _vpn_rows(_read_site_file(source, site, "vpn_servers")), "No VPN server records captured.")) + sections.append("

    VPN Tunnels

    ") + sections.append(_table(["Name", "State", "Type", "Remote / Peer", "Network / Routes", "Origin", "ID"], _vpn_rows(_read_site_file(source, site, "vpn_tunnels")), "No VPN tunnel records captured.")) + sections.append("

    RADIUS Profiles

    ") + sections.append(_table(["Name", "State", "Server", "Auth Port", "Accounting Port", "Origin", "ID"], _radius_rows(_read_site_file(source, site, "radius")), "No RADIUS profile records captured.")) + sections.append("

    Hotspot Vouchers

    ") + sections.append(_table(["Name / Code", "State", "Uses", "Duration", "Expires", "Origin", "ID"], _hotspot_rows(_read_site_file(source, site, "hotspot_vouchers")), "No hotspot voucher records captured.")) + sections.append("

    DNS Policies

    ") + sections.append(_table(["Name", "State", "Action / Type", "Networks", "Rules / Categories", "Origin", "ID"], _dns_policy_rows(_read_site_file(source, site, "dns_policies")), "No DNS policy records captured.")) + if not site_summaries: + sections.append("

    No local Network Application service backup detail captured.

    ") + sections.append("
    ") + + sections.append("

    13. Raw Backup Files

    ") files = sorted(str(p.relative_to(source)) for p in source.rglob("*.json")) sections.append(_table(["JSON backup"], [[f] for f in files], "No JSON backup files found.")) sections.append("
    ") @@ -1515,7 +1670,8 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: "6. Sites, Networks, VLANs, and DHCP", "8. Security Baseline", "11. Firewall and Policy Backup", - "12. Raw Backup Files", + "12. Network Services Backup", + "13. Raw Backup Files", ), ) @@ -1589,7 +1745,8 @@ def _html_shell( ("9", "Recommendations & Implementation Plan"), ("10", "Hardware Refresh & Budget Planning"), ("11", "Firewall and Policy Backup"), - ("12", "Raw Backup Files"), + ("12", "Network Services Backup"), + ("13", "Raw Backup Files"), ] toc_html = "".join( f'
  • {html.escape(str(number))}{html.escape(str(label))}
  • ' From f7e217292f63b5459e8116ab2523ee88b664c1c2 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 07:14:00 -0400 Subject: [PATCH 38/47] Enhance UniFi VLAN and address reporting --- tests/test_unifi_report.py | 26 ++++++- unifi/report.py | 150 +++++++++++++++++++++++++++++++++---- 2 files changed, 161 insertions(+), 15 deletions(-) diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 0b096dd..328da44 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -81,11 +81,26 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): encoding="utf-8", ) (site_dir / "clients.json").write_text( - json.dumps([{"hostname": "client-1", "type": "WIRELESS", "ipAddress": "10.10.0.50", "uplinkDeviceId": "ap-1", "access": {"type": "DEFAULT"}}]), + json.dumps([{"hostname": "client-1", "type": "WIRELESS", "ipAddress": "10.100.0.50", "uplinkDeviceId": "ap-1", "access": {"type": "DEFAULT"}}]), encoding="utf-8", ) (site_dir / "networks.json").write_text( - json.dumps([{"name": "Staff", "vlanId": 100, "subnet": "10.100.0.0/16", "dhcpMode": "server", "zoneId": "zone-1", "metadata": {"origin": "USER_DEFINED"}}]), + json.dumps( + [ + { + "name": "Staff", + "vlanId": 100, + "subnet": "10.100.0.0/16", + "gateway": "10.100.0.1", + "dhcpMode": "server", + "dhcpRangeStart": "10.100.0.10", + "dhcpRangeEnd": "10.100.0.250", + "dnsServers": ["10.10.0.5"], + "zoneId": "zone-1", + "metadata": {"origin": "USER_DEFINED"}, + } + ] + ), encoding="utf-8", ) (site_dir / "wifi.json").write_text( @@ -203,6 +218,13 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "HTTP 404" in html assert "Configuration Backup Completeness" in html assert "Networks / VLANs" in html + assert "Configured Networks / VLANs" in html + assert "Observed Client Address Space" in html + assert "10.100.0.0/16" in html + assert "10.100.0.10 - 10.100.0.250" in html + assert "10.100.0.0/24" in html + assert "Staff 
+    assert "not an authoritative DHCP lease export" in html
     assert "0 / 2 available" in html
     assert "captured empty" in html
     assert "not exposed (HTTP 404)" in html
diff --git a/unifi/report.py b/unifi/report.py
index 240d153..ed665af 100644
--- a/unifi/report.py
+++ b/unifi/report.py
@@ -1,6 +1,7 @@
 #!/usr/bin/env python3
 import argparse
 import html
+import ipaddress
 import json
 import os
 import re
@@ -640,6 +641,136 @@ def _network_service_summary_rows(site: Dict[str, Any], source: Path) -> List[Li
     return rows
 
 
+def _network_flags(netw: Dict[str, Any]) -> str:
+    flags: List[str] = []
+    if _as_bool(netw.get("default")):
+        flags.append("default")
+    management = _first(netw, ("management",))
+    if management:
+        flags.append(f"mgmt {management}")
+    return ", ".join(flags)
+
+
+def _network_subnet(netw: Dict[str, Any]) -> str:
+    return _first(netw, ("subnet", "cidr", "ipSubnet", "ipv4Subnet", "networkAddress", "ipAddress"))
+
+
+def _network_gateway(netw: Dict[str, Any]) -> str:
+    return _first(netw, ("gateway", "gatewayIp", "gatewayAddress", "routerIp"))
+
+
+def _network_dhcp_mode(netw: Dict[str, Any]) -> str:
+    dhcp = netw.get("dhcp")
+    if isinstance(dhcp, dict):
+        return _first(dhcp, ("mode", "type", "enabled"), _first(netw, ("dhcpMode", "dhcpRelay", "dhcpServer")))
+    return _first(netw, ("dhcpMode", "dhcpRelay", "dhcpServer", "dhcpEnabled"))
+
+
+def _network_dhcp_range(netw: Dict[str, Any]) -> str:
+    dhcp = netw.get("dhcp")
+    if isinstance(dhcp, dict):
+        start = _first(dhcp, ("start", "rangeStart", "startAddress"))
+        end = _first(dhcp, ("end", "rangeEnd", "endAddress"))
+    else:
+        start = _first(netw, ("dhcpRangeStart", "dhcpStart", "rangeStart", "dhcpStartAddress"))
+        end = _first(netw, ("dhcpRangeEnd", "dhcpEnd", "rangeEnd", "dhcpEndAddress"))
+    if start and end:
+        return f"{start} - {end}"
+    return start or end
+
+
+def _network_dns(netw: Dict[str, Any]) -> str:
+    dhcp = netw.get("dhcp")
+    if isinstance(dhcp, dict):
+        nested = _compact_value(dhcp.get("dnsServers") or dhcp.get("dns") or dhcp.get("nameservers"))
+        if nested:
+            return nested
+    return _compact_value(netw.get("dnsServers") or netw.get("dns") or netw.get("nameservers"))
+
+
+def _network_rows(networks: List[Dict[str, Any]], zone_names: Dict[str, str]) -> List[List[Any]]:
+    rows: List[List[Any]] = []
+    for netw in networks:
+        metadata_payload = netw.get("metadata") if isinstance(netw.get("metadata"), dict) else {}
+        rows.append(
+            [
+                _first(netw, ("name", "displayName")),
+                _first(netw, ("vlanId", "vlan", "vlan_id")),
+                _yes_no(netw.get("enabled")),
+                _network_flags(netw),
+                _network_subnet(netw),
+                _network_gateway(netw),
+                _network_dhcp_mode(netw),
+                _network_dhcp_range(netw),
+                _network_dns(netw),
+                zone_names.get(str(netw.get("zoneId") or ""), _first(netw, ("zoneId",))),
+                _first(metadata_payload, ("origin",)),
+            ]
+        )
+    return rows
+
+
+def _client_ip(client: Dict[str, Any]) -> ipaddress.IPv4Address | ipaddress.IPv6Address | None:
+    raw = _first(client, ("ipAddress", "ip"))
+    if not raw:
+        return None
+    try:
+        return ipaddress.ip_address(raw)
+    except ValueError:
+        return None
+
+
+def _network_for_ip(address: ipaddress.IPv4Address | ipaddress.IPv6Address, networks: List[Dict[str, Any]]) -> str:
+    for netw in networks:
+        subnet = _network_subnet(netw)
+        if not subnet:
+            continue
+        try:
+            parsed = ipaddress.ip_network(subnet, strict=False)
+        except ValueError:
+            continue
+        if address.version == parsed.version and address in parsed:
+            name = _first(netw, ("name", "displayName"), "Network")
+            vlan = _first(netw, ("vlanId", "vlan", "vlan_id"))
+            return f"{name} (VLAN {vlan})" if vlan else name
+    return "not matched to captured subnet"
+
+
+def _client_address_observation_rows(clients: List[Dict[str, Any]], networks: List[Dict[str, Any]]) -> List[List[Any]]:
+    grouped: Dict[str, Dict[str, Any]] = {}
+    for client in clients:
+        address = _client_ip(client)
+        if not address or address.version != 4:
+            continue
+        prefix = ipaddress.ip_network(f"{address}/24", strict=False)
+        key = str(prefix)
+        row = grouped.setdefault(key, {"addresses": [], "types": {}, "uplinks": {}, "matched": ""})
+        row["addresses"].append(address)
+        client_type = _first(client, ("type", "connectionType"), "Unknown")
+        row["types"][client_type] = row["types"].get(client_type, 0) + 1
+        uplink = _first(client, ("uplinkDeviceName", "uplinkDeviceId", "uplinkDeviceMac"), "Unknown")
+        row["uplinks"][uplink] = row["uplinks"].get(uplink, 0) + 1
+        row["matched"] = row["matched"] or _network_for_ip(address, networks)
+
+    rows: List[List[Any]] = []
+    for prefix, data in sorted(grouped.items(), key=lambda item: (-len(item[1]["addresses"]), item[0])):
+        addresses = sorted(data["addresses"])
+        first_last = f"{addresses[0]} - {addresses[-1]}" if addresses else ""
+        top_uplinks = ", ".join(f"{key}: {value}" for key, value in list(sorted(data["uplinks"].items(), key=lambda kv: (-kv[1], kv[0])))[:3])
+        rows.append(
+            [
+                prefix,
+                len(addresses),
+                first_last,
+                _fmt_counts(dict(sorted(data["types"].items(), key=lambda kv: (-kv[1], kv[0])))),
+                data["matched"],
+                top_uplinks,
+                "observed clients only; not an authoritative DHCP lease export",
+            ]
+        )
+    return rows
+
+
 def _wifi_network_label(wlan: Dict[str, Any]) -> str:
     network = wlan.get("network")
     if isinstance(network, dict):
@@ -1476,24 +1607,17 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
         sections.append("</section>")
 
     sections.append("<section><h2>6. Sites, Networks, VLANs, and DHCP</h2>")
+    sections.append("<p>This section renders configured network/VLAN fields when the UniFi API exposes them, then separately summarizes observed client address space. The observed address table is useful for planning, but it is not a full DHCP lease export.</p>")
     for site in site_summaries:
         sections.append(f"<h3>{html.escape(str(site.get('name') or site.get('id') or 'Site'))}</h3>")
         networks = _read_site_file(source, site, "networks")
+        clients = _read_site_file(source, site, "clients")
         zones = _read_site_file(source, site, "firewall_zones")
         zone_names = {str(zone.get("id")): str(zone.get("name") or zone.get("id")) for zone in zones if zone.get("id")}
-        rows = []
-        for netw in networks:
-            metadata_payload = netw.get("metadata") if isinstance(netw.get("metadata"), dict) else {}
-            rows.append([
-                _first(netw, ("name", "displayName")),
-                _first(netw, ("vlanId", "vlan", "vlan_id")),
-                _yes_no(netw.get("enabled")),
-                _yes_no(netw.get("default")),
-                _first(netw, ("management",)),
-                zone_names.get(str(netw.get("zoneId") or ""), _first(netw, ("zoneId",))),
-                _first(metadata_payload, ("origin",)),
-            ])
-        sections.append(_table(["Network", "VLAN", "Enabled", "Default", "Management", "Zone", "Origin"], rows, "No network/VLAN endpoint data captured for this site."))
+        sections.append("<h4>Configured Networks / VLANs</h4>")
+        sections.append(_table(["Network", "VLAN", "Enabled", "Flags", "Subnet", "Gateway", "DHCP", "DHCP Range", "DNS", "Zone", "Origin"], _network_rows(networks, zone_names), "No network/VLAN endpoint data captured for this site."))
+        sections.append("<h4>Observed Client Address Space</h4>")
+        sections.append(_table(["Observed Prefix", "Clients", "Observed IP Range", "Client Mix", "Matched Network", "Top Uplinks", "Confidence"], _client_address_observation_rows(clients, networks), "No client IP addresses captured for this site."))
     if not site_summaries:
         sections.append("<p>No local Network Application site detail captured yet.</p>")
     sections.append("</section>")

From f4b5308e7a671f84afbabe0342d26fd3401cdf5e Mon Sep 17 00:00:00 2001
From: "techmore.co"
Date: Wed, 6 May 2026 12:26:01 -0400
Subject: [PATCH 39/47] Polish UniFi report navigation and layout

---
 tests/test_unifi_report.py |   5 +
 unifi/report.py            | 201 +++++++++++++++++++++++++++++++------
 2 files changed, 175 insertions(+), 31 deletions(-)

diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py
index 328da44..5947aa7 100644
--- a/tests/test_unifi_report.py
+++ b/tests/test_unifi_report.py
@@ -168,6 +168,9 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     exec_html = Path(paths["exec_html"]).read_text(encoding="utf-8")
     backup_html = Path(paths["backup_html"]).read_text(encoding="utf-8")
     assert "TM UniFi Baseline" in html
+    assert 'class="cover-site">Main</p>' in html

    ' in html + assert 'href="#1-executive-summary"' in html + assert 'id="1-executive-summary"' in html assert "U7-Pro-1" in html assert "IW HD" in html assert "USW-48" in html @@ -224,6 +227,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "10.100.0.10 - 10.100.0.250" in html assert "10.100.0.0/24" in html assert "Staff (VLAN 100)" in html + assert "U7-Pro-1 (U7-Pro): 1" in html assert "not an authoritative DHCP lease export" in html assert "0 / 2 available" in html assert "captured empty" in html @@ -239,6 +243,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "Network Services Backup" in backup_html assert "Hardware Refresh & Budget Planning" not in backup_html assert "Connected Clients" not in backup_html + assert "End of Report" in backup_html def test_unifi_profiles_discovers_numbered_site_profiles(monkeypatch): diff --git a/unifi/report.py b/unifi/report.py index ed665af..97ffe33 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -364,16 +364,40 @@ def _section_title(block: str) -> str: return html.unescape(re.sub(r"<[^>]+>", "", match.group(1))).strip() +def _section_anchor(title: str) -> str: + clean = title.lower().replace("&", " and ") + clean = re.sub(r"[^a-z0-9]+", "-", clean).strip("-") + return clean or "section" + + +def _anchor_sections(body: str) -> str: + seen: Dict[str, int] = {} + + def repl(match: re.Match[str]) -> str: + attrs = match.group(1) or "" + title = html.unescape(re.sub(r"<[^>]+>", "", match.group(2))).strip() + anchor = _section_anchor(title) + count = seen.get(anchor, 0) + seen[anchor] = count + 1 + if count: + anchor = f"{anchor}-{count + 1}" + if " id=" not in attrs: + attrs = f'{attrs} id="{html.escape(anchor, quote=True)}"' + return f"

    {match.group(2)}

    " + + return re.sub(r"]*)>\s*

    (.*?)

    ", repl, body, flags=re.DOTALL) + + def _select_sections(body: str, wanted_prefixes: Iterable[str]) -> str: prefixes = tuple(wanted_prefixes) - blocks = re.findall(r"
    .*?
    ", body, re.DOTALL) + blocks = re.findall(r"]*)?>.*?
    ", body, re.DOTALL) selected = [block for block in blocks if _section_title(block).startswith(prefixes)] return "\n".join(selected) -def _toc_items(section_body: str) -> List[tuple[str, str]]: - items: List[tuple[str, str]] = [] - for block in re.findall(r"
    .*?
    ", section_body, re.DOTALL): +def _toc_items(section_body: str) -> List[tuple[str, str, str]]: + items: List[tuple[str, str, str]] = [] + for block in re.findall(r"]*)?>.*?
    ", section_body, re.DOTALL): title = _section_title(block) if not title: continue @@ -381,7 +405,7 @@ def _toc_items(section_body: str) -> List[tuple[str, str]]: number, label = title.split(". ", 1) else: number, label = "Guide", title.replace("Guide. ", "") - items.append((number, label)) + items.append((number, label, _section_anchor(title))) return items @@ -641,6 +665,38 @@ def _network_service_summary_rows(site: Dict[str, Any], source: Path) -> List[Li return rows +def _display_from_safe_name(value: str) -> str: + cleaned = value.replace("_", " ").replace("-", " ").strip() + if not cleaned: + return "" + cleaned = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", cleaned) + return " ".join(part.capitalize() if part.isupper() else part for part in cleaned.split()) + + +def _report_scope_label(source: Path, site_summaries: List[Dict[str, Any]], all_devices: List[Dict[str, Any]]) -> str: + if source.name not in {"latest", "backups"} and source.parent.name in {"sites", "backups"}: + derived = _display_from_safe_name(source.name) + if derived: + return derived + if len(site_summaries) == 1: + site_name = str(site_summaries[0].get("name") or "") + if site_name and site_name.lower() != "default": + return site_name + gateways = [device for device in all_devices if _device_role(device) == "Gateway" and _device_name(device) != "Unknown device"] + if gateways: + gateway_name = _display_from_safe_name(_device_name(gateways[0])) + if len(site_summaries) == 1: + site_name = str(site_summaries[0].get("name") or "") + if site_name: + return f"{gateway_name} / {site_name}" + return gateway_name + if len(site_summaries) == 1: + return str(site_summaries[0].get("name") or site_summaries[0].get("id") or "UniFi site") + if site_summaries: + return f"{len(site_summaries)} UniFi sites" + return "UniFi network" + + def _network_flags(netw: Dict[str, Any]) -> str: flags: List[str] = [] if _as_bool(netw.get("default")): @@ -736,7 +792,7 @@ def _network_for_ip(address: ipaddress.IPv4Address | 
ipaddress.IPv6Address, netw return "not matched to captured subnet" -def _client_address_observation_rows(clients: List[Dict[str, Any]], networks: List[Dict[str, Any]]) -> List[List[Any]]: +def _client_address_observation_rows(clients: List[Dict[str, Any]], networks: List[Dict[str, Any]], device_names: Dict[str, str]) -> List[List[Any]]: grouped: Dict[str, Dict[str, Any]] = {} for client in clients: address = _client_ip(client) @@ -748,7 +804,8 @@ def _client_address_observation_rows(clients: List[Dict[str, Any]], networks: Li row["addresses"].append(address) client_type = _first(client, ("type", "connectionType"), "Unknown") row["types"][client_type] = row["types"].get(client_type, 0) + 1 - uplink = _first(client, ("uplinkDeviceName", "uplinkDeviceId", "uplinkDeviceMac"), "Unknown") + uplink_raw = _first(client, ("uplinkDeviceName", "uplinkDeviceId", "uplinkDeviceMac"), "Unknown") + uplink = device_names.get(uplink_raw, uplink_raw) row["uplinks"][uplink] = row["uplinks"].get(uplink, 0) + 1 row["matched"] = row["matched"] or _network_for_ip(address, networks) @@ -1606,7 +1663,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(["Name", "Role", "Model", "Status", "Update", "IP", "MAC / ID", "Firmware"], device_rows)) sections.append("
    ") - sections.append("

    6. Sites, Networks, VLANs, and DHCP

    ") + sections.append("

    6. Sites, Networks, VLANs, and DHCP

    ") sections.append("

    This section renders configured network/VLAN fields when the UniFi API exposes them, then separately summarizes observed client address space. The observed address table is useful for planning, but it is not a full DHCP lease export.

    ") for site in site_summaries: sections.append(f"

    {html.escape(str(site.get('name') or site.get('id') or 'Site'))}

    ") @@ -1617,7 +1674,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append("

    Configured Networks / VLANs

    ") sections.append(_table(["Network", "VLAN", "Enabled", "Flags", "Subnet", "Gateway", "DHCP", "DHCP Range", "DNS", "Zone", "Origin"], _network_rows(networks, zone_names), "No network/VLAN endpoint data captured for this site.")) sections.append("

    Observed Client Address Space

    ") - sections.append(_table(["Observed Prefix", "Clients", "Observed IP Range", "Client Mix", "Matched Network", "Top Uplinks", "Confidence"], _client_address_observation_rows(clients, networks), "No client IP addresses captured for this site.")) + sections.append(_table(["Observed Prefix", "Clients", "Observed IP Range", "Client Mix", "Matched Network", "Top Uplinks", "Confidence"], _client_address_observation_rows(clients, networks, device_names), "No client IP addresses captured for this site.")) if not site_summaries: sections.append("

    No local Network Application site detail captured yet.

    ") sections.append("
     sections.append("</section>")
@@ -1710,7 +1767,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
     sections.append("<p>Not included: tax, freight, optics/transceivers, cabling, installation labor, configuration labor, licensing/subscription changes, contingency, or reseller/E-rate discounts.</p>")
     sections.append("</section>")
 
-    sections.append("<section><h2>11. Firewall and Policy Backup</h2>")
+    sections.append('<section class="wide-section"><h2>11. Firewall and Policy Backup</h2>')
     for site in site_summaries:
         sections.append(f"<h3>{html.escape(str(site.get('name') or 'Site'))}</h3>")
         zones = _read_site_file(source, site, "firewall_zones")
@@ -1757,7 +1814,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
         sections.append(_table(headers, rows, f"No {label.lower()} endpoint data captured."))
         sections.append("</section>")
 
-    sections.append("<section><h2>12. Network Services Backup</h2>")
+    sections.append('<section class="wide-section"><h2>12. Network Services Backup</h2>')
     sections.append("<p>This section renders service-oriented configuration that is already saved in the raw UniFi JSON backup. Empty tables are still useful because they document that the endpoint was captured and currently returned no configured records.</p>")
     for site in site_summaries:
         sections.append(f"<h3>{html.escape(str(site.get('name') or 'Site'))}</h3>")
@@ -1798,6 +1855,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]:
             "13. Raw Backup Files",
         ),
     )
+    scope_label = _report_scope_label(source, site_summaries, all_devices)
 
     html_doc = _html_shell(
         "TM UniFi Baseline",
         metadata,
         report_title="UniFi Network Health & Backup Report",
         report_subtitle="Complete assessment, configuration evidence, and client visibility.",
+        scope_label=scope_label,
     )
     exec_doc = _html_shell(
         "TM UniFi Executive Summary",
         report_title="UniFi Executive Summary",
         report_subtitle="Leadership-ready risks, priorities, and data confidence.",
         toc_items=_toc_items(exec_body),
+        scope_label=scope_label,
     )
     backup_doc = _html_shell(
         "TM UniFi Backup Settings",
         report_title="UniFi Backup Settings Report",
         report_subtitle="Configuration backup coverage, security policy evidence, and raw JSON index.",
         toc_items=_toc_items(backup_body),
+        scope_label=scope_label,
     )
     html_path = output / "report.html"
     pdf_path = output / "report.pdf"
@@ -1851,30 +1912,46 @@ def _html_shell(
     *,
     report_title: str = "UniFi Network Health & Backup Report",
     report_subtitle: str = "TM UniFi Baseline",
-    toc_items: List[tuple[str, str]] | None = None,
+    toc_items: List[tuple[str, str, str]] | None = None,
+    scope_label: str = "UniFi network",
 ) -> str:
     release = datetime.now().strftime("%Y_%m_%d")
     collected = metadata.get("collectedAt") or "not captured"
     toc_items = toc_items or [
-        ("1", "Executive Summary"),
-        ("Guide", "How to Use This Report"),
-        ("2", "Collection Coverage"),
-        ("3", "Network Overview"),
-        ("4", "Configuration Backup Completeness"),
-        ("5", "Device Health & Inventory"),
-        ("6", "Sites, Networks, VLANs, and DHCP"),
-        ("7", "WiFi and Client Visibility"),
-        ("7A", "Client Analysis"),
-        ("8", "Security Baseline"),
-        ("9", "Recommendations & Implementation Plan"),
-        ("10", "Hardware Refresh & Budget Planning"),
-        ("11", "Firewall and Policy Backup"),
-        ("12", "Network Services Backup"),
-        ("13", "Raw Backup Files"),
+        ("1", "Executive Summary", "1-executive-summary"),
+        ("Guide", "How to Use This Report", "guide-how-to-use-this-report"),
+        ("2", "Collection Coverage", "2-collection-coverage"),
+        ("3", "Network Overview", "3-network-overview"),
+        ("4", "Configuration Backup Completeness", "4-configuration-backup-completeness"),
+        ("5", "Device Health & Inventory", "5-device-health-and-inventory"),
+        ("6", "Sites, Networks, VLANs, and DHCP", "6-sites-networks-vlans-and-dhcp"),
+        ("7", "WiFi and Client Visibility", "7-wifi-and-client-visibility"),
+        ("7A", "Client Analysis", "7a-client-analysis"),
+        ("8", "Security Baseline", "8-security-baseline"),
+        ("9", "Recommendations & Implementation Plan", "9-recommendations-and-implementation-plan"),
+        ("10", "Hardware Refresh & Budget Planning", "10-hardware-refresh-and-budget-planning"),
+        ("11", "Firewall and Policy Backup", "11-firewall-and-policy-backup"),
+        ("12", "Network Services Backup", "12-network-services-backup"),
+        ("13", "Raw Backup Files", "13-raw-backup-files"),
     ]
+    body = _anchor_sections(body)
+    end_report = f"""
+<section class="end-report">
+  <div class="end-report-inner">
+    <h2>End of Report</h2>
+    <p>{html.escape(report_title)}</p>
+    <p>{html.escape(scope_label)}</p>
+    <p>Prepared by Techmore. Release {html.escape(release)}.</p>
+  </div>
+</section>
+"""
     toc_html = "".join(
-        f'<li><span class="toc-num">{html.escape(str(number))}</span><span>{html.escape(str(label))}</span></li>'
-        for number, label in toc_items
+        (
+            f'<li><a class="toc-link" href="#{html.escape(anchor, quote=True)}">'
+            f'<span class="toc-num">{html.escape(str(number))}</span>'
+            f'<span>{html.escape(str(label))}</span></a></li>'
+        )
+        for number, label, anchor in toc_items
     )
     return f"""
@@ -1920,6 +1997,32 @@ def _html_shell(
         font-size: 8px;
       }}
     }}
+    @page wide-page {{
+      size: A4 landscape;
+      margin: 12mm 8mm 10mm;
+      background: var(--olive-100);
+      @top-left {{
+        content: "TM UNIFI BASELINE";
+        color: #575d3d;
+        font-family: "Inter", system-ui, -apple-system, "Segoe UI", Helvetica, Arial, sans-serif;
+        font-size: 8px;
+        font-weight: 700;
+        letter-spacing: 0.12em;
+        text-transform: uppercase;
+      }}
+      @top-right {{
+        content: "Release {release}";
+        color: #78716c;
+        font-family: "Inter", system-ui, -apple-system, "Segoe UI", Helvetica, Arial, sans-serif;
+        font-size: 8px;
+      }}
+      @bottom-center {{
+        content: "Page " counter(page) " of " counter(pages);
+        color: #78716c;
+        font-family: "Inter", system-ui, -apple-system, "Segoe UI", Helvetica, Arial, sans-serif;
+        font-size: 8px;
+      }}
+    }}
     :root {{
       --bg: #eef0e6;
       --ink: #0f172a;
@@ -1994,6 +2097,14 @@ def _html_shell(
       color: var(--olive-200);
      margin: 0 0 12px;
     }}
+    .cover-site {{
+      font-size: 13px;
+      color: var(--olive-100);
+      margin: 0 0 10px;
+      letter-spacing: 0.08em;
+      text-transform: uppercase;
+      opacity: 0.86;
+    }}
     .cover-run-ts {{
       font-size: 11px;
       color: var(--olive-200);
@@ -2035,12 +2146,17 @@ def _html_shell(
       padding: 0;
     }}
     .toc-list li {{
-      display: flex;
-      gap: 12px;
       padding: 7px 0;
       border-bottom: 1px solid var(--line);
       font-size: 12px;
     }}
+    .toc-link {{
+      display: flex;
+      gap: 12px;
+      color: inherit;
+      text-decoration: none;
+    }}
+    .toc-link:hover span:last-child {{ color: var(--olive-700); text-decoration: underline; }}
     .toc-num {{
       font-family: "Playfair Display", Georgia, "Times New Roman", serif;
       font-size: 14px;
@@ -2126,6 +2242,27 @@ def _html_shell(
     .health-card--warn .health-card-stat {{ color: #b45309; }}
     .health-card--good .health-card-stat {{ color: #15803d; }}
     .health-card-detail {{ font-size: 9px; color: var(--muted); margin-top: 3px; }}
+    .wide-section {{ page: wide-page; page-break-before: always; }}
+    .wide-section table {{ font-size: 8.5px; }}
+    .wide-section th, .wide-section td {{ padding: 3px 4px; }}
+    .end-report {{
+      page-break-before: always;
+      min-height: 220mm;
+      display: flex;
+      align-items: center;
+      justify-content: center;
+      text-align: center;
+    }}
+    .end-report-inner {{
+      border-top: 2px solid var(--olive-400);
+      border-bottom: 2px solid var(--olive-400);
+      padding: 32px 80px;
+    }}
+    .end-report h2 {{
+      border: 0;
+      margin: 0 0 12px;
+      font-size: 28px;
+    }}
     section {{
       page-break-inside: auto;
     }}
@@ -2136,6 +2273,7 @@ def _html_shell(
      <p class="cover-brand">Techmore</p>
      <h1>{html.escape(report_title)}</h1>
+     <p class="cover-site">{html.escape(scope_label)}</p>
      <p class="cover-subtitle">{html.escape(report_subtitle)}</p>
      <p class="cover-run-ts">Collected: {html.escape(str(collected))}</p>
@@ -2152,6 +2290,7 @@ def _html_shell(
     {body}
+    {end_report}
 """

From 8422ea4e6b34092d853243c11f3288ab9a832567 Mon Sep 17 00:00:00 2001
From: "techmore.co"
Date: Wed, 6 May 2026 12:32:30 -0400
Subject: [PATCH 40/47] Clarify UniFi telemetry and firewall defaults

---
 tests/test_unifi_report.py |  6 +++
 unifi/report.py            | 91 ++++++++++++++++++++++++++++++++++----
 2 files changed, 89 insertions(+), 8 deletions(-)

diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py
index 5947aa7..4a1d479 100644
--- a/tests/test_unifi_report.py
+++ b/tests/test_unifi_report.py
@@ -204,6 +204,9 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     assert "IP: 224.0.0.0/24" in html
     assert "IPv4 and IPv6; UDP" in html
     assert "Broad allow policies" in html
+    assert "system-defined" in html
+    assert "controller/system defaults" in html
+    assert "enabled user-defined broad allow" not in html
     assert "Network Services Backup" in html
     assert "Internet 1" in html
     assert "Corp VPN" in html
@@ -213,6 +216,8 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     assert "Firmware" in html
     assert "Network Application version" in html
     assert "10.3.58" in html
+    assert "Telemetry Recovery Plan" in html
+    assert "API limitation" in html
     assert "Interface Telemetry Coverage" in html
     assert "ports, radios" in html
     assert "capability flag only" in html
@@ -239,6 +244,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path):
     assert "Firewall and Policy Backup" not in exec_html
     assert "UniFi Backup Settings Report" in backup_html
     assert "Configuration Backup Completeness" in backup_html
+    assert "Telemetry Recovery Plan" in backup_html
     assert "Firewall and Policy Backup" in backup_html
     assert "Network Services Backup" in backup_html
     assert "Hardware Refresh & Budget Planning" not in backup_html
diff --git a/unifi/report.py b/unifi/report.py
index 97ffe33..d2c053e 100644
--- a/unifi/report.py
+++ b/unifi/report.py
@@ -523,13 +523,47 @@ def _policy_origin_label(policy: Dict[str, Any]) -> str:
     return ""
 
 
-def _broad_allow_policy_count(policies: Iterable[Dict[str, Any]]) -> int:
-    count = 0
+def _policy_origin_group(policy: Dict[str, Any]) -> str:
+    origin = _policy_origin_label(policy) or _first(policy, ("origin", "source"))
+    origin = origin.upper().strip()
+    if origin in {"SYSTEM_DEFINED", "SYSTEM", "DEFAULT"}:
+        return "system"
+    if origin in {"USER_DEFINED", "USER", "CUSTOM"}:
+        return "user"
+    return "other"
+
+
+def _is_broad_allow_policy(policy: Dict[str, Any]) -> bool:
+    name = _first(policy, ("name", "description", "id")).strip().lower()
+    return _as_bool(policy.get("enabled")) and _action_label(policy).upper().startswith("ALLOW") and "allow all" in name
+
+
+def _broad_allow_policy_summary(policies: Iterable[Dict[str, Any]]) -> Dict[str, int]:
+    summary = {"total": 0, "system": 0, "user": 0, "other": 0}
     for policy in policies:
-        name = _first(policy, ("name", "description", "id")).strip().lower()
-        if _as_bool(policy.get("enabled")) and _action_label(policy).upper().startswith("ALLOW") and "allow all" in name:
-            count += 1
-    return count
+        if not _is_broad_allow_policy(policy):
+            continue
+        origin = _policy_origin_group(policy)
+        summary["total"] += 1
+        summary[origin] = summary.get(origin, 0) + 1
+    return summary
+
+
+def _broad_allow_policy_interpretation(summary: Dict[str, int]) -> str:
+    total = summary.get("total", 0)
+    if not total:
+        return "No enabled broad allow policies detected by policy name/action."
+    parts = []
+    for key, label in (("system", "system-defined"), ("user", "user-defined"), ("other", "unknown-origin")):
+        count = summary.get(key, 0)
+        if count:
+            parts.append(f"{count} {label}")
+    detail = ", ".join(parts)
+    if summary.get("user", 0):
+        return f"{_plural(total, 'enabled broad allow policy', 'enabled broad allow policies')} detected ({detail}). Review user-defined broad allows first, then validate default zone posture."
+    if summary.get("system", 0):
+        return f"{_plural(total, 'enabled broad allow policy', 'enabled broad allow policies')} detected ({detail}). These appear to be controller/system defaults; validate zone posture before treating them as custom risk."
+    return f"{_plural(total, 'enabled broad allow policy', 'enabled broad allow policies')} detected ({detail}). Validate policy origin before treating them as intended defaults."
 
 
 def _item_origin_label(item: Dict[str, Any]) -> str:
@@ -954,6 +988,31 @@ def _telemetry_gap_summary(telemetry_probes: List[Dict[str, Any]]) -> str:
     return f"0 of {total} telemetry probe endpoint(s) returned data; observed statuses: {', '.join(statuses)}."
 
 
+def _telemetry_recovery_rows(telemetry_probes: List[Dict[str, Any]], net: Dict[str, Any]) -> List[List[Any]]:
+    connection = _first(net, ("connectionType",), "configured")
+    if not telemetry_probes:
+        return [
+            ["Current API telemetry", "Not captured", "No detailed switch-port or AP-radio telemetry probes were saved in this run."],
+            ["Best recovery source", "Recommended", "Export UniFi Network support data or controller UI screenshots for switch ports, PoE draw, AP channel, AP power, and RF utilization before final migration planning."],
+        ]
+
+    available = sum(1 for probe in telemetry_probes if probe.get("available"))
+    total = len(telemetry_probes)
+    if available:
+        return [
+            ["Current API telemetry", "Partial", f"{available} of {total} detailed telemetry probes returned data from the {connection} Network Integration API path."],
+            ["Planning caution", "Validate", "Use captured telemetry where present, but field-check missing switch-port and AP-radio details before final port maps, PoE budgets, or RF recommendations."],
+            ["Next automation step", "Keep enabled", "Leave telemetry probes in the run so this report automatically improves when the controller/API exposes more structured metrics."],
+        ]
+
+    return [
+        ["Current API telemetry", "Low", f"0 of {total} detailed telemetry probes returned data from the {connection} Network Integration API path."],
+        ["What this means", "API limitation", "Inventory, clients, VLANs, WiFi, and firewall backup can still be useful; this capture cannot validate per-port PoE draw, link speed, AP channel utilization, or RF utilization by itself."],
+        ["Best recovery source", "Recommended", "Export UniFi Network support data or controller UI screenshots for switch ports, PoE draw, AP channel, AP power, and RF utilization before final migration planning."],
+        ["Next automation step", "Keep enabled", "Leave telemetry probes in the run so this report automatically improves when the controller/API exposes more structured metrics."],
+    ]
+
+
 def _wifi_security_weak(wifi: Iterable[Dict[str, Any]]) -> List[str]:
     weak: List[str] = []
     for wlan in wifi:
@@ -1018,6 +1077,9 @@ def _top_risks(
     logging_disabled = sum(1 for policy in all_firewall_policies if not _as_bool(policy.get("loggingEnabled")))
     if logging_disabled:
         risks.append(f"Firewall visibility may be limited - {_plural(logging_disabled, 'captured firewall policy', 'captured firewall policies')} have logging disabled.")
+    broad_allow = _broad_allow_policy_summary(all_firewall_policies)
+    if broad_allow.get("user", 0):
+        risks.append(f"User-defined broad allow policies require review - {_plural(broad_allow['user'], 'enabled user-defined broad allow policy', 'enabled user-defined broad allow policies')} detected by policy name/action.")
     else:
         risks.append("No firewall policies were captured; do not treat this run as a complete security backup until policy endpoint access is validated.")
@@ -1085,13 +1147,19 @@ def _security_baseline_rows(
 ) -> List[List[Any]]:
     weak_wifi = _wifi_security_weak(all_wifi)
     logging_enabled = sum(1 for policy in all_firewall_policies if _as_bool(policy.get("loggingEnabled")))
-    broad_allow = _broad_allow_policy_count(all_firewall_policies)
+    broad_allow = _broad_allow_policy_summary(all_firewall_policies)
+    if broad_allow.get("user", 0):
+
broad_allow_status = "Review" + elif broad_allow.get("total", 0): + broad_allow_status = "Document" + else: + broad_allow_status = "Not detected" return [ ["Network segmentation", "Review" if network_count <= 2 else "Present", f"{_plural(network_count, 'network/VLAN definition')} captured."], ["Wireless authentication", "Review" if weak_wifi else ("Present" if all_wifi else "Missing"), "; ".join(weak_wifi[:2]) if weak_wifi else f"{_plural(len(all_wifi), 'SSID')} captured."], ["Firewall rules", "Present" if all_firewall_policies else "Missing", f"{_plural(len(all_firewall_policies), 'policy', 'policies')} captured."], ["Firewall logging", "Review" if all_firewall_policies and logging_enabled < len(all_firewall_policies) else "Present", f"{logging_enabled} of {len(all_firewall_policies)} policies have logging enabled."], - ["Broad allow policies", "Review" if broad_allow else "Not detected", f"{_plural(broad_allow, 'enabled broad allow policy', 'enabled broad allow policies')} detected by policy name/action."], + ["Broad allow policies", broad_allow_status, _broad_allow_policy_interpretation(broad_allow)], ["DNS filtering policy", "Missing" if not all_dns_policies else "Present", f"{_plural(len(all_dns_policies), 'DNS policy', 'DNS policies')} captured."], ["RADIUS / identity", "Present" if all_radius else "Not captured", f"{_plural(len(all_radius), 'RADIUS profile')} captured."], ] @@ -1603,6 +1671,13 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: if auth_guidance: sections.append("

<h3>Credential / Access Fix</h3>")
        sections.append("<ul>" + "".join(f"<li>{html.escape(item)}</li>" for item in auth_guidance) + "</ul>")
+    sections.append("<h3>Telemetry Recovery Plan</h3>")
+    sections.append(
+        _table(
+            ["Area", "Status", "Action / Interpretation"],
+            _telemetry_recovery_rows(telemetry_probes, net),
+        )
+    )
    sections.append("<h2>3. Network Overview</h2>

    ") From 3c5eb49c3dbf42db83624d0bb7aaedbafb9a1eac Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 12:39:01 -0400 Subject: [PATCH 41/47] Improve UniFi telemetry probe attribution --- ROADMAP.md | 2 + tests/test_unifi_report.py | 24 ++++++++++-- unifi/collect.py | 80 +++++++++++++++++++++++++++++++++----- unifi/report.py | 7 +++- 4 files changed, 99 insertions(+), 14 deletions(-) diff --git a/ROADMAP.md b/ROADMAP.md index 597622e..6e13784 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -122,6 +122,8 @@ This project is currently functional as a Python reporting pipeline. The immedia structured coverage evidence in the backup/report.~~ - ~~Add a UniFi configuration backup completeness matrix showing captured, captured-empty, and unsupported endpoint coverage.~~ +- ~~Split UniFi per-device telemetry probes by sampled AP, switch, and gateway + roles so future exposed endpoints can be attributed to the right hardware.~~ - Add deeper UniFi switch/AP port and radio telemetry when the controller API exposes it. 
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 4a1d479..226ac36 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -63,7 +63,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): "hotspot_vouchers": 1, "vpn_tunnels": 0, "telemetry_probe_available": 0, - "telemetry_probe_total": 2, + "telemetry_probe_total": 3, }, } ] @@ -156,6 +156,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): [ {"label": "site_ports", "purpose": "Per-site switch port telemetry", "path": "/ports", "available": False, "status": 404, "itemCount": 0}, {"label": "wireless_radios", "purpose": "Wireless radio telemetry", "path": "/wireless/radios", "available": False, "status": 404, "itemCount": 0}, + {"label": "device_ports_switch", "purpose": "Per-switch port telemetry", "path": "/devices/switch-1/ports", "available": False, "status": 404, "itemCount": 0, "sampleDevice": "USW-48 (USW-Pro-48-PoE)", "role": "switch"}, ] ), encoding="utf-8", @@ -223,6 +224,8 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "capability flag only" in html assert "API Telemetry Probe Results" in html assert "site_ports" in html + assert "device_ports_switch" in html + assert "USW-48 (USW-Pro-48-PoE) [switch]" in html assert "HTTP 404" in html assert "Configuration Backup Completeness" in html assert "Networks / VLANs" in html @@ -234,7 +237,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "Staff (VLAN 100)" in html assert "U7-Pro-1 (U7-Pro): 1" in html assert "not an authoritative DHCP lease export" in html - assert "0 / 2 available" in html + assert "0 / 3 available" in html assert "captured empty" in html assert "not exposed (HTTP 404)" in html assert "UniFi Executive Summary" in exec_html @@ -378,6 +381,8 @@ class ProbeClient: def get_json(self, path, params=None): if path.endswith("/sites/site-1/ports"): return {"data": 
[{"port": 1}, {"port": 2}]} + if path.endswith("/devices/switch-1/ports"): + return {"data": [{"port": 1}]} raise UniFiRequestError("HTTP 404", status=404) results = _collect_telemetry_probes( @@ -385,7 +390,13 @@ def get_json(self, path, params=None): "/network", "site-1", "Main", - [{"id": "device-1", "interfaces": ["ports", "radios"]}], + [ + {"id": "combo-1", "name": "IW HD", "model": "IW HD", "state": "ONLINE", "interfaces": ["ports", "radios"], "features": ["switching", "accessPoint"]}, + {"id": "ap-1", "name": "U7-Pro-1", "model": "U7-Pro", "interfaces": ["ports", "radios"], "features": ["accessPoint"]}, + {"id": "old-switch", "name": "Old Switch", "model": "USW Flex", "state": "OFFLINE", "interfaces": ["ports"], "features": ["switching"]}, + {"id": "switch-1", "name": "USW-48", "model": "USW-Pro-48-PoE", "state": "ONLINE", "interfaces": ["ports"], "features": ["switching"]}, + {"id": "gateway-1", "name": "Gateway", "model": "UCG Ultra", "interfaces": ["ports"], "features": ["switching"]}, + ], tmp_path, ) by_label = {result["label"]: result for result in results} @@ -394,4 +405,9 @@ def get_json(self, path, params=None): assert by_label["site_ports"]["itemCount"] == 2 assert (tmp_path / by_label["site_ports"]["file"]).exists() assert by_label["site_radios"]["status"] == 404 - assert by_label["device_ports"]["path"].endswith("/devices/device-1/ports") + assert by_label["device_ports_switch"]["path"].endswith("/devices/switch-1/ports") + assert by_label["device_ports_switch"]["available"] is True + assert by_label["device_ports_switch"]["sampleDevice"] == "USW-48 (USW-Pro-48-PoE)" + assert by_label["device_ports_gateway"]["path"].endswith("/devices/gateway-1/ports") + assert by_label["device_ports_ap"]["path"].endswith("/devices/ap-1/ports") + assert by_label["device_radios_ap"]["path"].endswith("/devices/ap-1/radios") diff --git a/unifi/collect.py b/unifi/collect.py index 64cfecb..d6958ec 100644 --- a/unifi/collect.py +++ b/unifi/collect.py @@ -38,8 +38,10 
@@ {"label": "wifi_radio_settings", "scope": "site", "suffix": "wifi/radio-settings", "purpose": "WiFi radio settings"}, {"label": "wifi_rf_environments", "scope": "site", "suffix": "wifi/rf-environments", "purpose": "RF environment telemetry"}, {"label": "wifi_channel_plans", "scope": "site", "suffix": "wifi/channel-plans", "purpose": "Channel plan telemetry"}, - {"label": "device_ports", "scope": "device", "interface": "ports", "suffix": "devices/{device_id}/ports", "purpose": "Per-device port telemetry"}, - {"label": "device_radios", "scope": "device", "interface": "radios", "suffix": "devices/{device_id}/radios", "purpose": "Per-device radio telemetry"}, + {"label": "device_ports_switch", "scope": "device", "role": "switch", "interface": "ports", "suffix": "devices/{device_id}/ports", "purpose": "Per-switch port telemetry"}, + {"label": "device_ports_gateway", "scope": "device", "role": "gateway", "interface": "ports", "suffix": "devices/{device_id}/ports", "purpose": "Per-gateway port telemetry"}, + {"label": "device_ports_ap", "scope": "device", "role": "access_point", "interface": "ports", "suffix": "devices/{device_id}/ports", "purpose": "Per-AP uplink/embedded port telemetry"}, + {"label": "device_radios_ap", "scope": "device", "role": "access_point", "interface": "radios", "suffix": "devices/{device_id}/radios", "purpose": "Per-AP radio telemetry"}, ) @@ -99,16 +101,69 @@ def _site_matches(site: Dict[str, Any], selector: str) -> bool: return wanted in {value.strip().lower() for value in values if value} -def _device_with_interface(devices: Iterable[Dict[str, Any]], interface: str) -> Dict[str, Any] | None: +def _device_label(device: Dict[str, Any]) -> str: + name = str(device.get("name") or device.get("displayName") or device.get("id") or device.get("_id") or "device") + model = str(device.get("model") or "").strip() + return f"{name} ({model})" if model and model not in name else name + + +def _device_text(device: Dict[str, Any]) -> str: + fields = 
[str(device.get(key) or "") for key in ("type", "role", "model", "name")] + features = device.get("features") + if isinstance(features, list): + fields.extend(str(item) for item in features if item) + elif features: + fields.append(str(features)) + return " ".join(fields).lower() + + +def _device_matches_role(device: Dict[str, Any], role: str) -> bool: + role = role.strip().lower() + if not role: + return True + text = _device_text(device) + features = {str(item).strip().lower() for item in device.get("features") or [] if item} if isinstance(device.get("features"), list) else set() + if role == "access_point": + return "accesspoint" in features or "access point" in text + if role == "gateway": + return any(token in text for token in ("gateway", "ucg", "udm", "uxg", "dream machine")) + if role == "switch": + if any(token in text for token in ("gateway", "ucg", "udm", "uxg", "dream machine")): + return False + return "switching" in features or "switch" in text or "usw" in text + return role in text + + +def _device_online_score(device: Dict[str, Any]) -> int: + state = str(device.get("state") or device.get("status") or "").strip().lower() + if not state: + return 0 + return 0 if state in {"online", "connected", "active"} else 1 + + +def _device_role_score(device: Dict[str, Any], role: str) -> int: + text = _device_text(device) + features = {str(item).strip().lower() for item in device.get("features") or [] if item} if isinstance(device.get("features"), list) else set() + if role == "switch": + return 1 if "accesspoint" in features or "access point" in text else 0 + if role == "access_point": + return 1 if "switching" in features or "switch" in text else 0 + return 0 + + +def _device_with_interface(devices: Iterable[Dict[str, Any]], interface: str, role: str = "") -> Dict[str, Any] | None: wanted = interface.strip().lower() + candidates: List[Dict[str, Any]] = [] for device in devices: interfaces = device.get("interfaces") if not isinstance(interfaces, list): continue 
available = {str(item).strip().lower() for item in interfaces if item} - if wanted in available and (device.get("id") or device.get("_id")): - return device - return None + if wanted in available and (device.get("id") or device.get("_id")) and _device_matches_role(device, role): + candidates.append(device) + if not candidates: + return None + return sorted(candidates, key=lambda device: (_device_online_score(device), _device_role_score(device, role), _device_label(device)))[0] def _payload_count(payload: Any) -> int: @@ -178,8 +233,9 @@ def _collect_telemetry_probes(client: UniFiClient, network_prefix: str, site_id: label = probe["label"] suffix = probe["suffix"] if probe.get("scope") == "device": - device = _device_with_interface(device_items, probe.get("interface", "")) + device = _device_with_interface(device_items, probe.get("interface", ""), probe.get("role", "")) if not device: + role = probe.get("role", "device").replace("_", " ") results.append( { "label": label, @@ -188,14 +244,20 @@ def _collect_telemetry_probes(client: UniFiClient, network_prefix: str, site_id: "available": False, "status": None, "itemCount": 0, - "note": f"No sampled device advertises {probe.get('interface')} interface capability.", + "role": probe.get("role", ""), + "note": f"No sampled {role} device advertises {probe.get('interface')} interface capability.", } ) continue device_id = str(device.get("id") or device.get("_id")) suffix = suffix.format(device_id=device_id) path = f"{network_prefix}/sites/{site_id}/{suffix}" - results.append(_probe_telemetry_endpoint(client, path, label=label, purpose=probe.get("purpose", ""), output=output, safe=safe)) + result = _probe_telemetry_endpoint(client, path, label=label, purpose=probe.get("purpose", ""), output=output, safe=safe) + if probe.get("scope") == "device": + result["role"] = probe.get("role", "") + result["sampleDevice"] = _device_label(device) + result["sampleDeviceId"] = str(device.get("id") or device.get("_id") or "") + 
results.append(result) return results diff --git a/unifi/report.py b/unifi/report.py index d2c053e..299ac1d 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -238,12 +238,17 @@ def _probe_status_summary(probes: Iterable[Dict[str, Any]], terms: Iterable[str] def _probe_rows(probes: Iterable[Dict[str, Any]]) -> List[List[Any]]: rows: List[List[Any]] = [] for probe in probes: + sample = _first(probe, ("sampleDevice", "note")) + role = _first(probe, ("role",)) + if sample and role: + sample = f"{sample} [{role.replace('_', ' ')}]" rows.append( [ probe.get("label", ""), _probe_status_label(probe), _yes_no(probe.get("available")), probe.get("itemCount", 0), + sample, probe.get("purpose") or probe.get("note") or "", ] ) @@ -1722,7 +1727,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_table(["Device", "Model", "Features", "Interfaces", "Detail"], _interface_device_rows(all_devices), "No device interface coverage captured.")) if telemetry_probes: sections.append("

<h3>API Telemetry Probe Results</h3>

    ") - sections.append(_table(["Probe", "Status", "Available", "Items", "Purpose"], _probe_rows(telemetry_probes), "No telemetry probes captured.")) + sections.append(_table(["Probe", "Status", "Available", "Items", "Sample / Note", "Purpose"], _probe_rows(telemetry_probes), "No telemetry probes captured.")) device_rows = [] for dev in all_devices[:300]: device_rows.append([ From 4cf91e6bfd5b788de4db2f30be4bb5ead225bc44 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 12:44:12 -0400 Subject: [PATCH 42/47] Clarify UniFi hardware planning scope --- ROADMAP.md | 2 + tests/test_unifi_report.py | 9 ++++- unifi/report.py | 79 ++++++++++++++++++++++++++++++++++---- 3 files changed, 81 insertions(+), 9 deletions(-) diff --git a/ROADMAP.md b/ROADMAP.md index 6e13784..5f2a264 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -124,6 +124,8 @@ This project is currently functional as a Python reporting pipeline. The immedia captured-empty, and unsupported endpoint coverage.~~ - ~~Split UniFi per-device telemetry probes by sampled AP, switch, and gateway roles so future exposed endpoints can be attributed to the right hardware.~~ +- ~~Clarify UniFi hardware planning so retained active gear is not counted as + unpriced refresh scope, and summarize refresh/retain/excluded actions.~~ - Add deeper UniFi switch/AP port and radio telemetry when the controller API exposes it. 
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 226ac36..d164991 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -50,7 +50,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): "telemetry_probe": "sites/Main/telemetry_probe.json", }, "counts": { - "devices": 3, + "devices": 4, "clients": 1, "networks": 1, "wifi": 1, @@ -76,6 +76,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): {"id": "ap-1", "name": "U7-Pro-1", "model": "U7-Pro", "type": "access point", "state": "ONLINE", "ipAddress": "10.1.1.10", "interfaces": ["ports", "radios"], "features": ["accessPoint"]}, {"name": "IW HD", "model": "IW HD", "features": ["switching", "accessPoint"], "interfaces": ["ports", "radios"], "state": "ONLINE", "ipAddress": "10.1.1.11"}, {"name": "USW-48", "model": "USW-Pro-48-PoE", "type": "switch", "state": "ONLINE", "ipAddress": "10.1.1.20", "interfaces": ["ports"], "features": ["switching"]}, + {"name": "USW Flex 2.5G 5", "model": "USW Flex 2.5G 5", "type": "switch", "state": "ONLINE", "ipAddress": "10.1.1.21", "interfaces": ["ports"], "features": ["switching"]}, ] ), encoding="utf-8", @@ -194,6 +195,12 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "Recommendations & Implementation Plan" in html assert "Choose a deeper diagnostics source" in html assert "Hardware Refresh & Budget Planning" in html + assert "Refresh Action Summary" in html + assert "Unpriced refresh candidates" in html + assert "Retain / monitor" in html + assert "Not in refresh scope" in html + assert "Not quoted" in html + assert "Small UniFi switch or edge form factor" in html assert "Model-Level Refresh Planning" in html assert "U7 Pro" in html assert "Pro 48 PoE" in html diff --git a/unifi/report.py b/unifi/report.py index 299ac1d..e5373a4 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -1281,6 +1281,8 @@ def _refresh_product_key(device: Dict[str, 
Any], legacy_aps: List[str]) -> tuple return "USW-Pro-48-POE", "Refresh candidate", "48-port access switch reference; validate PoE budget and uplinks." if "24" in model_l: return "USW-Pro-24-POE", "Refresh candidate", "24-port access switch reference; validate PoE budget and uplinks." + if any(token in model_l for token in ("flex", "lite", "mini", "usw-8", "usw 8", "enterprise 8")): + return "", "Retain / monitor", "Small UniFi switch or edge form factor; keep out of the replacement subtotal unless capacity, PoE, or uplink requirements change." return "", "Pricing needed", "Small switch or special form factor; add exact replacement SKU to pricing reference before quoting." if role == "Gateway": if any(token in model_l for token in ("ucg", "udm", "cloud gateway")): @@ -1289,6 +1291,30 @@ def _refresh_product_key(device: Dict[str, Any], legacy_aps: List[str]) -> tuple return "", "Review manually", "Device role did not map to a maintained replacement class." +def _action_requires_pricing(action: str) -> bool: + return action in {"Refresh candidate", "Pricing needed", "Review manually"} + + +def _planning_product_label(product: Dict[str, Any], product_key: str, action: str) -> str: + if product_key: + return _product_name(product, product_key) + if action == "Retain / monitor": + return "Not in refresh scope" + if action == "Excluded pending validation": + return "Excluded pending validation" + return "Pricing needed" + + +def _planning_money(value: int | float | None, action: str) -> str: + if isinstance(value, (int, float)): + return _money(value) + if action == "Retain / monitor": + return "Not quoted" + if action == "Excluded pending validation": + return "Excluded" + return _money(value) + + def _hardware_refresh_rows( devices: List[Dict[str, Any]], legacy_aps: List[str], @@ -1305,7 +1331,7 @@ def _hardware_refresh_rows( note = "Offline/inactive in controller; validate physical inventory before quoting replacement." 
product_key = "" product = _pricing_product(pricing, product_key) - product_label = _product_name(product, product_key) if product_key else "Pricing needed" + product_label = _planning_product_label(product, product_key, action) key = (model, role, product_key, action, note) row = grouped.setdefault( key, @@ -1329,22 +1355,31 @@ def _hardware_refresh_rows( row["excluded"] += 1 rows: List[List[Any]] = [] - totals = {"hardware": 0.0, "care": 0.0, "priced_active": 0, "unpriced_active": 0, "excluded": 0} + totals: Dict[str, Any] = {"hardware": 0.0, "care": 0.0, "priced_active": 0, "unpriced_active": 0, "excluded": 0, "actions": {}} for row in sorted(grouped.values(), key=lambda item: (str(item["role"]), str(item["model"]))): active = int(row["active"]) excluded = int(row["excluded"]) unit = row["unit"] care = row["care"] + action = str(row["action"]) hardware_total = unit * active if isinstance(unit, (int, float)) and active else None care_total = care * active if isinstance(care, (int, float)) and active else None if hardware_total is not None: totals["hardware"] += hardware_total totals["priced_active"] += active - elif active: + elif active and _action_requires_pricing(action): totals["unpriced_active"] += active if care_total is not None: totals["care"] += care_total totals["excluded"] += excluded + action_totals = totals["actions"].setdefault(action, {"inventory": 0, "active": 0, "excluded": 0, "hardware": 0.0, "care": 0.0}) + action_totals["inventory"] += int(row["inventory"]) + action_totals["active"] += active + action_totals["excluded"] += excluded + if hardware_total is not None: + action_totals["hardware"] += hardware_total + if care_total is not None: + action_totals["care"] += care_total rows.append( [ row["model"], @@ -1353,23 +1388,49 @@ def _hardware_refresh_rows( active, excluded, row["product"], - row["action"], - _money(unit), - _money(care), - _money(hardware_total), + action, + _planning_money(unit, action), + _planning_money(care, action), + 
_planning_money(hardware_total, action), row["note"], ] ) return rows, totals +def _hardware_action_summary_rows(totals: Dict[str, Any]) -> List[List[Any]]: + actions = totals.get("actions") if isinstance(totals.get("actions"), dict) else {} + interpretations = { + "Refresh candidate": "Included in the planning subtotal when a maintained reference product is mapped.", + "Retain / monitor": "Active equipment that is not currently being replaced in this planning subtotal.", + "Pricing needed": "Active equipment that needs a specific SKU or design decision before budget use.", + "Excluded pending validation": "Offline/inactive equipment excluded until physical inventory is confirmed.", + "Review manually": "Inventory that did not map cleanly to a known role or replacement class.", + } + order = {"Refresh candidate": 0, "Pricing needed": 1, "Review manually": 2, "Retain / monitor": 3, "Excluded pending validation": 4} + rows: List[List[Any]] = [] + for action, values in sorted(actions.items(), key=lambda item: (order.get(str(item[0]), 99), str(item[0]))): + if not isinstance(values, dict): + continue + rows.append( + [ + action, + int(values.get("active") or 0), + int(values.get("excluded") or 0), + _money(values.get("hardware")), + interpretations.get(str(action), "Planning classification from captured inventory."), + ] + ) + return rows + + def _hardware_summary_rows(pricing: Dict[str, Any], totals: Dict[str, Any]) -> List[List[Any]]: meta = pricing.get("meta") if isinstance(pricing.get("meta"), dict) else {} notes = meta.get("notes") if isinstance(meta.get("notes"), list) else [] return [ ["Reference catalog", str(meta.get("name") or "pricing_reference.json"), f"Updated {meta.get('updated') or 'unknown'}; currency {meta.get('currency') or 'USD'}."], ["Priced active devices", str(int(totals.get("priced_active") or 0)), "Only online devices with maintained product mappings are included in the subtotal."], - ["Unpriced active devices", 
str(int(totals.get("unpriced_active") or 0)), "These need exact SKU mapping before client-facing budget use."], + ["Unpriced refresh candidates", str(int(totals.get("unpriced_active") or 0)), "Active devices that require a replacement SKU or manual mapping before client-facing budget use."], ["Excluded devices", str(int(totals.get("excluded") or 0)), "Offline/inactive devices are excluded until field-validated."], ["Hardware subtotal", _money(totals.get("hardware")), "Public-reference hardware subtotal for priced active mapped devices."], ["Optional UI Care 5-year", _money(totals.get("care")), "Shown separately from hardware so support decisions stay explicit."], @@ -1838,6 +1899,8 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append("

<p>This section uses the maintained pricing reference to create a planning-only hardware refresh view. It prices only online devices that map cleanly to a maintained product reference; offline devices and special form factors stay excluded or marked pricing-needed until field validation.</p>")
     sections.append("<h3>Planning Summary</h3>")
     sections.append(_table(["Area", "Value", "Interpretation"], _hardware_summary_rows(pricing, hardware_totals)))
+    sections.append("<h3>Refresh Action Summary</h3>")
+    sections.append(_table(["Action", "Active Devices", "Excluded", "Hardware Subtotal", "Interpretation"], _hardware_action_summary_rows(hardware_totals), "No hardware planning actions generated."))
     sections.append("<h3>Model-Level Refresh Planning</h3>

    ") sections.append(_table(["Current Model", "Role", "Inventory", "Active", "Excluded", "Reference Product", "Action", "Unit", "UI Care / Unit", "Hardware Total", "Notes"], hardware_rows, "No device inventory was available for hardware planning.")) catalog_rows = _catalog_reference_rows(pricing) From bf5e0f5c37d8b6870240bf4ce5ecc74b74edfba6 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 14:07:46 -0400 Subject: [PATCH 43/47] Surface UniFi client concentration risks --- ROADMAP.md | 2 ++ tests/test_unifi_report.py | 14 +++++++++++--- unifi/report.py | 36 ++++++++++++++++++++++++++++++++++++ 3 files changed, 49 insertions(+), 3 deletions(-) diff --git a/ROADMAP.md b/ROADMAP.md index 5f2a264..02ae9ff 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -126,6 +126,8 @@ This project is currently functional as a Python reporting pipeline. The immedia roles so future exposed endpoints can be attributed to the right hardware.~~ - ~~Clarify UniFi hardware planning so retained active gear is not counted as unpriced refresh scope, and summarize refresh/retain/excluded actions.~~ +- ~~Promote high client concentration on one AP/switch into UniFi executive + risks, priorities, and implementation planning.~~ - Add deeper UniFi switch/AP port and radio telemetry when the controller API exposes it. 
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index d164991..c1a8e75 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -51,7 +51,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): }, "counts": { "devices": 4, - "clients": 1, + "clients": 10, "networks": 1, "wifi": 1, "firewall_zones": 1, @@ -82,7 +82,12 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): encoding="utf-8", ) (site_dir / "clients.json").write_text( - json.dumps([{"hostname": "client-1", "type": "WIRELESS", "ipAddress": "10.100.0.50", "uplinkDeviceId": "ap-1", "access": {"type": "DEFAULT"}}]), + json.dumps( + [ + {"hostname": f"client-{i}", "type": "WIRELESS", "ipAddress": f"10.100.0.{50 + i}", "uplinkDeviceId": "ap-1", "access": {"type": "DEFAULT"}} + for i in range(10) + ] + ), encoding="utf-8", ) (site_dir / "networks.json").write_text( @@ -184,6 +189,8 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "Current State Assessment" in html assert "Top Operational Risks" in html assert "Recommended Priorities" in html + assert "Client concentration requires validation" in html + assert "Validate client concentration" in html assert "Data Confidence Snapshot" in html assert "Health at a Glance" in html assert "How to Use This Report" in html @@ -193,6 +200,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "Client Overview Summary" in html assert "Client Concentration by Uplink" in html assert "Recommendations & Implementation Plan" in html + assert "Validate concentrated client load" in html assert "Choose a deeper diagnostics source" in html assert "Hardware Refresh & Budget Planning" in html assert "Refresh Action Summary" in html @@ -242,7 +250,7 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "10.100.0.10 - 10.100.0.250" in html assert "10.100.0.0/24" in html assert "Staff 
(VLAN 100)" in html - assert "U7-Pro-1 (U7-Pro): 1" in html + assert "U7-Pro-1 (U7-Pro): 10" in html assert "not an authoritative DHCP lease export" in html assert "0 / 3 available" in html assert "captured empty" in html diff --git a/unifi/report.py b/unifi/report.py index e5373a4..884e8ca 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -1057,10 +1057,24 @@ def _client_age_buckets(clients: Iterable[Dict[str, Any]], now: datetime) -> Dic return buckets +def _client_concentration_findings(all_clients: List[Dict[str, Any]], device_names: Dict[str, str]) -> List[Dict[str, Any]]: + total = len(all_clients) + if total < 10: + return [] + counts = _count_by(all_clients, lambda client: _client_uplink_label(client, device_names) or "Unknown") + findings: List[Dict[str, Any]] = [] + for uplink, count in counts.items(): + share = round((count / total) * 100) if total else 0 + if count >= 5 and share >= 50: + findings.append({"uplink": uplink, "count": count, "total": total, "share": share}) + return sorted(findings, key=lambda item: (-int(item["share"]), -int(item["count"]), str(item["uplink"]))) + + def _top_risks( *, all_devices: List[Dict[str, Any]], all_clients: List[Dict[str, Any]], + device_names: Dict[str, str], all_wifi: List[Dict[str, Any]], all_firewall_policies: List[Dict[str, Any]], all_dns_policies: List[Dict[str, Any]], @@ -1076,6 +1090,11 @@ def _top_risks( if telemetry_probes and not any(probe.get("available") for probe in telemetry_probes): risks.append("Port and radio diagnostics are low-confidence - this controller/API path did not expose switch-port or AP-radio telemetry, so PoE draw, RF interference, channel utilization, and port speed cannot be validated from this backup alone.") + client_concentration = _client_concentration_findings(all_clients, device_names) + if client_concentration: + finding = client_concentration[0] + risks.append(f"Client concentration requires validation - {finding['uplink']} has {finding['count']} of {finding['total']} 
captured clients ({finding['share']}%), which may indicate capacity pressure or a single-device dependency.") + risks.extend(_wifi_security_weak(all_wifi)[:3]) if all_firewall_policies: @@ -1102,6 +1121,8 @@ def _top_risks( def _recommended_priorities( *, all_devices: List[Dict[str, Any]], + all_clients: List[Dict[str, Any]], + device_names: Dict[str, str], all_wifi: List[Dict[str, Any]], telemetry_probes: List[Dict[str, Any]], all_firewall_policies: List[Dict[str, Any]], @@ -1112,6 +1133,10 @@ def _recommended_priorities( priorities.append("Immediate (0-2 weeks): Validate offline UniFi devices against physical inventory, power, uplinks, and controller adoption state.") if telemetry_probes and not any(probe.get("available") for probe in telemetry_probes): priorities.append("Immediate (0-2 weeks): Decide whether deeper diagnostics require Site Manager metrics, UniFi system log/SIEM export, SSH/local controller export, or manual screenshots because the Integration API did not expose port/radio telemetry.") + client_concentration = _client_concentration_findings(all_clients, device_names) + if client_concentration: + finding = client_concentration[0] + priorities.append(f"Short-term (2-6 weeks): Validate client concentration on {finding['uplink']} ({finding['count']} of {finding['total']} clients) with controller UI metrics, physical placement, and uplink capacity before refresh planning.") if _wifi_security_weak(all_wifi): priorities.append("Short-term (2-6 weeks): Review SSID security and migrate appropriate production WLANs toward WPA3, private PSK, or 802.1X instead of shared WPA2 Personal.") if all_firewall_policies and any(not _as_bool(policy.get("loggingEnabled")) for policy in all_firewall_policies): @@ -1206,6 +1231,8 @@ def _client_uplink_analysis_rows(all_clients: List[Dict[str, Any]], device_names def _implementation_plan_rows( *, all_devices: List[Dict[str, Any]], + all_clients: List[Dict[str, Any]], + device_names: Dict[str, str], all_wifi: 
List[Dict[str, Any]], all_firewall_policies: List[Dict[str, Any]], all_dns_policies: List[Dict[str, Any]], @@ -1217,11 +1244,15 @@ def _implementation_plan_rows( offline = [_device_name(device) for device in all_devices if not _is_online(device)] weak_wifi = _wifi_security_weak(all_wifi) logging_disabled = sum(1 for policy in all_firewall_policies if not _as_bool(policy.get("loggingEnabled"))) + client_concentration = _client_concentration_findings(all_clients, device_names) if offline: rows.append(["Immediate", "0-2 weeks", "Validate offline inventory", f"{', '.join(offline[:6])}", "IT operations"]) if telemetry_probes and not any(probe.get("available") for probe in telemetry_probes): rows.append(["Immediate", "0-2 weeks", "Choose a deeper diagnostics source", "Integration API did not expose port/radio telemetry", "Network engineering"]) + if client_concentration: + finding = client_concentration[0] + rows.append(["Short-term", "2-6 weeks", "Validate concentrated client load", f"{finding['uplink']} has {finding['count']} of {finding['total']} captured clients ({finding['share']}%)", "Network engineering"]) if weak_wifi: rows.append(["Short-term", "2-6 weeks", "Review SSID security posture", "; ".join(weak_wifi[:2]), "Security / network engineering"]) if logging_disabled: @@ -1605,6 +1636,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: top_risks = _top_risks( all_devices=all_devices, all_clients=all_clients, + device_names=device_names, all_wifi=site_payloads["wifi"], all_firewall_policies=site_payloads["firewall_policies"], all_dns_policies=site_payloads["dns_policies"], @@ -1618,6 +1650,8 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: _html_list( _recommended_priorities( all_devices=all_devices, + all_clients=all_clients, + device_names=device_names, all_wifi=site_payloads["wifi"], telemetry_probes=telemetry_probes, all_firewall_policies=site_payloads["firewall_policies"], @@ -1884,6 +1918,8 @@ def 
build_report(source_dir: str, output_dir: str) -> Dict[str, str]: ["Priority", "Window", "Action", "Evidence", "Owner"], _implementation_plan_rows( all_devices=all_devices, + all_clients=all_clients, + device_names=device_names, all_wifi=site_payloads["wifi"], all_firewall_policies=site_payloads["firewall_policies"], all_dns_policies=site_payloads["dns_policies"], From 73f5ce09b629af2d4e44bd7a969e8e93a68aef5c Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 14:16:03 -0400 Subject: [PATCH 44/47] Surface UniFi default access policy risk --- ROADMAP.md | 2 ++ tests/test_unifi_report.py | 4 ++++ unifi/report.py | 33 +++++++++++++++++++++++++++++++++ 3 files changed, 39 insertions(+) diff --git a/ROADMAP.md b/ROADMAP.md index 02ae9ff..e7b662b 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -128,6 +128,8 @@ This project is currently functional as a Python reporting pipeline. The immedia unpriced refresh scope, and summarize refresh/retain/excluded actions.~~ - ~~Promote high client concentration on one AP/switch into UniFi executive risks, priorities, and implementation planning.~~ +- ~~Promote flat DEFAULT client access policy usage into UniFi executive, + security baseline, and implementation planning sections.~~ - Add deeper UniFi switch/AP port and radio telemetry when the controller API exposes it. 
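`_default_client_access_finding` applies a similar guard. A hedged standalone sketch, assuming the access type lives at `client["access"]["type"]` as in the test fixture (the real `_access_label` helper may read other fields): a finding fires only when at least 10 captured clients exist, at least 10 use DEFAULT access, and those make up at least 80% of the total:

```python
def default_client_access_finding(clients, min_total=10, min_default=10, min_share=80):
    # Assumption: access type is stored at client["access"]["type"], as in the
    # fixture; the real _access_label helper may consult other fields too.
    total = len(clients)
    if total < min_total:
        return None
    default_count = sum(
        1
        for client in clients
        if str((client.get("access") or {}).get("type", "")).strip().upper() == "DEFAULT"
    )
    share = round((default_count / total) * 100)
    if default_count >= min_default and share >= min_share:
        return {"default": default_count, "total": total, "share": share}
    return None

# All ten captured clients on the flat DEFAULT policy -> finding fires.
flat = [{"access": {"type": "DEFAULT"}}] * 10
assert default_client_access_finding(flat) == {"default": 10, "total": 10, "share": 100}
# A mixed policy set stays quiet.
mixed = [{"access": {"type": "DEFAULT"}}] * 7 + [{"access": {"type": "GUEST"}}] * 3
assert default_client_access_finding(mixed) is None
```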
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index c1a8e75..7434760 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -191,16 +191,20 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "Recommended Priorities" in html assert "Client concentration requires validation" in html assert "Validate client concentration" in html + assert "Client access policy appears flat" in html + assert "Review UniFi client access policy design" in html assert "Data Confidence Snapshot" in html assert "Health at a Glance" in html assert "How to Use This Report" in html assert "Security Baseline" in html + assert "Client access policy" in html assert "Port and radio diagnostics are low-confidence" in html assert "Client Analysis" in html assert "Client Overview Summary" in html assert "Client Concentration by Uplink" in html assert "Recommendations & Implementation Plan" in html assert "Validate concentrated client load" in html + assert "Review client access policy segmentation" in html assert "Choose a deeper diagnostics source" in html assert "Hardware Refresh & Budget Planning" in html assert "Refresh Action Summary" in html diff --git a/unifi/report.py b/unifi/report.py index 884e8ca..614bb59 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -1070,6 +1070,17 @@ def _client_concentration_findings(all_clients: List[Dict[str, Any]], device_nam return sorted(findings, key=lambda item: (-int(item["share"]), -int(item["count"]), str(item["uplink"]))) +def _default_client_access_finding(all_clients: List[Dict[str, Any]]) -> Dict[str, Any] | None: + total = len(all_clients) + if total < 10: + return None + default_count = sum(1 for client in all_clients if _access_label(client).strip().upper() == "DEFAULT") + share = round((default_count / total) * 100) if total else 0 + if default_count >= 10 and share >= 80: + return {"default": default_count, "total": total, "share": share} + return None + + def 
_top_risks( *, all_devices: List[Dict[str, Any]], @@ -1095,6 +1106,10 @@ def _top_risks( finding = client_concentration[0] risks.append(f"Client concentration requires validation - {finding['uplink']} has {finding['count']} of {finding['total']} captured clients ({finding['share']}%), which may indicate capacity pressure or a single-device dependency.") + default_access = _default_client_access_finding(all_clients) + if default_access: + risks.append(f"Client access policy appears flat - {default_access['default']} of {default_access['total']} captured clients ({default_access['share']}%) use DEFAULT access; validate guest, IoT, staff, and trusted-device separation.") + risks.extend(_wifi_security_weak(all_wifi)[:3]) if all_firewall_policies: @@ -1137,6 +1152,9 @@ def _recommended_priorities( if client_concentration: finding = client_concentration[0] priorities.append(f"Short-term (2-6 weeks): Validate client concentration on {finding['uplink']} ({finding['count']} of {finding['total']} clients) with controller UI metrics, physical placement, and uplink capacity before refresh planning.") + default_access = _default_client_access_finding(all_clients) + if default_access: + priorities.append(f"Short-term (2-6 weeks): Review UniFi client access policy design because {default_access['default']} of {default_access['total']} captured clients use DEFAULT access.") if _wifi_security_weak(all_wifi): priorities.append("Short-term (2-6 weeks): Review SSID security and migrate appropriate production WLANs toward WPA3, private PSK, or 802.1X instead of shared WPA2 Personal.") if all_firewall_policies and any(not _as_bool(policy.get("loggingEnabled")) for policy in all_firewall_policies): @@ -1169,6 +1187,7 @@ def _data_confidence_rows( def _security_baseline_rows( *, + all_clients: List[Dict[str, Any]], all_wifi: List[Dict[str, Any]], all_firewall_policies: List[Dict[str, Any]], all_dns_policies: List[Dict[str, Any]], @@ -1178,6 +1197,7 @@ def _security_baseline_rows( 
weak_wifi = _wifi_security_weak(all_wifi) logging_enabled = sum(1 for policy in all_firewall_policies if _as_bool(policy.get("loggingEnabled"))) broad_allow = _broad_allow_policy_summary(all_firewall_policies) + default_access = _default_client_access_finding(all_clients) if broad_allow.get("user", 0): broad_allow_status = "Review" elif broad_allow.get("total", 0): @@ -1186,6 +1206,15 @@ def _security_baseline_rows( broad_allow_status = "Not detected" return [ ["Network segmentation", "Review" if network_count <= 2 else "Present", f"{_plural(network_count, 'network/VLAN definition')} captured."], + [ + "Client access policy", + "Review" if default_access else ("Present" if all_clients else "Not captured"), + ( + f"{default_access['default']} of {default_access['total']} captured clients ({default_access['share']}%) use DEFAULT access; validate guest, IoT, staff, and trusted-device separation." + if default_access + else (f"Access policy mix: {_fmt_counts(_count_by(all_clients, _access_label))}." 
if all_clients else "No client access records captured.") + ), + ], ["Wireless authentication", "Review" if weak_wifi else ("Present" if all_wifi else "Missing"), "; ".join(weak_wifi[:2]) if weak_wifi else f"{_plural(len(all_wifi), 'SSID')} captured."], ["Firewall rules", "Present" if all_firewall_policies else "Missing", f"{_plural(len(all_firewall_policies), 'policy', 'policies')} captured."], ["Firewall logging", "Review" if all_firewall_policies and logging_enabled < len(all_firewall_policies) else "Present", f"{logging_enabled} of {len(all_firewall_policies)} policies have logging enabled."], @@ -1245,6 +1274,7 @@ def _implementation_plan_rows( weak_wifi = _wifi_security_weak(all_wifi) logging_disabled = sum(1 for policy in all_firewall_policies if not _as_bool(policy.get("loggingEnabled"))) client_concentration = _client_concentration_findings(all_clients, device_names) + default_access = _default_client_access_finding(all_clients) if offline: rows.append(["Immediate", "0-2 weeks", "Validate offline inventory", f"{', '.join(offline[:6])}", "IT operations"]) @@ -1253,6 +1283,8 @@ def _implementation_plan_rows( if client_concentration: finding = client_concentration[0] rows.append(["Short-term", "2-6 weeks", "Validate concentrated client load", f"{finding['uplink']} has {finding['count']} of {finding['total']} captured clients ({finding['share']}%)", "Network engineering"]) + if default_access: + rows.append(["Short-term", "2-6 weeks", "Review client access policy segmentation", f"{default_access['default']} of {default_access['total']} captured clients use DEFAULT access", "Security / network engineering"]) if weak_wifi: rows.append(["Short-term", "2-6 weeks", "Review SSID security posture", "; ".join(weak_wifi[:2]), "Security / network engineering"]) if logging_disabled: @@ -1900,6 +1932,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: _table( ["Control Area", "Status", "Evidence / Interpretation"], _security_baseline_rows( + 
all_clients=all_clients, all_wifi=site_payloads["wifi"], all_firewall_policies=site_payloads["firewall_policies"], all_dns_policies=site_payloads["dns_policies"], From 56a86650dc5949718aa649a4bddadbe20521ba91 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 14:20:48 -0400 Subject: [PATCH 45/47] Surface UniFi network address detail gaps --- ROADMAP.md | 2 ++ tests/test_unifi_report.py | 15 +++++++- unifi/report.py | 74 +++++++++++++++++++++++++++++++++++++- 3 files changed, 89 insertions(+), 2 deletions(-) diff --git a/ROADMAP.md b/ROADMAP.md index e7b662b..cdcf66f 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -130,6 +130,8 @@ This project is currently functional as a Python reporting pipeline. The immedia risks, priorities, and implementation planning.~~ - ~~Promote flat DEFAULT client access policy usage into UniFi executive, security baseline, and implementation planning sections.~~ +- ~~Promote missing UniFi subnet/gateway/DHCP fields into executive, + confidence, security baseline, and implementation planning sections.~~ - Add deeper UniFi switch/AP port and radio telemetry when the controller API exposes it. 
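The three-way outcome of `_network_detail_finding` ("Low detail" / "Partial" / no finding) can be sketched standalone. The flat field names here (`subnet`, `gateway`, `dhcpMode`, `dhcpRange`) are assumptions standing in for the `_network_subnet`-style accessors, which read several key aliases in the real module:

```python
def network_detail_finding(networks):
    # Assumed key names; the real _network_* accessors try multiple aliases.
    total = len(networks)
    if not total:
        return None
    counts = {
        "subnet": sum(1 for n in networks if n.get("subnet")),
        "gateway": sum(1 for n in networks if n.get("gateway")),
        "dhcp_mode": sum(1 for n in networks if n.get("dhcpMode")),
        "dhcp_range": sum(1 for n in networks if n.get("dhcpRange")),
    }
    if not any(counts.values()):
        return {"status": "Low detail", "counts": counts}  # nothing exposed at all
    if counts["subnet"] < total or counts["gateway"] < total or (
        counts["dhcp_mode"] == 0 and counts["dhcp_range"] == 0
    ):
        return {"status": "Partial", "counts": counts}  # some address-plan gaps
    return None  # full coverage: no finding

# VLAN-only records, like the unit-test fixture, rank as "Low detail".
vlan_only = [{"name": "Default", "vlanId": 1}, {"name": "Guest", "vlanId": 100}]
assert network_detail_finding(vlan_only)["status"] == "Low detail"
```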
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 7434760..0b82bf4 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -3,7 +3,7 @@ from unifi.client import UniFiRequestError from unifi.collect import _call_list, _collect_telemetry_probes -from unifi.report import build_report +from unifi.report import build_report, _network_detail_finding from unifi.profiles import discover_site_profiles @@ -290,6 +290,19 @@ def test_unifi_profiles_discovers_numbered_site_profiles(monkeypatch): assert profiles[1].env_updates()["UNIFI_NETWORK_BASE_URL"] == "https://10.0.0.1" +def test_unifi_report_network_detail_finding_flags_missing_address_fields(): + finding = _network_detail_finding( + [ + {"name": "Default", "vlanId": 1}, + {"name": "Guest", "vlanId": 100}, + ] + ) + + assert finding is not None + assert finding["status"] == "Low detail" + assert "none expose subnet, gateway, DHCP mode, or DHCP range" in finding["summary"] + + def test_unifi_report_surfaces_remote_connector_auth_guidance(tmp_path: Path): source = tmp_path / "backup" source.mkdir() diff --git a/unifi/report.py b/unifi/report.py index 614bb59..6d88da2 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -805,6 +805,48 @@ def _network_rows(networks: List[Dict[str, Any]], zone_names: Dict[str, str]) -> return rows +def _network_detail_counts(networks: List[Dict[str, Any]]) -> Dict[str, int]: + return { + "total": len(networks), + "subnet": sum(1 for network in networks if _network_subnet(network)), + "gateway": sum(1 for network in networks if _network_gateway(network)), + "dhcp_mode": sum(1 for network in networks if _network_dhcp_mode(network)), + "dhcp_range": sum(1 for network in networks if _network_dhcp_range(network)), + } + + +def _network_detail_finding(networks: List[Dict[str, Any]]) -> Dict[str, Any] | None: + counts = _network_detail_counts(networks) + total = counts["total"] + if not total: + return None + if not any(counts[key] for key in ("subnet", 
"gateway", "dhcp_mode", "dhcp_range")): + return { + "status": "Low detail", + "summary": f"{_plural(total, 'network/VLAN definition')} captured, but none expose subnet, gateway, DHCP mode, or DHCP range fields from this API path.", + "counts": counts, + } + if counts["subnet"] < total or counts["gateway"] < total or (counts["dhcp_mode"] == 0 and counts["dhcp_range"] == 0): + return { + "status": "Partial", + "summary": f"{_plural(total, 'network/VLAN definition')} captured; detail coverage is subnet {counts['subnet']}/{total}, gateway {counts['gateway']}/{total}, DHCP mode {counts['dhcp_mode']}/{total}, DHCP range {counts['dhcp_range']}/{total}.", + "counts": counts, + } + return None + + +def _network_detail_rows(networks: List[Dict[str, Any]]) -> List[List[Any]]: + counts = _network_detail_counts(networks) + total = counts["total"] + return [ + ["Networks / VLANs captured", total, "Configured network objects returned by the Network Integration API."], + ["Subnet fields", f"{counts['subnet']} / {total}", "Needed to match configured VLANs to observed client address space."], + ["Gateway fields", f"{counts['gateway']} / {total}", "Needed for router/SVI and disaster-recovery documentation."], + ["DHCP mode fields", f"{counts['dhcp_mode']} / {total}", "Needed to identify server, relay, or externally managed DHCP behavior."], + ["DHCP range fields", f"{counts['dhcp_range']} / {total}", "Needed for authoritative lease-scope and migration planning."], + ] + + def _client_ip(client: Dict[str, Any]) -> ipaddress.IPv4Address | ipaddress.IPv6Address | None: raw = _first(client, ("ipAddress", "ip")) if not raw: @@ -1086,6 +1128,7 @@ def _top_risks( all_devices: List[Dict[str, Any]], all_clients: List[Dict[str, Any]], device_names: Dict[str, str], + all_networks: List[Dict[str, Any]], all_wifi: List[Dict[str, Any]], all_firewall_policies: List[Dict[str, Any]], all_dns_policies: List[Dict[str, Any]], @@ -1110,6 +1153,10 @@ def _top_risks( if default_access: 
risks.append(f"Client access policy appears flat - {default_access['default']} of {default_access['total']} captured clients ({default_access['share']}%) use DEFAULT access; validate guest, IoT, staff, and trusted-device separation.") + network_detail = _network_detail_finding(all_networks) + if network_detail: + risks.append(f"Network/DHCP backup detail is incomplete - {network_detail['summary']} Capture controller UI export or screenshots before using this backup as the authoritative address plan.") + risks.extend(_wifi_security_weak(all_wifi)[:3]) if all_firewall_policies: @@ -1138,6 +1185,7 @@ def _recommended_priorities( all_devices: List[Dict[str, Any]], all_clients: List[Dict[str, Any]], device_names: Dict[str, str], + all_networks: List[Dict[str, Any]], all_wifi: List[Dict[str, Any]], telemetry_probes: List[Dict[str, Any]], all_firewall_policies: List[Dict[str, Any]], @@ -1155,6 +1203,9 @@ def _recommended_priorities( default_access = _default_client_access_finding(all_clients) if default_access: priorities.append(f"Short-term (2-6 weeks): Review UniFi client access policy design because {default_access['default']} of {default_access['total']} captured clients use DEFAULT access.") + network_detail = _network_detail_finding(all_networks) + if network_detail: + priorities.append("Short-term (2-6 weeks): Complete VLAN, subnet, gateway, and DHCP-scope documentation from the UniFi controller UI/export because the API backup did not expose full address-plan fields.") if _wifi_security_weak(all_wifi): priorities.append("Short-term (2-6 weeks): Review SSID security and migrate appropriate production WLANs toward WPA3, private PSK, or 802.1X instead of shared WPA2 Personal.") if all_firewall_policies and any(not _as_bool(policy.get("loggingEnabled")) for policy in all_firewall_policies): @@ -1169,16 +1220,18 @@ def _data_confidence_rows( *, all_devices: List[Dict[str, Any]], all_clients: List[Dict[str, Any]], + all_networks: List[Dict[str, Any]], network_count: 
int, firewall_policy_count: int, telemetry_probes: List[Dict[str, Any]], all_wans: List[Dict[str, Any]], ) -> List[List[Any]]: telemetry_available = sum(1 for probe in telemetry_probes if probe.get("available")) + network_detail = _network_detail_finding(all_networks) return [ ["Inventory and device status", "High" if all_devices else "Low", f"{_plural(len(all_devices), 'device record')} captured with controller state."], ["Client attachment detail", "High" if all_clients else "Low", f"{_plural(len(all_clients), 'client record')} captured with uplink mapping where present."], - ["VLAN/network definitions", "Medium" if network_count else "Low", f"{_plural(network_count, 'network/VLAN definition')} captured; subnet/DHCP detail depends on API fields exposed by this controller."], + ["VLAN/network definitions", "Low" if network_detail else ("Medium" if network_count else "Low"), network_detail["summary"] if network_detail else f"{_plural(network_count, 'network/VLAN definition')} captured with subnet/DHCP fields where exposed."], ["Firewall policy backup", "High" if firewall_policy_count else "Low", f"{_plural(firewall_policy_count, 'policy', 'policies')} captured."], ["WAN detail", "Low" if all_wans else "Not captured", f"{_plural(len(all_wans), 'WAN record')} captured; current endpoint only exposed labels in this run."], ["Port and radio telemetry", "Low" if telemetry_available == 0 else "Medium", _telemetry_gap_summary(telemetry_probes)], @@ -1188,6 +1241,7 @@ def _data_confidence_rows( def _security_baseline_rows( *, all_clients: List[Dict[str, Any]], + all_networks: List[Dict[str, Any]], all_wifi: List[Dict[str, Any]], all_firewall_policies: List[Dict[str, Any]], all_dns_policies: List[Dict[str, Any]], @@ -1198,6 +1252,7 @@ def _security_baseline_rows( logging_enabled = sum(1 for policy in all_firewall_policies if _as_bool(policy.get("loggingEnabled"))) broad_allow = _broad_allow_policy_summary(all_firewall_policies) default_access = 
_default_client_access_finding(all_clients) + network_detail = _network_detail_finding(all_networks) if broad_allow.get("user", 0): broad_allow_status = "Review" elif broad_allow.get("total", 0): @@ -1206,6 +1261,11 @@ def _security_baseline_rows( broad_allow_status = "Not detected" return [ ["Network segmentation", "Review" if network_count <= 2 else "Present", f"{_plural(network_count, 'network/VLAN definition')} captured."], + [ + "Subnet / DHCP backup", + "Review" if network_detail else ("Present" if network_count else "Not captured"), + f"{network_detail['summary']} Treat observed client addresses as planning evidence until authoritative DHCP scopes are exported." if network_detail else f"{_plural(network_count, 'network/VLAN definition')} captured with address-plan fields where exposed.", + ], [ "Client access policy", "Review" if default_access else ("Present" if all_clients else "Not captured"), @@ -1262,6 +1322,7 @@ def _implementation_plan_rows( all_devices: List[Dict[str, Any]], all_clients: List[Dict[str, Any]], device_names: Dict[str, str], + all_networks: List[Dict[str, Any]], all_wifi: List[Dict[str, Any]], all_firewall_policies: List[Dict[str, Any]], all_dns_policies: List[Dict[str, Any]], @@ -1275,6 +1336,7 @@ def _implementation_plan_rows( logging_disabled = sum(1 for policy in all_firewall_policies if not _as_bool(policy.get("loggingEnabled"))) client_concentration = _client_concentration_findings(all_clients, device_names) default_access = _default_client_access_finding(all_clients) + network_detail = _network_detail_finding(all_networks) if offline: rows.append(["Immediate", "0-2 weeks", "Validate offline inventory", f"{', '.join(offline[:6])}", "IT operations"]) @@ -1285,6 +1347,8 @@ def _implementation_plan_rows( rows.append(["Short-term", "2-6 weeks", "Validate concentrated client load", f"{finding['uplink']} has {finding['count']} of {finding['total']} captured clients ({finding['share']}%)", "Network engineering"]) if default_access: 
rows.append(["Short-term", "2-6 weeks", "Review client access policy segmentation", f"{default_access['default']} of {default_access['total']} captured clients use DEFAULT access", "Security / network engineering"]) + if network_detail: + rows.append(["Short-term", "2-6 weeks", "Complete VLAN/DHCP documentation", network_detail["summary"], "Network engineering"]) if weak_wifi: rows.append(["Short-term", "2-6 weeks", "Review SSID security posture", "; ".join(weak_wifi[:2]), "Security / network engineering"]) if logging_disabled: @@ -1669,6 +1733,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: all_devices=all_devices, all_clients=all_clients, device_names=device_names, + all_networks=site_payloads["networks"], all_wifi=site_payloads["wifi"], all_firewall_policies=site_payloads["firewall_policies"], all_dns_policies=site_payloads["dns_policies"], @@ -1684,6 +1749,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: all_devices=all_devices, all_clients=all_clients, device_names=device_names, + all_networks=site_payloads["networks"], all_wifi=site_payloads["wifi"], telemetry_probes=telemetry_probes, all_firewall_policies=site_payloads["firewall_policies"], @@ -1747,6 +1813,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: _data_confidence_rows( all_devices=all_devices, all_clients=all_clients, + all_networks=site_payloads["networks"], network_count=network_count, firewall_policy_count=firewall_policy_count, telemetry_probes=telemetry_probes, @@ -1878,6 +1945,9 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: clients = _read_site_file(source, site, "clients") zones = _read_site_file(source, site, "firewall_zones") zone_names = {str(zone.get("id")): str(zone.get("name") or zone.get("id")) for zone in zones if zone.get("id")} + if _network_detail_finding(networks): + sections.append("

Network Address Detail Coverage") + sections.append(_table(["Area", "Coverage", "Planning Use"], _network_detail_rows(networks))) sections.append("Configured Networks / VLANs") sections.append(_table(["Network", "VLAN", "Enabled", "Flags", "Subnet", "Gateway", "DHCP", "DHCP Range", "DNS", "Zone", "Origin"], _network_rows(networks, zone_names), "No network/VLAN endpoint data captured for this site.")) sections.append("Observed Client Address Space
    ") @@ -1933,6 +2003,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: ["Control Area", "Status", "Evidence / Interpretation"], _security_baseline_rows( all_clients=all_clients, + all_networks=site_payloads["networks"], all_wifi=site_payloads["wifi"], all_firewall_policies=site_payloads["firewall_policies"], all_dns_policies=site_payloads["dns_policies"], @@ -1953,6 +2024,7 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: all_devices=all_devices, all_clients=all_clients, device_names=device_names, + all_networks=site_payloads["networks"], all_wifi=site_payloads["wifi"], all_firewall_policies=site_payloads["firewall_policies"], all_dns_policies=site_payloads["dns_policies"], From 0c393c91d95aeb109a95d45e2e6270f962cec731 Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Wed, 6 May 2026 14:26:12 -0400 Subject: [PATCH 46/47] Add UniFi backup completion action plan --- ROADMAP.md | 2 + tests/test_unifi_report.py | 31 ++++- unifi/report.py | 253 +++++++++++++++++++++++++++++++++++-- 3 files changed, 272 insertions(+), 14 deletions(-) diff --git a/ROADMAP.md b/ROADMAP.md index cdcf66f..101ab6a 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -132,6 +132,8 @@ This project is currently functional as a Python reporting pipeline. The immedia security baseline, and implementation planning sections.~~ - ~~Promote missing UniFi subnet/gateway/DHCP fields into executive, confidence, security baseline, and implementation planning sections.~~ +- ~~Add a UniFi backup completion action plan that ranks missing telemetry, + address-plan, WAN, DNS, firewall, and optional endpoint evidence.~~ - Add deeper UniFi switch/AP port and radio telemetry when the controller API exposes it. 
diff --git a/tests/test_unifi_report.py b/tests/test_unifi_report.py index 0b82bf4..405f0b5 100644 --- a/tests/test_unifi_report.py +++ b/tests/test_unifi_report.py @@ -3,7 +3,7 @@ from unifi.client import UniFiRequestError from unifi.collect import _call_list, _collect_telemetry_probes -from unifi.report import build_report, _network_detail_finding +from unifi.report import build_report, _backup_completion_action_rows, _network_detail_finding from unifi.profiles import discover_site_profiles @@ -189,6 +189,13 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "Current State Assessment" in html assert "Top Operational Risks" in html assert "Recommended Priorities" in html + assert "Backup Completion Action Plan" in html + assert "Switch-port and AP-radio telemetry" in html + assert "Manual export needed" in html + assert "Address plan and DHCP scopes" in html + assert "WAN/provider details" in html + assert "DNS/security filtering owner" in html + assert "Optional controller endpoints" in html assert "Client concentration requires validation" in html assert "Validate client concentration" in html assert "Client access policy appears flat" in html @@ -261,11 +268,13 @@ def test_unifi_report_renders_inventory_and_network_sections(tmp_path: Path): assert "not exposed (HTTP 404)" in html assert "UniFi Executive Summary" in exec_html assert "Top Operational Risks" in exec_html + assert "Backup Completion Action Plan" in exec_html assert "Recommendations & Implementation Plan" in exec_html assert "Hardware Refresh & Budget Planning" in exec_html assert "Firewall and Policy Backup" not in exec_html assert "UniFi Backup Settings Report" in backup_html assert "Configuration Backup Completeness" in backup_html + assert "Backup Completion Action Plan" in backup_html assert "Telemetry Recovery Plan" in backup_html assert "Firewall and Policy Backup" in backup_html assert "Network Services Backup" in backup_html @@ -303,6 +312,26 @@ def 
test_unifi_report_network_detail_finding_flags_missing_address_fields(): assert "none expose subnet, gateway, DHCP mode, or DHCP range" in finding["summary"] +def test_unifi_report_backup_completion_action_rows_rank_missing_evidence(): + rows = _backup_completion_action_rows( + all_networks=[{"name": "Default", "vlanId": 1}], + all_wans=[{"name": "WAN 1"}], + all_dns_policies=[], + all_firewall_policies=[], + telemetry_probes=[{"label": "site_ports", "available": False, "status": 404}], + errors=[], + unsupported=[{"label": "Default:vpn_tunnels", "status": 404}], + ) + + areas = {row[1]: row for row in rows} + assert areas["Switch-port and AP-radio telemetry"][2] == "Manual export needed" + assert areas["Address plan and DHCP scopes"][2] == "Low detail" + assert areas["WAN/provider details"][2] == "Low detail" + assert areas["Firewall policy backup"][2] == "Missing" + assert areas["DNS/security filtering owner"][2] == "Confirm owner" + assert areas["Optional controller endpoints"][2] == "Documented gaps" + + def test_unifi_report_surfaces_remote_connector_auth_guidance(tmp_path: Path): source = tmp_path / "backup" source.mkdir() diff --git a/unifi/report.py b/unifi/report.py index 6d88da2..1d81d3a 100644 --- a/unifi/report.py +++ b/unifi/report.py @@ -599,28 +599,68 @@ def _service_endpoint_state(items: List[Dict[str, Any]]) -> str: def _wan_rows(wans: List[Dict[str, Any]]) -> List[List[Any]]: rows: List[List[Any]] = [] for wan in wans[:100]: - ip_gateway = " / ".join( - value - for value in ( - _first(wan, ("ipAddress", "ip", "address")), - _first(wan, ("gateway", "gatewayIp", "gatewayAddress")), - ) - if value - ) rows.append( [ _first(wan, ("name", "displayName", "id")), _first(wan, ("enabled", "state", "status"), "captured"), _first(wan, ("type", "wanType", "purpose")), - _first(wan, ("addressingType", "connectionType", "ipv4ConnectionType", "mode")), - ip_gateway, - _compact_value(wan.get("dnsServers") or wan.get("dns") or wan.get("nameservers")), + 
_wan_addressing(wan), + _wan_ip_gateway(wan), + _wan_dns(wan), _first(wan, ("id", "_id")), ] ) return rows +def _wan_addressing(wan: Dict[str, Any]) -> str: + return _first(wan, ("addressingType", "connectionType", "ipv4ConnectionType", "mode")) + + +def _wan_ip_gateway(wan: Dict[str, Any]) -> str: + return " / ".join( + value + for value in ( + _first(wan, ("ipAddress", "ip", "address")), + _first(wan, ("gateway", "gatewayIp", "gatewayAddress")), + ) + if value + ) + + +def _wan_dns(wan: Dict[str, Any]) -> str: + return _compact_value(wan.get("dnsServers") or wan.get("dns") or wan.get("nameservers")) + + +def _wan_detail_counts(wans: List[Dict[str, Any]]) -> Dict[str, int]: + return { + "total": len(wans), + "addressing": sum(1 for wan in wans if _wan_addressing(wan)), + "ip_gateway": sum(1 for wan in wans if _wan_ip_gateway(wan)), + "dns": sum(1 for wan in wans if _wan_dns(wan)), + } + + +def _wan_detail_finding(wans: List[Dict[str, Any]]) -> Dict[str, Any] | None: + counts = _wan_detail_counts(wans) + total = counts["total"] + if not total: + return None + if not any(counts[key] for key in ("addressing", "ip_gateway", "dns")): + return { + "status": "Low detail", + "summary": f"{_plural(total, 'WAN record')} captured, but no addressing, IP/gateway, or DNS fields were exposed.", + "counts": counts, + } + if counts["addressing"] < total or counts["ip_gateway"] < total: + return { + "status": "Partial", + "summary": f"{_plural(total, 'WAN record')} captured; detail coverage is addressing {counts['addressing']}/{total}, IP/gateway {counts['ip_gateway']}/{total}, DNS {counts['dns']}/{total}.", + "counts": counts, + } + return None + + def _vpn_rows(items: List[Dict[str, Any]]) -> List[List[Any]]: rows: List[List[Any]] = [] for item in items[:100]: @@ -1060,6 +1100,172 @@ def _telemetry_recovery_rows(telemetry_probes: List[Dict[str, Any]], net: Dict[s ] +def _backup_completion_action_rows( + *, + all_networks: List[Dict[str, Any]], + all_wans: List[Dict[str, Any]], + 
all_dns_policies: List[Dict[str, Any]], + all_firewall_policies: List[Dict[str, Any]], + telemetry_probes: List[Dict[str, Any]], + errors: List[Dict[str, Any]], + unsupported: List[Dict[str, Any]], +) -> List[List[Any]]: + rows: List[List[Any]] = [] + + if errors: + rows.append( + [ + "1", + "Collection errors", + "Fix first", + f"{_plural(len(errors), 'hard endpoint error')} captured.", + "Resolve credential, permission, or reachability errors and rerun before treating the backup as final.", + ] + ) + + telemetry_total = len(telemetry_probes) + telemetry_available = sum(1 for probe in telemetry_probes if probe.get("available")) + if telemetry_total and telemetry_available == 0: + rows.append( + [ + "1", + "Switch-port and AP-radio telemetry", + "Manual export needed", + f"0 of {telemetry_total} detailed telemetry probe endpoint(s) returned data.", + "Export UniFi support data or controller screenshots for switch ports, PoE draw, AP channel, AP power, RF utilization, and channel utilization.", + ] + ) + elif telemetry_total and telemetry_available < telemetry_total: + rows.append( + [ + "2", + "Switch-port and AP-radio telemetry", + "Partial", + f"{telemetry_available} of {telemetry_total} detailed telemetry probe endpoint(s) returned data.", + "Use captured telemetry where present and manually fill missing port, PoE, channel, and RF fields before final planning.", + ] + ) + elif not telemetry_total: + rows.append( + [ + "2", + "Switch-port and AP-radio telemetry", + "Not captured", + "No detailed telemetry probes were saved in this backup.", + "Run the current UniFi pipeline with local Network Application access, then export screenshots if the controller still does not expose telemetry endpoints.", + ] + ) + + network_detail = _network_detail_finding(all_networks) + if network_detail: + rows.append( + [ + "1", + "Address plan and DHCP scopes", + network_detail["status"], + network_detail["summary"], + "Export controller UI network/VLAN settings or screenshots for 
subnet, gateway, DHCP mode, DHCP ranges, DNS servers, and relay/server ownership.", + ] + ) + elif all_networks: + rows.append( + [ + "3", + "Address plan and DHCP scopes", + "Captured", + f"{_plural(len(all_networks), 'network/VLAN definition')} include address-plan fields exposed by the API.", + "Use this table as a planning input and still validate against the live controller before migration.", + ] + ) + else: + rows.append( + [ + "1", + "Address plan and DHCP scopes", + "Missing", + "No network/VLAN endpoint data was captured.", + "Validate API permissions and export controller UI network settings before disaster-recovery or migration use.", + ] + ) + + wan_detail = _wan_detail_finding(all_wans) + if wan_detail: + rows.append( + [ + "2", + "WAN/provider details", + wan_detail["status"], + wan_detail["summary"], + "Record ISP circuit labels, handoff ports, addressing mode, static IPs, gateways, DNS, failover order, and provider contacts from the controller UI or install notes.", + ] + ) + elif all_wans: + rows.append( + [ + "3", + "WAN/provider details", + "Captured", + f"{_plural(len(all_wans), 'WAN record')} include addressing and gateway fields exposed by the API.", + "Validate circuit labels and provider contacts outside the API backup.", + ] + ) + else: + rows.append( + [ + "2", + "WAN/provider details", + "Missing", + "No WAN endpoint data was captured.", + "Export WAN settings from the controller UI and document provider handoff details.", + ] + ) + + if all_firewall_policies: + rows.append( + [ + "3", + "Firewall policy backup", + "Captured", + f"{_plural(len(all_firewall_policies), 'firewall policy', 'firewall policies')} captured.", + "Review policy intent and logging, then archive this JSON alongside any controller support export.", + ] + ) + else: + rows.append( + [ + "1", + "Firewall policy backup", + "Missing", + "No firewall policies were captured.", + "Validate policy endpoint permissions and export screenshots before treating this as a security 
backup.", + ] + ) + + if not all_dns_policies: + rows.append( + [ + "2", + "DNS/security filtering owner", + "Confirm owner", + "No UniFi DNS policies were captured.", + "Document whether filtering lives in UniFi, upstream DNS, firewall content filtering, endpoint security, or another security stack.", + ] + ) + + if unsupported: + rows.append( + [ + "3", + "Optional controller endpoints", + "Documented gaps", + f"{_plural(len(unsupported), 'optional endpoint coverage note')} captured.", + "Keep these notes with the report; they explain controller/API limits rather than failed mandatory collection.", + ] + ) + + return sorted(rows, key=lambda row: (int(row[0]), str(row[1]))) + + def _wifi_security_weak(wifi: Iterable[Dict[str, Any]]) -> List[str]: weak: List[str] = [] for wlan in wifi: @@ -1228,12 +1434,13 @@ def _data_confidence_rows( ) -> List[List[Any]]: telemetry_available = sum(1 for probe in telemetry_probes if probe.get("available")) network_detail = _network_detail_finding(all_networks) + wan_detail = _wan_detail_finding(all_wans) return [ ["Inventory and device status", "High" if all_devices else "Low", f"{_plural(len(all_devices), 'device record')} captured with controller state."], ["Client attachment detail", "High" if all_clients else "Low", f"{_plural(len(all_clients), 'client record')} captured with uplink mapping where present."], ["VLAN/network definitions", "Low" if network_detail else ("Medium" if network_count else "Low"), network_detail["summary"] if network_detail else f"{_plural(network_count, 'network/VLAN definition')} captured with subnet/DHCP fields where exposed."], ["Firewall policy backup", "High" if firewall_policy_count else "Low", f"{_plural(firewall_policy_count, 'policy', 'policies')} captured."], - ["WAN detail", "Low" if all_wans else "Not captured", f"{_plural(len(all_wans), 'WAN record')} captured; current endpoint only exposed labels in this run."], + ["WAN detail", "Low" if wan_detail else ("Medium" if all_wans else "Not 
captured"), wan_detail["summary"] if wan_detail else f"{_plural(len(all_wans), 'WAN record')} captured with addressing fields where exposed."], ["Port and radio telemetry", "Low" if telemetry_available == 0 else "Medium", _telemetry_gap_summary(telemetry_probes)], ] @@ -1707,6 +1914,15 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: client_age = _client_age_buckets(all_clients, collected_at) pricing = _pricing_payload() hardware_rows, hardware_totals = _hardware_refresh_rows(all_devices, legacy_aps, pricing) + backup_completion_rows = _backup_completion_action_rows( + all_networks=site_payloads["networks"], + all_wans=site_payloads["wans"], + all_dns_policies=site_payloads["dns_policies"], + all_firewall_policies=site_payloads["firewall_policies"], + telemetry_probes=telemetry_probes, + errors=errors, + unsupported=unsupported, + ) cards = [ ("Sites", len(site_summaries) or len(sm_sites)), ("Devices", len(all_devices)), @@ -1835,6 +2051,15 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append(_health_cards(health_cards)) sections.append("
    ") + sections.append("

    1A. Backup Completion Action Plan

    ") + sections.append( + _table( + ["Priority", "Backup Area", "Status", "Evidence", "Recommended Completion Step"], + backup_completion_rows, + ) + ) + sections.append("
    ") + sections.append("

    Guide. How to Use This Report

    ") sections.append( _table( @@ -2126,10 +2351,11 @@ def build_report(source_dir: str, output_dir: str) -> Dict[str, str]: sections.append("
    ") complete_body = "\n".join(sections) - exec_body = _select_sections(complete_body, ("1. Executive Summary", "Guide. How to Use This Report", "9. Recommendations & Implementation Plan", "10. Hardware Refresh & Budget Planning")) + exec_body = _select_sections(complete_body, ("1. Executive Summary", "1A. Backup Completion Action Plan", "Guide. How to Use This Report", "9. Recommendations & Implementation Plan", "10. Hardware Refresh & Budget Planning")) backup_body = _select_sections( complete_body, ( + "1A. Backup Completion Action Plan", "2. Collection Coverage", "4. Configuration Backup Completeness", "6. Sites, Networks, VLANs, and DHCP", @@ -2203,6 +2429,7 @@ def _html_shell( collected = metadata.get("collectedAt") or "not captured" toc_items = toc_items or [ ("1", "Executive Summary", "1-executive-summary"), + ("1A", "Backup Completion Action Plan", "1a-backup-completion-action-plan"), ("Guide", "How to Use This Report", "guide-how-to-use-this-report"), ("2", "Collection Coverage", "2-collection-coverage"), ("3", "Network Overview", "3-network-overview"), From 1cbbafcf8c330f76d64bfd1051e14b6f05652e1b Mon Sep 17 00:00:00 2001 From: "techmore.co" Date: Mon, 11 May 2026 22:55:34 -0400 Subject: [PATCH 47/47] Add codebase audit roadmap goal --- ROADMAP.md | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/ROADMAP.md b/ROADMAP.md index 101ab6a..ab69a62 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -137,6 +137,30 @@ This project is currently functional as a Python reporting pipeline. The immedia - Add deeper UniFi switch/AP port and radio telemetry when the controller API exposes it. +## Phase 7: Codebase Audit and Lean Enhancements - Planned + +Goal: audit the working Meraki and UniFi reporting codebase for improvements +that reduce maintenance burden, improve report reliability, and make future +enhancements safer without disrupting the default `./run.sh` and +`./unifi/run.sh` workflows. 
+ +- Map the current pipeline modules, generated artifacts, raw backup locations, + and test coverage so cleanup work does not regress report generation. +- Review `run.sh`, `unifi/run.sh`, `reporting/`, `unifi/`, reference JSON, and + tests for duplicated logic, overly large functions, weak boundaries, stale + compatibility paths, and low-risk extraction opportunities. +- Identify report-generation quality risks, especially PDF layout pressure, + overly wide tables, brittle HTML string assembly, missing fixture coverage, + and places where unavailable API fields could be mistaken for network issues. +- Audit API collection and backup handling for clear separation between + customer-specific data, generated reports, reusable references, and source + code. +- Produce a prioritized audit summary with `do now`, `defer`, and `do not + change` categories before broad refactors. +- Implement only surgical cleanup after the audit: small extractions, stronger + tests, clearer names, dead-code removal, and documentation updates that keep + Meraki and UniFi report output behavior stable. + ## Release Checklist - Run `./install.sh`.