chore(agent-data-plane): initialize TLS early on before spawning supervisor #1177
tobz wants to merge 1 commit into tobz/supervisor-health-registry-worker
Conversation
Warning: This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
This stack of pull requests is managed by Graphite. Learn more about stacking.
Pull request overview
Exposes Rustls ServerConfig from the crate’s net module, likely to allow TLS configuration types to be referenced earlier/higher in the agent data-plane initialization flow.
Changes:
- Re-exported rustls::ServerConfig from lib/saluki-io/src/net/mod.rs.
```rust
pub mod util;

mod ipc;
pub use rustls::ServerConfig;
```
Re-exporting a third-party type (rustls::ServerConfig) from your public API couples this crate’s semver stability to Rustls’ public API and can make future Rustls upgrades breaking for downstream users. If this is only needed internally, prefer pub(crate) use rustls::ServerConfig;. If it must be public, consider introducing a crate-owned wrapper/type alias in a dedicated TLS module (e.g., net::tls) to better control your public surface area.
```diff
- pub use rustls::ServerConfig;
+ pub(crate) use rustls::ServerConfig;
```
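For reference, a minimal sketch of the wrapper approach the review comment suggests. The module layout, the `TlsServerConfig` name, and the stand-in `rustls` module below are illustrative assumptions, not code from this PR:

```rust
use std::sync::Arc;

// Stand-in module so this sketch compiles on its own; the real crate
// would depend on the actual rustls crate instead.
pub mod rustls {
    pub struct ServerConfig;
}

/// Hypothetical crate-owned newtype (e.g. living in a net::tls module)
/// that keeps Rustls types out of the crate's public API surface.
pub struct TlsServerConfig(Arc<rustls::ServerConfig>);

impl TlsServerConfig {
    /// Wraps an already-built Rustls server configuration.
    pub fn new(config: rustls::ServerConfig) -> Self {
        Self(Arc::new(config))
    }

    /// Crate-internal accessor for code that needs the raw Rustls config;
    /// downstream users never touch the Rustls type directly.
    pub(crate) fn as_rustls(&self) -> &Arc<rustls::ServerConfig> {
        &self.0
    }
}
```

With a wrapper like this, a semver-incompatible Rustls upgrade changes only the crate's internals rather than its public API.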
Binary Size Analysis (Agent Data Plane)
Target: d3ab905 (baseline) vs 642f2f6 (comparison) diff
| Module | File Size | Symbols |
|---|---|---|
| saluki_core::runtime::supervisor | +68.89 KiB | 66 |
| core | +44.99 KiB | 247 |
| agent_data_plane::internal::initialize_and_launch_runtime | -22.07 KiB | 2 |
| agent_data_plane::internal::create_internal_supervisor | +16.17 KiB | 1 |
| saluki_app::memory::MemoryBoundsConfiguration | -13.70 KiB | 1 |
| agent_data_plane::internal::control_plane | -12.33 KiB | 26 |
| std | -10.50 KiB | 52 |
| anyhow | +8.26 KiB | 30 |
| agent_data_plane::cli::run | +8.04 KiB | 8 |
| [sections] | +7.68 KiB | 10 |
| saluki_core::runtime::process | +6.71 KiB | 6 |
| agent_data_plane::internal::observability | +5.60 KiB | 16 |
| saluki_core::topology::running | +5.52 KiB | 2 |
| saluki_app::metrics::collect_runtime_metrics | -4.74 KiB | 1 |
| tokio | -3.95 KiB | 109 |
| saluki_core::runtime::restart | +3.50 KiB | 7 |
| tracing_core | +2.24 KiB | 14 |
| saluki_health::Runner::run | +1.91 KiB | 8 |
| hashbrown | +1.81 KiB | 8 |
| saluki_health::RunnerGuard | +1.71 KiB | 3 |
Detailed Symbol Changes
```
    FILE SIZE        VM SIZE
 --------------   --------------
  +2.6% +104Ki    +2.1% +76.7Ki    [1116 Others]
  [NEW] +59.2Ki   [NEW] +58.9Ki    _<agent_data_plane::internal::control_plane::PrivilegedApiWorker as saluki_core::runtime::supervisor::Supervisable>::initialize::_{{closure}}::h11fe1c9a197d4bbb
  [NEW] +21.3Ki   [NEW] +21.1Ki    _<agent_data_plane::internal::control_plane::UnprivilegedApiWorker as saluki_core::runtime::supervisor::Supervisable>::initialize::_{{closure}}::hd516f5c8e792be07
  [NEW] +18.6Ki   [NEW] +18.4Ki    saluki_app::api::APIBuilder::serve::_{{closure}}::hc67130ad013550cb
  [NEW] +16.2Ki   [NEW] +16.1Ki    saluki_core::runtime::supervisor::WorkerState::add_worker::h2f57c36c7d6a6d25
  [NEW] +16.2Ki   [NEW] +16.0Ki    agent_data_plane::internal::create_internal_supervisor::_{{closure}}::hc50f0c81432105dc
  [NEW] +15.1Ki   [NEW] +15.0Ki    _<core::pin::Pin<P> as core::future::future::Future>::poll::hf993bf6d214be6bb
  [NEW] +11.0Ki   [NEW] +10.9Ki    std::sys::backtrace::__rust_begin_short_backtrace::h7db93e33ffed5b3c
  [NEW] +10.7Ki   [NEW] +10.6Ki    <saluki_core::data_model::event::Event as core::clone::Clone>::clone.10470
  [NEW] +10.5Ki   [NEW] +10.4Ki    saluki_health::Runner::run::_{{closure}}::h99455b3fddb78a9b
  [NEW] +9.47Ki   [NEW] +9.34Ki    saluki_core::runtime::supervisor::Supervisor::run_inner::_{{closure}}::hf3a194df28b5bfa6
  [NEW] +6.80Ki   [NEW] +6.66Ki    saluki_core::runtime::supervisor::WorkerState::shutdown_workers::_{{closure}}::he6d5081c217bc272
  [NEW] +6.24Ki   [NEW] +6.10Ki    saluki_core::runtime::supervisor::WorkerState::shutdown_workers::_{{closure}}::h1d900b0791e57095
  [DEL] -8.42Ki   [DEL] -8.33Ki    std::sys::backtrace::__rust_begin_short_backtrace::h61680b9753eb2342
  [DEL] -9.20Ki   [DEL] -9.10Ki    saluki_health::Runner::run::_{{closure}}::h36f8d77002f294fa
  [DEL] -10.7Ki   [DEL] -10.6Ki    <saluki_core::data_model::event::Event as core::clone::Clone>::clone.10695
  [DEL] -13.7Ki   [DEL] -13.6Ki    saluki_app::memory::MemoryBoundsConfiguration::try_from_config::h331a424a41827053
  [DEL] -18.0Ki   [DEL] -17.8Ki    agent_data_plane::internal::control_plane::spawn_control_plane::_{{closure}}::_{{closure}}::h9b9375d6b068ec9c
  [DEL] -18.4Ki   [DEL] -18.2Ki    agent_data_plane::internal::initialize_and_launch_runtime::_{{closure}}::h3544d2e34d6be2ff
  [DEL] -18.7Ki   [DEL] -18.6Ki    saluki_app::api::APIBuilder::serve::_{{closure}}::hfe866520c248a5c1
  [DEL] -84.5Ki   [DEL] -84.4Ki    agent_data_plane::internal::control_plane::spawn_control_plane::_{{closure}}::hd6ee71eb8e3b5d42
  +0.4% +124Ki    +0.4% +95.6Ki    TOTAL
```
Regression Detector (Agent Data Plane)
Regression Detector Results
Run ID: d995e586-47e8-4142-8e35-9972f8da879d
Baseline: d3ab905
❌ Experiments with retried target crashes
This is a critical error. One or more replicates failed with a non-zero exit code. These replicates may have been retried. See Replicate Execution Details for more information.
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | otlp_ingest_logs_5mb_memory | memory utilization | +1.68 | [+1.22, +2.13] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_cpu | % cpu utilization | +0.16 | [-4.92, +5.23] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_throughput | ingress throughput | +0.01 | [-0.12, +0.15] | 1 | (metrics) (profiles) (logs) |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | otlp_ingest_metrics_5mb_memory | memory utilization | +2.29 | [+2.03, +2.56] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_cpu | % cpu utilization | +1.82 | [-52.02, +55.67] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_memory | memory utilization | +1.68 | [+1.22, +2.13] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_memory | memory utilization | +1.02 | [+0.83, +1.20] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_memory | memory utilization | +0.99 | [+0.80, +1.17] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_medium | memory utilization | +0.94 | [+0.75, +1.13] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_low | memory utilization | +0.83 | [+0.67, +1.00] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_cpu | % cpu utilization | +0.53 | [-0.86, +1.92] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_throughput | ingress throughput | +0.50 | [+0.38, +0.63] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_memory | memory utilization | +0.50 | [+0.25, +0.75] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_memory | memory utilization | +0.49 | [+0.30, +0.68] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_idle | memory utilization | +0.44 | [+0.40, +0.48] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_ultraheavy | memory utilization | +0.32 | [+0.20, +0.45] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_memory | memory utilization | +0.26 | [+0.06, +0.46] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_memory | memory utilization | +0.22 | [+0.04, +0.40] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_heavy | memory utilization | +0.19 | [+0.06, +0.32] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_cpu | % cpu utilization | +0.16 | [-4.92, +5.23] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_throughput | ingress throughput | +0.02 | [-0.03, +0.06] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_throughput | ingress throughput | +0.01 | [-0.12, +0.15] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_throughput | ingress throughput | +0.00 | [-0.05, +0.05] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_throughput | ingress throughput | +0.00 | [-0.06, +0.06] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_throughput | ingress throughput | -0.00 | [-0.15, +0.15] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_throughput | ingress throughput | -0.00 | [-0.03, +0.02] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_metrics_5mb_throughput | ingress throughput | -0.01 | [-0.14, +0.12] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_cpu | % cpu utilization | -0.24 | [-52.77, +52.28] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_cpu | % cpu utilization | -0.69 | [-2.91, +1.53] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_cpu | % cpu utilization | -1.02 | [-31.15, +29.12] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_cpu | % cpu utilization | -1.40 | [-7.38, +4.58] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_metrics_5mb_cpu | % cpu utilization | -2.55 | [-7.83, +2.73] | 1 | (metrics) (profiles) (logs) |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | quality_gates_rss_dsd_heavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_low | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_medium | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_ultraheavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_idle | memory_usage | 10/10 | (metrics) (profiles) (logs) |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide that a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that, if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
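The three criteria above can be sketched as a small predicate. This is a simplified illustration of the decision rule as described; it is not the regression detector's actual implementation:

```rust
/// Simplified sketch of the regression decision rule described above.
/// `delta_mean_pct` is the estimated Δ mean %, `ci` its 90% confidence
/// interval (low, high), and `erratic` whether the experiment's
/// configuration marks it "erratic".
fn is_regression(delta_mean_pct: f64, ci: (f64, f64), erratic: bool) -> bool {
    // Criterion 1: effect size tolerance, |Δ mean %| ≥ 5.00%.
    let big_enough = delta_mean_pct.abs() >= 5.0;
    // Criterion 2: the confidence interval does not contain zero.
    let ci_excludes_zero = ci.0 > 0.0 || ci.1 < 0.0;
    // Criterion 3: the experiment is not marked erratic.
    big_enough && ci_excludes_zero && !erratic
}
```

For example, `otlp_ingest_logs_5mb_memory` above (+1.68, CI [+1.22, +2.13]) has a CI excluding zero but an effect size below the 5% tolerance, so it is not flagged as a regression.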
Replicate Execution Details
We run multiple replicates for each experiment/variant. However, we allow replicates to be automatically retried if there are any failures, up to 8 times, at which point the replicate is marked dead and we are unable to run analysis for the entire experiment. We call each of these attempts at running replicates a replicate execution. This section lists all replicate executions that failed due to the target crashing or being oom killed.
Note: In the below tables we bucket failures by experiment, variant, and failure type. For each of these buckets we list out the replicate indexes that failed with an annotation signifying how many times said replicate failed with the given failure mode. In the below example the baseline variant of the experiment named experiment_with_failures had two replicates that failed by oom kills. Replicate 0, which failed 8 executions, and replicate 1 which failed 6 executions, all with the same failure mode.
| Experiment | Variant | Replicates | Failure | Logs | Debug Dashboard |
|---|---|---|---|---|---|
| experiment_with_failures | baseline | 0 (x8) 1 (x6) | Oom killed | | Debug Dashboard |
The debug dashboard links will take you to a debugging dashboard specifically designed to investigate replicate execution failures.
❌ Retried Normal Replicate Execution Failures (non-profiling)
| Experiment | Variant | Replicates | Failure | Debug Dashboard |
|---|---|---|---|---|
| dsd_uds_10mb_3k_contexts_cpu | baseline | 3 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_logs_5mb_cpu | comparison | 1 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_logs_5mb_throughput | baseline | 7 | Failed to shutdown when requested | Debug Dashboard |
Pull request overview
Copilot reviewed 1 out of 4 changed files in this pull request and generated no new comments.
Summary
This PR moves the initialization of TLS primitives/configuration for the privileged API worker to occur earlier so that the process can fail fast if misconfigured.
Prior to this PR, we initialized TLS primitives/configuration during the initialization of the worker itself, which meant that errors or delays during loading could cause downstream issues in the supervisor. Instead, we want to do this once at process start and surface any resulting errors immediately, so that we fail fast and loudly, since there's likely nothing the process can do to recover from a TLS misconfiguration.
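The intent can be sketched roughly as follows. All names and error handling here are illustrative stand-ins, not the actual ADP code, and `ServerConfig` is a placeholder for the Rustls type:

```rust
use std::sync::Arc;

// Stand-in for the TLS server configuration type; in ADP this would be
// rustls::ServerConfig (re-exported from saluki_io::net in this stack).
pub struct ServerConfig;

/// Illustrative: builds the TLS configuration for the privileged API
/// worker. All certificate/key loading and validation would happen here,
/// once, at process start.
fn build_tls_config() -> Result<Arc<ServerConfig>, String> {
    // Certificate/key loading and validation elided in this sketch.
    Ok(Arc::new(ServerConfig))
}

/// Illustrative process bootstrap: TLS is initialized before the
/// supervisor is spawned, so any misconfiguration aborts startup
/// immediately ("fail fast and loudly") instead of surfacing later
/// inside a supervised worker's initialize().
fn initialize_and_launch() -> Result<(), String> {
    let tls_config = build_tls_config()
        .map_err(|e| format!("failed to initialize TLS for privileged API: {e}"))?;

    // The pre-built config is handed to the worker, whose own
    // initialization no longer needs a fallible TLS-setup step.
    spawn_supervisor_with(tls_config)
}

fn spawn_supervisor_with(_tls_config: Arc<ServerConfig>) -> Result<(), String> {
    // Supervisor spawning elided in this sketch.
    Ok(())
}
```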
Change Type
How did you test this PR?
Built and ran ADP and ensured it still properly initialized and ran the privileged API.
References
AGTMETRICS-393