
feat(ddtrace/tracer): add span lifecycle benchmarks#4682

Open
darccio wants to merge 1 commit into main from dario.castane/langplat-59/validation

Conversation

darccio (Member) commented Apr 20, 2026

What does this PR do?

Adds BenchmarkSpanLifecycle — an end-to-end benchmark for the span hot path (start → sample → tag → finish) with settings that exercise all 6 optimization PRs merged across v2.6–v2.8.

Three sub-benchmarks focus on root spans, where all 6 optimizations converge (child spans inherit sampling priority and skip 4 of the 6 paths); a sketch of their shape follows the list:

  • Minimal — root span with 5 Tag() options, baseline signal.
  • TagHeavy — root span with 12 tags, amplifies setTags batch lock improvement.
  • MultiService — rotating service names, exercises serviceEnvKey map lookups with different keys.
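
As a rough sketch of what the Minimal variant could look like (a hedged reconstruction from the description above, not the PR's exact code: the option names are the public v2 API, while the tag keys, operation name, and the 128-bit-ID env var are assumptions):

package tracer_test

import (
	"testing"

	"github.com/DataDog/dd-trace-go/v2/ddtrace/tracer"
)

// Sketch of the Minimal sub-benchmark: a root span with five tags,
// started with sampling enabled, global tags, git metadata, and
// 128-bit trace IDs so the full hot path runs on every iteration.
func BenchmarkSpanLifecycleMinimal(b *testing.B) {
	b.Setenv("DD_GIT_REPOSITORY_URL", "https://github.com/DataDog/dd-trace-go")
	b.Setenv("DD_GIT_COMMIT_SHA", "abc123def456789abc123def456789abc123def4")
	// Assumed env var; newer tracer versions may enable 128-bit IDs by default.
	b.Setenv("DD_TRACE_128_BIT_TRACEID_GENERATION_ENABLED", "true")

	if err := tracer.Start(
		tracer.WithSamplerRate(1.0),          // sampling stays on, unlike older benchmarks
		tracer.WithGlobalTag("env", "bench"), // global tags are part of the measured path
	); err != nil {
		b.Fatal(err)
	}
	defer tracer.Stop()

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		span := tracer.StartSpan("http.request", // root span: all six optimized paths apply
			tracer.Tag("k1", "v1"),
			tracer.Tag("k2", "v2"),
			tracer.Tag("k3", "v3"),
			tracer.Tag("k4", "v4"),
			tracer.Tag("k5", "v5"),
		)
		span.Finish()
	}
}

TagHeavy extends this to 12 tags; MultiService rotates the service name per iteration so each lookup hits a different serviceEnvKey entry.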

Motivation

No existing benchmark exercised the full span lifecycle with sampling enabled, global tags, git metadata, and 128-bit trace IDs — the conditions needed to measure these optimizations together. Existing benchmarks either disable sampling (WithSamplerRate(0)) or measure isolated components.

Results (v2.5.0 → v2.8.0, 10 runs, Apple M1 Max)

                              │ v2.5.0         │             v2.8.0                  │
                              │     sec/op     │    sec/op     vs base               │
SpanLifecycle/Minimal-10           4.354µ ± 6%   4.716µ ± 21%  +8.33% (p=0.035 n=10)
SpanLifecycle/TagHeavy-10          5.432µ ± 1%   5.298µ ±  5%  -2.47% (p=0.042 n=10)
SpanLifecycle/MultiService-10      3.941µ ± 4%   4.130µ ± 10%  +4.78% (p=0.029 n=10)
geomean                            4.534µ        4.690µ        +3.45%

                              │ v2.5.0         │             v2.8.0                  │
                              │      B/op      │     B/op      vs base               │
SpanLifecycle/Minimal-10          6.933Ki ± 2%   6.698Ki ± 2%  -3.38% (p=0.000 n=10)
SpanLifecycle/TagHeavy-10        10.025Ki ± 5%   9.799Ki ± 4%  -2.26% (p=0.005 n=10)
SpanLifecycle/MultiService-10     7.471Ki ± 9%   7.248Ki ± 7%  -2.99% (p=0.008 n=10)
geomean                           8.037Ki        7.806Ki       -2.88%

                              │ v2.5.0         │            v2.8.0                   │
                              │   allocs/op    │ allocs/op   vs base                 │
SpanLifecycle/Minimal-10            81.00 ± 0%   77.00 ± 1%  -4.94% (p=0.000 n=10)
SpanLifecycle/TagHeavy-10           98.00 ± 1%   93.00 ± 1%  -5.10% (p=0.000 n=10)
SpanLifecycle/MultiService-10       78.00 ± 0%   73.00 ± 1%  -6.41% (p=0.000 n=10)
geomean                             85.23        80.56       -5.49%

The 6 PRs delivered ~3% fewer bytes and ~5% fewer allocs per op. CPU regressed ~3–8% due to the internalConfig refactor (#4645, #4559, #4653), which replaced direct field reads with RLock/defer/RUnlock accessor methods (~8 mutex round-trips per span). TagHeavy is the only sub-benchmark showing a CPU improvement (-2.47%), confirming that the setTags batch-lock optimization outweighs the config overhead when the tag count is high.
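
For context, a minimal sketch of the accessor pattern in question (illustrative type and field names, not the actual internalConfig code):

package config

import "sync"

// internalConfig sketches the post-refactor shape: reads go through
// lock-guarded accessor methods instead of direct field access.
type internalConfig struct {
	mu          sync.RWMutex
	serviceName string
}

// Before the refactor a caller read cfg.serviceName directly; now each
// read pays an RLock/defer/RUnlock round-trip, and a span performs
// roughly eight such reads between start and finish.
func (c *internalConfig) ServiceName() string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.serviceName
}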

Reviewer's Checklist

  • System-Tests covering this feature have been added and enabled with the va.b.c-dev version tag.
  • There is a benchmark for any new code, or changes to existing code.
  • If this interacts with the agent in a new way, a system test has been added.
  • New code is free of linting errors. You can check this by running make lint locally.
  • New code doesn't break existing tests. You can check this by running make test locally.
  • Add an appropriate team label so this PR gets put in the right place for the release notes.
  • All generated files are up to date. You can check this by running make generate locally.

Unsure? Have a question? Request a review!

darccio force-pushed the dario.castane/langplat-59/validation branch from b627ddf to a2cb245 on April 20, 2026 at 18:02
datadog-datadog-prod-us1 bot commented Apr 20, 2026

Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 100.00%
Overall Coverage: 60.90% (+4.18%)

🔗 Commit SHA: a2cb245

darccio marked this pull request as ready for review on April 20, 2026 at 18:06
darccio requested a review from a team as a code owner on April 20, 2026 at 18:06
darccio (Member, Author) commented Apr 20, 2026

@codex review

chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a2cb24562b

Comment on lines +24 to +25
b.Setenv("DD_GIT_REPOSITORY_URL", "https://github.com/DataDog/dd-trace-go")
b.Setenv("DD_GIT_COMMIT_SHA", "abc123def456789abc123def456789abc123def4")

P2: Refresh git metadata after setting benchmark env

When this benchmark runs in the normal go test -bench flow, package tests such as TestGitMetadata can leave the internal git metadata cache already initialized under different env values. GetGitMetadataTags is cached, so these later Setenv calls are not observed unless internal.RefreshGitMetadataTags() is called. In that scenario the benchmark silently runs with stale or empty _dd.git.* tags and does not actually exercise the git-metadata hot path it is intended to measure.

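A sketch of the suggested fix; the refresh helper is the one named in the review above, while the v2 internal import path and the wrapper function are assumptions:

package tracer_test

import (
	"testing"

	"github.com/DataDog/dd-trace-go/v2/internal"
)

// setupGitMetadata sets the git env vars and then invalidates the
// cached _dd.git.* tags, so the benchmark observes these values even
// when an earlier test already initialized the cache.
func setupGitMetadata(b *testing.B) {
	b.Setenv("DD_GIT_REPOSITORY_URL", "https://github.com/DataDog/dd-trace-go")
	b.Setenv("DD_GIT_COMMIT_SHA", "abc123def456789abc123def456789abc123def4")
	internal.RefreshGitMetadataTags() // helper named in the review; placement is an assumption
}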

pr-commenter bot commented Apr 20, 2026

Benchmarks

Benchmark execution time: 2026-04-20 19:17:28

Comparing candidate commit a2cb245 in PR branch dario.castane/langplat-59/validation with baseline commit 21bb70d in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 156 metrics, 8 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
