
Fix GitLab pipeline creation failure caused by microbenchmarks-pr-comment needs#5522

Merged
ivoanjo merged 1 commit into master from fix/microbenchmarks-optional-needs
Mar 30, 2026

Conversation

@p-datadog
Member

@p-datadog p-datadog commented Mar 27, 2026

What does this PR do?

Adds optional: true to the microbenchmarks-pr-comment job's needs dependency on microbenchmarks in .gitlab/benchmarks.yml.

Motivation:

#5488 added changes: rules to the microbenchmarks job so it's skipped on PRs that don't touch relevant files — but didn't update microbenchmarks-pr-comment, which has needs: [microbenchmarks]. When microbenchmarks is filtered out, GitLab rejects the entire pipeline because a needed job doesn't exist.

This breaks any PR where GitLab's changes: evaluation excludes microbenchmarks, not just docs-only PRs. Two PRs are currently blocked:

  • Document one-pipeline source and provenance in AGENTS.md #5515 — docs-only (AGENTS.md). Expected to be excluded by changes:.
  • DI: add a C extension #5111 — touches ext/, lib/, spec/, yet microbenchmarks is still excluded. GitLab's changes: evaluation can exclude jobs even when matching paths are present (e.g. when the diff base is ambiguous or the branch diverged significantly). This means the needs: without optional: true is a broader failure than just docs-only PRs.

Both show the same Mosaic pipeline creation error:

POST .../pipeline: 400 {message: {base: [
  'microbenchmarks-pr-comment' job needs 'microbenchmarks: [profiling]' job,
  but 'microbenchmarks: [profiling]' does not exist in the pipeline.
  ...
  To need a job that sometimes does not exist in the pipeline, use needs:optional.
]}}

The causal chain:

  1. microbenchmarks has rules: with changes: filters (added in #5488, "ci: skip microbenchmarks on PRs that don't touch relevant files").
  2. GitLab evaluates changes: and excludes microbenchmarks from the pipeline.
  3. microbenchmarks-pr-comment declares needs: [microbenchmarks] without optional: true.
  4. GitLab validates the DAG and rejects the pipeline with a 400.
  5. No pipeline runs → dd-gitlab/finished is never posted → required check stays pending → PR is blocked.
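
The chain above can be illustrated with a minimal sketch of the pre-fix shape of the two jobs. This is a hypothetical reconstruction for illustration only: the actual scripts, paths, and rule details in .gitlab/benchmarks.yml may differ.

```yaml
# Hypothetical pre-fix shape of .gitlab/benchmarks.yml (illustration only).
microbenchmarks:
  script: ./run-microbenchmarks.sh   # placeholder command
  rules:
    # The job is dropped from the pipeline when no matching file changed.
    - changes:
        - ext/**/*
        - lib/**/*
        - benchmarks/**/*

microbenchmarks-pr-comment:
  # Hard dependency: when `microbenchmarks` is filtered out by `changes:`,
  # GitLab's DAG validation rejects the entire pipeline with a 400.
  needs: [microbenchmarks]
  script: ./post-pr-comment.sh       # placeholder command
```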

The fix:

# Before
needs: [microbenchmarks]

# After
needs:
  - job: microbenchmarks
    optional: true

When microbenchmarks is filtered out, microbenchmarks-pr-comment is simply skipped (when: on_success with no upstream dependency to satisfy). When microbenchmarks does run, behavior is unchanged.

Change log entry

None.

Additional Notes:

Introduced by #5488 (merged 2026-03-23). Confirmed that docs-only PRs merged before that date (#5491, #5485) had dd-gitlab/finished: success with no issues.

How to test the change?

Once merged, re-trigger CI on #5515 and #5111 (e.g. push an empty commit). The GitLab pipeline should create successfully and dd-gitlab/finished should resolve.

The `microbenchmarks-pr-comment` job declares `needs: [microbenchmarks]`,
but `microbenchmarks` uses `rules:` with `changes:` filters that exclude
it from pipelines where only documentation files change. GitLab rejects
the entire pipeline with a 400 error when a needed job doesn't exist.

Adding `optional: true` lets GitLab skip the dependency gracefully when
the microbenchmarks job is filtered out, instead of failing pipeline
creation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@p-datadog p-datadog requested a review from a team as a code owner March 27, 2026 17:29
@github-actions

github-actions bot commented Mar 27, 2026

Thank you for updating Change log entry section 👏

Visited at: 2026-03-27 17:55:45 UTC

@datadog-prod-us1-3

datadog-prod-us1-3 bot commented Mar 27, 2026

✅ Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 100.00%
Overall Coverage: 95.15% (-0.01%)

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: af6f83d | Docs | Datadog PR Page | Was this helpful? React with 👍/👎 or give us feedback!

@p-datadog p-datadog changed the title from "Fix GitLab pipeline creation failure for docs-only PRs" to "Fix GitLab pipeline creation failure caused by microbenchmarks-pr-comment needs" Mar 27, 2026
@pr-commenter

pr-commenter bot commented Mar 27, 2026

Benchmarks

Benchmark execution time: 2026-03-27 17:55:01

Comparing candidate commit af6f83d in PR branch fix/microbenchmarks-optional-needs with baseline commit 9c78e44 in branch master.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 46 metrics, 0 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
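
The significance rule described above can be written compactly. The following is a sketch in my own notation, not symbols taken from the bot's output: with candidate and baseline sample means x̄_c and x̄_b,

```latex
\hat{\delta} = \frac{\bar{x}_c - \bar{x}_b}{\bar{x}_b},
\qquad
\mathrm{CI} = \Bigl[\hat{\delta} - z\,\mathrm{SE}(\hat{\delta}),\;
               \hat{\delta} + z\,\mathrm{SE}(\hat{\delta})\Bigr]
```

and a change is flagged significant exactly when the interval lies entirely outside [-T, +T] (its lower bound is above +T, or its upper bound is below -T), where T is the configured SIGNIFICANT_IMPACT_THRESHOLD.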

@p-datadog p-datadog added the AI Generated Largely based on code generated by an AI or LLM. This label is the same across all dd-trace-* repos label Mar 27, 2026
Member

@ivoanjo ivoanjo left a comment

👍 LGTM thanks for looking into this.

I can confirm I did run into this on a PR last week and thought "hmmm... this is weird". I then tried to manually trigger the gitlab pipeline and that fixed it so I kinda went "hmmm... will keep an eye on it". So you totally beat me to it.

The fix itself seems reasonable, although like with many of these CI changes, you kinda look at it, shrug, and say, worst thing it doesn't work and we need to change it again. Kinda annoying to validate and all that ;)

@ivoanjo ivoanjo merged commit 855264c into master Mar 30, 2026
358 of 359 checks passed
@ivoanjo ivoanjo deleted the fix/microbenchmarks-optional-needs branch March 30, 2026 07:44
@github-actions github-actions bot added this to the 2.31.0 milestone Mar 30, 2026
@ivoanjo
Member

ivoanjo commented Mar 30, 2026

I decided to go ahead and merge this since broken CI is really annoying, hope that's ok ;)
