fix(contrib/aws/datadog-lambda): collapse multi-line JSON before sorting log comparison#4683

Open
joeyzhao2018 wants to merge 2 commits into main from joey/fix-lambda-integration-test

Conversation

@joeyzhao2018
Contributor

What does this PR do?

  • Fixes a flaky Lambda integration test in which the log snapshot comparison fails because identical log lines appear at different positions across runs (Lambda/CloudWatch log ordering is non-deterministic).
    • The previous fix (fix(contrib/aws/datadog-lambda): sort the logs to compare #4677) sorted both sides line by line before diffing, but that fragmented multi-line pretty-printed JSON blocks: lines such as {, }, and "v": 1 were scattered by the sort, so the diff misaligned even when the logical content was identical.
    • This PR collapses each multi-line JSON block into a single line with a Perl one-liner before sorting, so each logical log entry is a single sortable unit.

…ing log comparison

The previous line-level sort fragmented multi-line JSON blocks, causing
diff to misalign identical log entries that appeared at different positions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@joeyzhao2018 joeyzhao2018 requested review from a team as code owners April 20, 2026 18:35
@joeyzhao2018 joeyzhao2018 requested a review from lym953 April 20, 2026 18:35
@github-actions github-actions bot added the apm:ecosystem contrib/* related feature requests or bugs label Apr 20, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 52285cdd5b

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread contrib/aws/datadog-lambda-go/test/integration_tests/run_integration_tests.sh Outdated
@datadog-prod-us1-3

datadog-prod-us1-3 bot commented Apr 20, 2026

Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 100.00%
Overall Coverage: 60.87% (-0.05%)

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: 94ba7a0 | Docs | Datadog PR Page | Give us feedback!

Use Perl's recursive pattern (?1) to match balanced braces at any
nesting depth, not just one level.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@pr-commenter

pr-commenter bot commented Apr 20, 2026

Benchmarks

Benchmark execution time: 2026-04-20 20:17:28

Comparing candidate commit 94ba7a0 in PR branch joey/fix-lambda-integration-test with baseline commit 21bb70d in branch main.

Found 0 performance improvements and 1 performance regression. Performance is unchanged for 205 metrics; 7 metrics are unstable.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
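The rule illustrated above can be sketched as a tiny check; this is an assumed illustration, not the benchmarking platform's code, and the helper name and threshold value are made up. The interval used in the first call is the regression reported for this PR ([+2.541%; +11.286%] against a hypothetical 1% threshold).

```shell
# Hypothetical significance check: a CI [lo, hi] over the relative
# difference of means is "significant" only if the ENTIRE interval lies
# outside the band [-thr, +thr].
is_significant() {
  awk -v lo="$1" -v hi="$2" -v thr="$3" \
    'BEGIN { exit !(lo > thr || hi < -thr) }'
}

if is_significant 2.541 11.286 1.0; then echo "significant regression"; fi
if is_significant -0.6 1.2 1.0; then echo "significant"; else echo "not significant"; fi
```

The second interval straddles 0% and overlaps the threshold band, so it is not flagged even though its upper bound exceeds +1%.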

scenario:BenchmarkParallelMetrics/count/handle-reused-25

  • 🟥 execution_time [+1.967ns; +8.735ns] or [+2.541%; +11.286%]

@codecov

codecov bot commented Apr 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 61.56%. Comparing base (21bb70d) to head (94ba7a0).

Additional details and impacted files

see 448 files with indirect coverage changes

