agents: fix http_client percentile calculation #424
santigimeno merged 1 commit into node-v24.x-nsolid-v6.x
Conversation
Walkthrough
Corrects HTTP client percentile assignments by swapping the 0.5 and 0.99 quantile mappings, and updates tests to validate p99 ≥ p50, target HTTP in TestClient, and emit HTTP traces before metrics collection.
Sequence Diagram(s): skipped — the changes are a bug fix and test updates without new multi-component control flow.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
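The invariant the updated tests check (p99 ≥ p50) can be sketched in plain JavaScript. The `quantile` helper and sample data below are hypothetical, purely to illustrate why swapping the 0.5 and 0.99 mappings was a bug; they are not the agent's actual implementation:

```javascript
// Hypothetical nearest-rank quantile over a sorted sample set.
const samples = [5, 10, 20, 40, 80];

function quantile(sorted, q) {
  const idx = Math.min(sorted.length - 1, Math.floor(q * sorted.length));
  return sorted[idx];
}

// Correct mapping: the 0.5 quantile feeds p50 and the 0.99 quantile
// feeds p99. With labels swapped, the reported "p50" would exceed
// the reported "p99", which is what the test regression guards against.
const p50 = quantile(samples, 0.5);
const p99 = quantile(samples, 0.99);
console.log(p99 >= p50); // true
```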
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/agents/test-grpc-metrics.mjs`:
- Around lines 243-245: the warm-up loop calls the async TestClient `trace` method (`child.trace`) without awaiting it, causing flaky metrics snapshots. Update the loop that currently invokes `child.trace` 10 times so each call is awaited (e.g., use `await child.trace('http')` inside the for-loop, or map the calls to `Promise.all` and await the result) so traces complete before metrics are sampled, ensuring deterministic test behavior.
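The suggested fix can be sketched as follows. The `child` object and its `trace` method mirror the TestClient named in the review comment but are assumed here, not taken from the actual test file:

```javascript
// Warm-up sketch: await each trace call so all traces have completed
// before the metrics snapshot is taken (assumes `child.trace` returns
// a Promise, as the review comment describes).
async function warmUp(child) {
  for (let i = 0; i < 10; i++) {
    await child.trace('http');
  }
  // Alternative: fire all ten calls and await the whole batch.
  // await Promise.all(Array.from({ length: 10 }, () => child.trace('http')));
}
```

Either form guarantees the traces are flushed before metrics are sampled; the sequential loop additionally preserves trace ordering.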
Force-pushed from 39c100e to be716e6 (compare)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/agents/test-grpc-metrics.mjs`:
- Around lines 199-200: the assertion comparing p99 and p50 can run on summaries with no data and fail, because `quantileValues[].value` may be undefined. Update the check that uses `dataPoint.quantileValues` to first verify that both `quantileValues[0].value` and `quantileValues[1].value` are present (or only run the comparison for the warmed-up metric name `"httpClient"`) before asserting `assert.ok(dataPoint.quantileValues[0].value >= dataPoint.quantileValues[1].value, ...)`; follow the same truthy-guarding pattern used around `validateNumber` to skip the comparison when values are falsy.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- agents/otlp/src/otlp_common.cc
- test/agents/test-grpc-metrics.mjs
🚧 Files skipped from review as they are similar to previous changes (1)
- agents/otlp/src/otlp_common.cc
Force-pushed from be716e6 to 369ce34 (compare)
p50 and p99 were flipped.
Improved `test/agents/test-grpc-metrics.mjs` to make sure we don't regress.
Fixes: #423
Summary by CodeRabbit
- Bug Fixes
- Tests