
feat: add throttle — distributed throttle primitive #63

Closed
freshlogic wants to merge 1 commit into main from feat/throttle

Conversation

@freshlogic
Member

Summary

Adds pettyCache.throttle(key, { ttl }, fn) — a distributed throttle primitive backed by Redis. Async/await only.

The pattern: first caller for a given key in a ttl window wins the claim and fn runs to completion; subsequent calls within the window are no-ops (return immediately without invoking fn). After the window expires, the next caller can claim again.

Errors thrown by fn propagate to the caller — useful for callers that need to know whether the work succeeded so they can NACK upstream messages, retry, etc.
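To make the error-propagation contract concrete, here is an illustrative sketch only of how a Service Bus handler might lean on it. `receiver`, `message`, and `publishRefresh` are assumed names, not part of this PR; `completeMessage`/`abandonMessage` follow the @azure/service-bus receiver API.

```javascript
// Illustrative sketch: ack the upstream message only if the throttled
// work succeeded (or was absorbed); NACK it so it redelivers otherwise.
async function handleRefreshEvent(pettyCache, receiver, message, publishRefresh) {
    try {
        await pettyCache.throttle(
            `refresh-account.${message.body.accountId}`,
            { ttl: 5 * 60 * 1000 },
            () => publishRefresh(message.body.accountId)
        );
        await receiver.completeMessage(message); // fn ran (or was absorbed) — ack
    } catch (err) {
        await receiver.abandonMessage(message); // publish failed — NACK, redeliver
    }
}
```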

Why

The first concrete use case is Stores.com's segmentation-service rollup pipeline — many upstream Service Bus events signal "refresh accountId X." We want at most one refresh-message-publish per ~5-min window per account, and we need the publishing caller to know if the publish succeeded (so they can NACK their upstream message on failure).

API

await pettyCache.throttle('refresh-account.123', { ttl: 5 * 60 * 1000 }, async () => {
    const message = serviceBusClient.createMessage({ id: '123' });
    message.scheduledEnqueueTimeUtc = new Date(Date.now() + 5 * 60 * 1000);
    await serviceBusClient.sendMessageAsync(topic, message);
});

First caller in a 5-min window: schedules the deferred message, errors propagate. Subsequent callers: no-op return.

Implementation

A single SET with NX and PX (SETNX semantics plus a built-in TTL):

  • claim succeeds (`SET ... NX PX ttl` returns OK) → `await fn()` and return
  • claim fails (key already present) → return immediately

No mutex, no setTimeout, no UUID, no retries. ~15 lines.
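A minimal sketch of that claim-or-skip logic (not the PR's actual code). It assumes `client` exposes node-redis v4's `set(key, value, options)` signature, which resolves to `'OK'` when the write happens and `null` when NX rejects it:

```javascript
// Sketch: first caller to set the key wins the window; everyone else no-ops.
async function throttle(client, key, { ttl }, fn) {
    // Atomic claim: SET key "1" NX PX <ttl>
    const claimed = await client.set(key, '1', { NX: true, PX: ttl });

    // Claim failed — another caller owns this window; absorb the call.
    if (claimed !== 'OK') return;

    // Claim succeeded — run the work; any error propagates to the caller.
    await fn();
}
```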

Test plan

  • First call invokes fn and waits for completion
  • Subsequent calls within the window are absorbed
  • After window expires, next call wins again
  • Errors thrown by fn propagate to the caller
  • Different keys are independent
  • Absorbed callers return immediately (don't wait on the winner's fn)

Relationship to PR #62 (debounce)

PR #62 adds a similar debounce primitive (timer-reset semantics, fire-and-forget). This throttle is a different shape — first-call-wins, awaitable, errors propagate. The two have different use-case fits; we may end up keeping both, or closing one. Open here so segmentation can move forward independently.

🤖 Generated with Claude Code

Coalesces calls for the same key across multiple processes via Redis so
that fn runs at most once per ttl window. The first caller in a window
wins the claim and fn runs to completion; subsequent calls are no-ops.
Errors thrown by fn propagate to the caller — useful for callers that
need to know whether the work succeeded so they can NACK upstream
messages, etc.

Implementation is a single SETNX with PX TTL: claim succeeds, fn runs;
claim fails, return immediately.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@coveralls

Coverage Report for CI Build 24945751484

Coverage decreased (-0.2%) to 99.812%

Details

  • Coverage decreased (-0.2%) from the base build.
  • Patch coverage: 2 uncovered changes across 1 file (37 of 39 lines covered, 94.87%).
  • No coverage regressions found.

Uncovered Changes

File       Changed   Covered   %
index.js   39        37        94.87%



Coverage Stats

Coverage Status
Relevant Lines: 1188
Covered Lines: 1186
Line Coverage: 99.83%
Relevant Branches: 410
Covered Branches: 409
Branch Coverage: 99.76%
Branches in Coverage %: Yes
Coverage Strength: 49.52 hits per line

💛 - Coveralls

@freshlogic
Member Author

Closing — this was just a thin sugar wrapper over pettyCache.mutex.lock. Caller can use mutex.lock directly with try/catch on the throw-on-contention path; the extra primitive doesn't earn its keep.
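For reference, a hypothetical sketch of what "use mutex.lock directly" could look like on the caller side. It assumes an awaitable `pettyCache.mutex.lock(key, options)` that throws on contention when retries are disabled — check petty-cache's actual mutex API before relying on this shape:

```javascript
// Sketch: same first-call-wins semantics, built from mutex.lock + try/catch.
async function throttledRefresh(pettyCache, accountId, publish) {
    try {
        await pettyCache.mutex.lock(`refresh-account.${accountId}`, {
            ttl: 5 * 60 * 1000,
            retry: { times: 0 } // fail fast instead of waiting for the lock
        });
    } catch (err) {
        return; // contention — another caller owns this window; no-op
    }
    await publish(); // errors still propagate so the caller can NACK
}
```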

@freshlogic freshlogic closed this Apr 26, 2026
@freshlogic freshlogic deleted the feat/throttle branch April 26, 2026 13:26