[PB-5902]: Add global retry with backoff to HttpClient for rate limit handling #370
Conversation
Introduces a retryWithBackoff utility that automatically retries requests on 429 responses using the retry-after header delay. Integrates it into all HttpClient HTTP methods via a withRetry wrapper, with configurable options (maxRetries, onRetry) exposed through setGlobalRetryOptions.
CandelR left a comment
If I am not mistaken, this adds the retry as mandatory to all requests. Modify it so that it is disabled by default and can be implemented by projects that want it, as it can cause problems.
Add more tests to check different cases.
    return error.status === HTTP_STATUS_TOO_MANY_REQUESTS;
    };

    const extractRetryAfter = (error: ErrorWithStatus): number | undefined => {
Maybe we will need a retry-after limit to avoid clients waiting indefinitely. When I was checking the backend rate limits on mobile, there was a server failure and Cloudflare returned a six-month reset. Perhaps a default maximum delay should be added (which should also be configurable) to avoid this potential error.
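A capped parser could be sketched like this. The names (`extractRetryAfterMs`, `MAX_RETRY_AFTER_MS`) and the error shape are assumptions for illustration, not the PR's actual implementation:

```typescript
// Sketch only: parse the retry-after header (delay-seconds form) and clamp it.
const MAX_RETRY_AFTER_MS = 60_000; // default cap; should itself be configurable

interface ErrorWithHeaders {
  status: number;
  headers?: Record<string, string>;
}

const extractRetryAfterMs = (
  error: ErrorWithHeaders,
  maxMs: number = MAX_RETRY_AFTER_MS,
): number | undefined => {
  const raw = error.headers?.['retry-after'];
  if (raw === undefined) return undefined;
  const seconds = Number(raw);
  if (!Number.isFinite(seconds) || seconds < 0) return undefined;
  // Clamp so a bogus value (e.g. a six-month Cloudflare reset) cannot
  // stall clients for an unbounded time.
  return Math.min(seconds * 1000, maxMs);
};
```

Callers that need a different ceiling can pass their own `maxMs`, so the cap stays configurable as suggested above.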
src/shared/http/retryWithBackoff.ts (outdated)
    }
    }

    return await fn();
This will produce an additional retry: the number of retries is maxRetries + 1, if I'm not mistaken.
Mmm, nope, because the condition is attempt < opts.maxRetries, not attempt <= opts.maxRetries. With this, the loop finishes with one try left, and that's where the last attempt happens.
Outside of the loop you have added an additional call. That means that in the worst-case scenario, fn is called maxRetries + 1 times.
Although, now that I look back, I hadn't realised that the default call, which is not a retry, is included here. It seems a bit strange to me to see it implemented this way, as it gives the impression that all calls are retries.
If maxRetries is 0, it does not enter the loop and executes the call outside; if it is 1, it enters the loop, executes the first call, and if everything goes well it finishes; if the server returns a 429, the next iteration of the loop is not executed and the 'retry' happens outside.
But maybe that's just my impression and it's fine as it is. If you all agree, I won't say anything else.
You’re right, that’s something I didn’t realize until now, so I’ve changed my approach. What do you think?
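For illustration, a single-loop shape, where the initial call is attempt 0 and maxRetries bounds only the retries, might look like the sketch below. The names and error shape are assumptions, not the PR's actual code:

```typescript
interface RetryOptions {
  maxRetries: number;                                   // retries, not counting the initial call
  onRetry?: (attempt: number, delayMs: number) => void; // hook for logging/metrics
  baseDelayMs?: number;                                 // fallback backoff when no retry-after
}

// Type guard for a rate-limited error; 429 check mirrors the PR's intent.
const isRateLimited = (e: unknown): e is { status: number; retryAfterMs?: number } =>
  typeof e === 'object' && e !== null && (e as { status?: number }).status === 429;

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function retryWithBackoff<T>(fn: () => Promise<T>, opts: RetryOptions): Promise<T> {
  // Attempt 0 is the initial call; attempts 1..maxRetries are the retries,
  // so fn is called at most maxRetries + 1 times, all inside one loop.
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (!isRateLimited(error) || attempt >= opts.maxRetries) throw error;
      const delayMs =
        error.retryAfterMs ?? (opts.baseDelayMs ?? 1000) * 2 ** attempt;
      opts.onRetry?.(attempt + 1, delayMs);
      await sleep(delayMs);
    }
  }
}
```

With this shape there is no call outside the loop, so reading the code no longer suggests that every call is a retry.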
Which projects won't need it? @CandelR For me, it's better if it's enabled by default (or at least easy to enable). Maybe I'm missing something, but this retry mechanism looks good and is needed by everyone, right?
In mobile, for example, retry is already implemented. If it is added here silently by default in a minor update and someone does not notice, it could cause unwanted behaviour. The entire implementation would have to be removed if the SDK is updated.
Then we should first find out which projects will be affected, to decide whether enabled or disabled by default affects more of them. We should also not make this change silently in a patch version; maybe we should discuss it on the drive channel so everyone knows about it (and ship it in a minor version). In any case, I agree with you that we need an easy way to enable (or disable) it, maybe a new optional property on ApiSecurity, for example.
The globalRetry option is now disabled by default. See the documentation above for details 🚀
Force-pushed from 1288448 to 8ca0b74
Adds a retryWithBackoff utility that automatically retries HTTP requests when a 429 – Too Many Requests response is received.

Key points

- Respects the retry-after response header.
- Integrated into all HttpClient HTTP methods via an execute wrapper.
- Disabled by default; opt in via HttpClient.enableGlobalRetry.

Configuration examples
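The configuration examples did not survive extraction. Based on the option names mentioned in this PR (enableGlobalRetry, setGlobalRetryOptions, maxRetries, onRetry), usage presumably looks like the sketch below; the stub class stands in for the SDK so the example is self-contained, and the exact signatures are assumptions:

```typescript
// Hypothetical stub mirroring the PR's described API; real signatures may differ.
interface GlobalRetryOptions {
  maxRetries?: number;
  onRetry?: (attempt: number, delayMs: number) => void;
}

class HttpClient {
  static retryEnabled = false; // disabled by default, per the PR discussion
  static retryOptions: GlobalRetryOptions = {};

  static enableGlobalRetry(): void {
    HttpClient.retryEnabled = true;
  }

  static setGlobalRetryOptions(options: GlobalRetryOptions): void {
    HttpClient.retryOptions = { ...HttpClient.retryOptions, ...options };
  }
}

// Projects that want the behaviour opt in explicitly:
HttpClient.enableGlobalRetry();
HttpClient.setGlobalRetryOptions({
  maxRetries: 3,
  onRetry: (attempt, delayMs) =>
    console.warn(`429 received, retry #${attempt} in ${delayMs}ms`),
});
```

Keeping the opt-in as an explicit call (rather than a default) matches the review outcome: projects like mobile, which already implement retries, are unaffected until they choose to enable it.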
Behavior details
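The behavior details list was also lost in extraction. One detail worth pinning down is the delay schedule; a plausible rule (assumed here, not confirmed by the PR) is that the parsed retry-after header takes precedence, with exponential backoff as the fallback:

```typescript
// Illustrative delay schedule: retry-after wins; otherwise delay doubles per attempt.
const backoffDelayMs = (
  attempt: number,       // 0-based retry index
  retryAfterMs?: number, // parsed retry-after header, if present
  baseDelayMs = 1000,
): number => retryAfterMs ?? baseDelayMs * 2 ** attempt;
```

Under this rule the fallback delays would be 1s, 2s, 4s, ... while any server-provided retry-after value is honored directly (subject to the cap discussed above).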