Fix rlp encoding bug when submitting partial uncomitted headers #56
wanwiset25 merged 5 commits into master from
Conversation
📝 Walkthrough
The changes update configuration fallback logic for wallet keys, add conditional initialization checks for XDC-Zero services, refactor server startup with a retry loop for bootstrap failures, standardize HTTP headers across provider services, and ensure async/await consistency for contract method calls.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 3
🧹 Nitpick comments (2)
src/processors/lite.ts (1)
105-105: Optional: extract the base64→hex conversion into a shared helper.

The expression `"0x" + Buffer.from(encodedRLP, "base64").toString("hex")` now appears in four places (`src/service/mainnet/index.ts:106`, `:394`, `src/service/subnet/index.ts:277`, and here). A single helper would prevent future drift between call sites — this exact kind of inconsistency is what caused the original bug.

♻️ Proposed helper

```typescript
// e.g. in src/utils/index.ts
export const rlpBase64ToHex = (encodedRLP: string): string =>
  "0x" + Buffer.from(encodedRLP, "base64").toString("hex");
```

Then at each call site:

```diff
- "0x" + Buffer.from(item.encodedRLP, "base64").toString("hex")
+ rlpBase64ToHex(item.encodedRLP)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/processors/lite.ts` at line 105, extract the repeated expression `"0x" + Buffer.from(encodedRLP, "base64").toString("hex")` into a single exported helper (e.g. `rlpBase64ToHex(encodedRLP: string): string`) in a shared utils module and replace all inline uses with calls to that helper; update the call site in src/processors/lite.ts (the return statement currently using `Buffer.from(item.encodedRLP, ...)`), plus the two occurrences in src/service/mainnet/index.ts and the one in src/service/subnet/index.ts, to call `rlpBase64ToHex(item.encodedRLP)` so all sites use the same implementation.

src/processors/zero.ts (1)

21-42: Consider extracting the shared init boilerplate.

`zero.ts` and `reverseZero.ts` now carry near-identical `init()` bodies (guard on `config.xdcZero.*Enabled`, fire-and-forget `zeroService.init().catch(...)`, same `queue.process` wiring, same error handling). A small base helper or a parameterized method on `BaseProcessor`/`ZeroService` would remove the duplication and make future fixes (e.g. init-awaiting) apply to both paths at once.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/processors/zero.ts` around lines 21-42, both zero.ts and reverseZero.ts duplicate the init() boilerplate (checking `config.xdcZero.isEnabled`, fire-and-forget `zeroService.init().catch(...)`, and wiring `queue.process` with identical try/catch/reset handling), so extract this into a shared helper on `BaseProcessor` (or a small utility) such as `setupPeriodicProcessor(service, enabledFlag, jobLabel, processEventFn)` that centralizes the guard, the `zeroService.init().catch(...)` behavior, and the `queue.process` handler; update zero.ts and reverseZero.ts to call that new method (referencing `init()`, `zeroService.init()`, `queue.process`, `processEvent()`, and `reset()`) so both processors reuse the same implementation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6cf2b77f-e1ab-41f8-90e0-726b1a25e51b
📒 Files selected for processing (7)
src/config.ts, src/processors/lite.ts, src/processors/reverseZero.ts, src/processors/zero.ts, src/server.ts, src/service/mainnet/index.ts, src/service/subnet/index.ts
```diff
   init() {
     this.logger.info("Initialising Reverse-XDC-Zero");
-    this.zeroService.init();
+    if (config.xdcZero.isReverseEnabled) {
+      this.zeroService.init().catch((error) => {
+        this.logger.error("Fail to init Reverse-XDC-Zero service", { message: error.message });
+      });
+    }
```
Fire-and-forget zeroService.init() creates a race with the first cron tick.
init() returns synchronously while zeroService.init() runs in the background. queue.process(...) is registered immediately, and once reset() is called the first cron execution (cadence: every 10s) can fire before zeroService.init() resolves — processEvent() will then invoke getPayloads() on still-undefined viem clients, throw, log "Fail to process reverse-xdc-zero job", and self-requeue via this.reset(). Also, on init failure the error is swallowed by .catch() and the processor keeps ticking forever against a permanently broken service.
Consider tracking init state and short-circuiting processEvent() (or delaying queue registration) until zeroService.init() has resolved successfully.
🛠️ Sketch: gate processing on successful init
```diff
-  init() {
-    this.logger.info("Initialising Reverse-XDC-Zero");
-    if (config.xdcZero.isReverseEnabled) {
-      this.zeroService.init().catch((error) => {
-        this.logger.error("Fail to init Reverse-XDC-Zero service", { message: error.message });
-      });
-    }
+  private initPromise?: Promise<void>;
+  init() {
+    this.logger.info("Initialising Reverse-XDC-Zero");
+    if (config.xdcZero.isReverseEnabled) {
+      this.initPromise = this.zeroService.init().catch((error) => {
+        this.logger.error("Failed to init Reverse-XDC-Zero service", { message: error.message });
+        throw error;
+      });
+    }
     this.queue.process(async (_, done) => {
       this.logger.info("⏰ Executing reverse-xdc-zero periodically");
       try {
+        if (!this.initPromise) { return done(null, "reverse-xdc-zero disabled"); }
+        await this.initPromise;
         done(null, await this.processEvent());
```

Minor nit: "Fail to init" reads better as "Failed to init".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/processors/reverseZero.ts` around lines 21 - 27, The init() method is
fire-and-forgetting zeroService.init(), which races with queue.process(...) and
lets processEvent() run before viem clients are ready and swallows init errors;
change init() to await zeroService.init() (or set an initializing/ready flag and
defer queue.process registration until success), ensure any error from
zeroService.init() is logged (update message from "Fail to init" to "Failed to
init") and prevent processEvent()/reset() from proceeding when the service isn't
ready (short-circuit processEvent() by checking the ready flag or disable the
queue on init failure); update references: init(), zeroService.init(),
queue.process(...), processEvent(), reset(), getPayloads(), and the logger
message string.
```typescript
const bootstrap = async (): Promise<void> => {
  try {
    await processors.reset();
    logger.info("Bootstrap complete");
  } catch (error) {
    logger.error(
      `Bootstrap failed, retrying in ${config.reBootstrapWaitingTime}ms`,
      { message: error.message }
    );
    await new Promise((r) => setTimeout(r, config.reBootstrapWaitingTime));
    return bootstrap();
  }
};
```
Retry loop has no cap or backoff escalation, and trusts a reset() that may resolve prematurely.

Two concerns:

1. Unbounded identical-interval retry. On a persistent failure (misconfigured env, dead upstream RPC, Redis unreachable, wrong contract mode) `bootstrap()` will retry forever every `config.reBootstrapWaitingTime` ms with no escalation or max attempts. Consider exponential backoff with a cap, and an attempt ceiling after which the process exits (letting the orchestrator restart/alert) rather than silently looping.

2. `reset()` may resolve before child resets complete. `Processors.reset()` uses `modes.map(async ...)` without `Promise.all` (see src/processors/index.ts:70-98), so the returned promise settles as soon as `Promise.all(p.clean())` finishes — failures inside the switch (e.g. `getRunningModes()` throwing from a contract call) are per-branch and not bubbled to this try/catch. "Bootstrap complete" can be logged while individual processor resets are still in-flight or have already failed unobserved. The retry guarantee you're adding here is only as strong as `reset()`'s awaiting — worth fixing upstream (`await Promise.all(modes.map(...))`) so this bootstrap wrapper is actually meaningful.
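The reset() concern can be reproduced in isolation: an async callback passed to `.map(...)` starts work but never surfaces completion or failure to the caller. A generic sketch, with illustrative names rather than the repo's code:

```typescript
// Bug shape: map() launches the per-mode tasks, but nothing awaits the
// returned promises, so the function resolves before any task finishes.
const resetFireAndForget = async (delaysMs: number[]): Promise<number> => {
  let done = 0;
  delaysMs.map(async (ms) => {
    await new Promise<void>((resolve) => setTimeout(resolve, ms));
    done += 1;
  });
  return done; // read before any task has completed
};

// Fix shape: Promise.all propagates both completion and rejections.
const resetAwaited = async (delaysMs: number[]): Promise<number> => {
  let done = 0;
  await Promise.all(
    delaysMs.map(async (ms) => {
      await new Promise<void>((resolve) => setTimeout(resolve, ms));
      done += 1;
    })
  );
  return done; // all tasks finished, or Promise.all rejected
};
```

The first variant is why "Bootstrap complete" can log while resets are still running; the second makes a failure in any branch reject the caller's await.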
💡 Suggested structure
```diff
-const bootstrap = async (): Promise<void> => {
-  try {
-    await processors.reset();
-    logger.info("Bootstrap complete");
-  } catch (error) {
-    logger.error(
-      `Bootstrap failed, retrying in ${config.reBootstrapWaitingTime}ms`,
-      { message: error.message }
-    );
-    await new Promise((r) => setTimeout(r, config.reBootstrapWaitingTime));
-    return bootstrap();
-  }
-};
+const MAX_BOOTSTRAP_ATTEMPTS = 10;
+const bootstrap = async (attempt = 1): Promise<void> => {
+  try {
+    await processors.reset();
+    logger.info("Bootstrap complete");
+  } catch (error) {
+    if (attempt >= MAX_BOOTSTRAP_ATTEMPTS) {
+      logger.fatal("Bootstrap failed after max attempts, exiting", { message: error.message });
+      process.exit(1);
+    }
+    const delay = Math.min(config.reBootstrapWaitingTime * 2 ** (attempt - 1), 30 * 60 * 1000);
+    logger.error(`Bootstrap failed (attempt ${attempt}), retrying in ${delay}ms`, { message: error.message });
+    await new Promise((r) => setTimeout(r, delay));
+    return bootstrap(attempt + 1);
+  }
+};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/server.ts` around lines 31 - 43, The bootstrap retry logic in bootstrap()
currently retries forever at a fixed interval (config.reBootstrapWaitingTime)
and also relies on processors.reset() which may resolve prematurely because it
doesn't await all per-mode async work; fix by (1) updating processors.reset() to
return a promise that waits for all mode cleanup tasks (use await
Promise.all(modes.map(...)) and ensure any per-branch errors are re-thrown) so
callers see failures, and (2) change bootstrap() to implement exponential
backoff with a maximum backoff cap and a max retry count (use an attempts
counter, multiply delay each retry up to a cap, and after max attempts log an
error and exit the process to allow external orchestrator restart), keeping
references to bootstrap, processors.reset, and config.reBootstrapWaitingTime for
locating changes.
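The capped exponential delay used in the suggested structure can be factored into a pure helper, which makes the retry schedule trivially testable. A sketch; the name and constants are illustrative, not from the repo:

```typescript
// Delay before the nth retry: base * 2^(n-1), clamped to capMs.
const backoffDelay = (baseMs: number, attempt: number, capMs: number): number =>
  Math.min(baseMs * 2 ** (attempt - 1), capMs);
```

With a 1s base and 30-minute cap this yields 1s, 2s, 4s, 8s, ... until the cap, after which every retry waits the cap; bootstrap() would call it with the attempt counter before sleeping.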
```diff
 app.listen(config.port, async () => {
   logger.info(`Relayer running on port ${config.port}`);
   await checkConnection();
-  await processors.init(serverAdapter).reset();
+  processors.init(serverAdapter);
+  await bootstrap();
 });
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Confirm processors.init signature and whether any p.init() can throw synchronously.
fd -t f 'index.ts' src/processors
rg -nP -C3 '\binit\s*\(' src/processors
```

Repository: XinFinOrg/XDC-Relayer
Length of output: 3514

🏁 Script executed:

```bash
#!/bin/bash
# Check full implementation of each processor's init() method
for file in src/processors/zero.ts src/processors/reverseZero.ts src/processors/lite.ts src/processors/full.ts src/processors/reverseFull.ts; do
  if [ -f "$file" ]; then
    echo "=== $file ==="
    grep -n "init()" "$file" -A 20 | head -30
  fi
done
```

Length of output: 3819

🏁 Script executed:

```bash
#!/bin/bash
# Check the full processors.init implementation and bootstrap function
echo "=== processors.init() full implementation ==="
sed -n '51,70p' src/processors/index.ts
echo -e "\n=== bootstrap() function ==="
rg -n "async.*bootstrap|function bootstrap" src/server.ts -A 20
```

Length of output: 590

🏁 Script executed:

```bash
#!/bin/bash
# Check if there's error handling in the listener callback
sed -n '40,60p' src/server.ts
```

Length of output: 616

🏁 Script executed:

```bash
#!/bin/bash
# Search for bootstrap function definition
rg -n "bootstrap" src/server.ts
rg -n "function bootstrap|const bootstrap|export.*bootstrap" src/ -t ts
```

Length of output: 230

🏁 Script executed:

```bash
#!/bin/bash
# Check the full listener callback for error handling
sed -n '43,52p' src/server.ts
```

Length of output: 254

🏁 Script executed:

```bash
#!/bin/bash
# Check if BullBoard or BullAdapter are imported and what they do
rg -n "import.*BullBoard|import.*BullAdapter" src/processors/index.ts
head -10 src/processors/index.ts
```

Length of output: 608

🏁 Script executed:

```bash
#!/bin/bash
# Get the full bootstrap function implementation
sed -n '31,44p' src/server.ts
```

Length of output: 449

🏁 Script executed:

```bash
#!/bin/bash
# Check if listener callback has a catch handler or error handler
sed -n '43,52p' src/server.ts
# Also check for any error event handlers on app.listen
rg -n "app\.on|\.on\(" src/server.ts | head -20
```

Length of output: 254
Wrap processors.init(serverAdapter) in a try/catch block or move it into bootstrap().
processors.init() is synchronous and can throw from Bull queue/adapter initialization or board creation. If it throws before bootstrap() executes, the exception escapes the try/catch in bootstrap(), leaving the HTTP server listening with the relayer unconfigured. All startup failures should be caught and handled uniformly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/server.ts` around lines 45 - 50, processors.init(serverAdapter) is
synchronous and can throw before bootstrap()'s try/catch runs, so wrap the call
in a try/catch inside the app.listen callback (or move the
processors.init(serverAdapter) call into bootstrap()) so all startup errors are
handled uniformly; on catch, log the error via logger.error with context and
perform the same shutdown/exit path bootstrap() uses (e.g., stop server and
process.exit or call the existing error handler) to ensure the HTTP server
doesn't remain listening when initialization fails.
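The failure shape (a synchronous throw inside the listen callback escaping bootstrap()'s try/catch) and the suggested wrapper can be sketched without Express. All names below are hypothetical stand-ins:

```typescript
const startupErrors: string[] = [];

// Stand-in for processors.init(serverAdapter): synchronous, can throw
// from queue/adapter construction before bootstrap() ever runs.
const initProcessors = (): void => {
  throw new Error("board creation failed");
};

// Stand-in for the app.listen callback, with the suggested try/catch wrapper.
const onListen = async (): Promise<void> => {
  try {
    initProcessors();
    // await bootstrap(); // would follow on success
  } catch (error) {
    // Without this catch, the throw escapes bootstrap()'s error handling
    // and the HTTP server keeps listening with the relayer unconfigured.
    startupErrors.push((error as Error).message);
    // real code would stop the server / process.exit(1) here
  }
};
```

Wrapping at this level gives every startup failure, synchronous or not, one uniform shutdown path.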
changed

```typescript
return "0x" + item.encodedRLP;
```

to

```typescript
return "0x" + Buffer.from(item.encodedRLP, "base64").toString("hex");
```

The bug is in lite relayer processing; this branch of code is for checkpointing partial uncommitted headers. It is rarely used, since normally headers are batch submitted and the CSC will be in committed state.
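The fix comes down to an encoding mismatch: the node returns `encodedRLP` as base64 text, while the contract call needs a 0x-prefixed hex string, so the value must be decoded and re-encoded rather than prefixed directly. A small sketch with a made-up payload (not a real header):

```typescript
// A made-up RLP payload; the real one arrives from the subnet RPC as base64 text.
const encodedRLP = Buffer.from([0xc4, 0x83, 0x01, 0x02, 0x03]).toString("base64"); // "xIMBAgM="

// Buggy form: prefixes the base64 text itself, which no hex parser accepts.
const buggy = "0x" + encodedRLP; // "0xxIMBAgM="

// Fixed form: decode the base64 to bytes, then re-encode those bytes as hex.
const fixed = "0x" + Buffer.from(encodedRLP, "base64").toString("hex"); // "0xc483010203"
```

The buggy string only surfaced on the rarely-hit partial-uncommitted path, which is why it survived while the batch-submit path (which already did the conversion) worked.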
Summary by CodeRabbit
Release Notes