
eth/protocols/wit, consensus/bor: WIT2 — BP-signed witness announcements with transitive relay and pre-import serving#2208

Open
lucca30 wants to merge 3 commits into develop from lmartins/wit2-signed-announce

Conversation

@lucca30 (Contributor) commented Apr 30, 2026

Summary

Adds WIT2 (witness protocol version 3): block producers sign a commitment over each witness, peers verify the signature and relay the announce at network-RTT speed without executing the block, and any peer that has fetched the body can serve it pre-import from an in-memory cache. The slow part of witness propagation — re-execution before relay — is removed from the critical path. Mixed mesh with WIT1 nodes is tolerated; no flag-day rollout required.

Devnet result (4 scenarios, post-fork-only window, hop-chain topology with +300 ms per-hop import knob):

  • Stateless validator at hop 2 — milestone-vote lag p95 425 ms → 1.0 ms (−99.8%).
  • Stateless validator at hop 3 — milestone-vote lag p95 719 ms → 260 ms (−64%).
  • Mixed-version meshes (only stateless WIT2; only one BP WIT2): zero errors, no peer drops, full sample counts. Backward compatible.

What we're solving

Today on Polygon mainnet, witness propagation through a stateless validator that is multiple hops away from a block producer accumulates a per-hop ~500 ms execution gate: each intermediate node must finish executing the block before it will relay the witness downstream. This serialises along the path and shows up at the receiver as milestone-voting latency — slow milestone votes on a fraction of blocks at multi-hop stateless validators. Adding more peers does not help; the delay is a chain of dependencies that scales as hop count × execution time.

The deliverable is to detach announce from execute so witness availability propagates at gossip speed, while keeping the same byte-correctness guarantee (hash check at the requester, with on-chain blame) and the same content-correctness guarantee (state-root, with BP blame).

How the code achieves it

1. BP-signed witness commitment

The producer needs to commit to which witness bytes are correct without paying ~88 ms of single-thread keccak on the announce path (otherwise we re-introduce the same gate we're trying to remove, just on a different node). See Signing-scheme evaluation below — short version: chunked-parallel keccak at 1 MiB chunks beats the next-best viable candidate by a clear margin and keeps the WIT1 wire format intact.

  • core/stateless/witness_commit.go::WitnessCommitHash(bytes) = keccak256(concat(per-1MiB-chunk-keccak)). Each 1 MiB chunk is hashed in parallel; final aggregate is one extra keccak over <1 KiB of chunk hashes. ~13.5 ms wall-clock for 50 MiB witnesses on 8 cores vs ~88 ms single-shot keccak — 6.5× speedup, no wire-format change. Producer and verifier agree on the chunk size as a protocol constant.
  • Producer signs the commitment via consensus/bor.SignBytes reusing the engine's SignerFn, with a dedicated mimetype application/x-bor-wit2-announce and a domain-separated digest tag — replay-resistant at both the digest and signer-call levels.
  • Operator note: validators running Clef must whitelist the new mimetype, otherwise the producer falls back to unsigned WIT1 announces.
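
As a concrete sketch, the chunked-parallel commitment can be modeled as below. This is an illustration inferred from the description above, not the PR's actual `core/stateless/witness_commit.go`: `crypto/sha256` stands in for keccak256 so the snippet stays stdlib-only, and `witnessCommitHash` is a hypothetical name.

```go
// Sketch of a chunked-parallel commitment: hash each 1 MiB chunk on its own
// goroutine, then take one final hash over the concatenated chunk hashes.
// crypto/sha256 is a stand-in for keccak256 (illustrative only).
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

const commitChunkSize = 1 << 20 // 1 MiB, the protocol constant shared by producer and verifier

func witnessCommitHash(data []byte) [32]byte {
	n := (len(data) + commitChunkSize - 1) / commitChunkSize
	if n == 0 {
		return sha256.Sum256(nil) // empty witness: hash of empty input
	}
	chunkHashes := make([][32]byte, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) { // each chunk hashed concurrently
			defer wg.Done()
			start := i * commitChunkSize
			end := start + commitChunkSize
			if end > len(data) {
				end = len(data)
			}
			chunkHashes[i] = sha256.Sum256(data[start:end])
		}(i)
	}
	wg.Wait()
	// Aggregate step: one extra hash over n*32 bytes (under 1 KiB of chunk
	// hashes even for a 50 MiB witness).
	agg := make([]byte, 0, n*32)
	for _, h := range chunkHashes {
		agg = append(agg, h[:]...)
	}
	return sha256.Sum256(agg)
}

func main() {
	h := witnessCommitHash(make([]byte, 3*commitChunkSize+123)) // spans 4 chunks
	fmt.Printf("commit: %x\n", h[:8])
}
```

The aggregate-of-chunk-hashes shape is why the wire format is untouched: peers still see a single 32-byte commitment; only the function mapping bytes to that commitment changes.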

2. Verify-and-relay without execution

  • New protocol version WIT2 = 3 (eth/protocols/wit/protocol.go), new message SignedNewWitnessHashesMsg = 0x06 carrying up to 64 announcements per packet.
  • eth/handler_wit2.go::handleSignedWitnessAnnouncements does ecrecover against the scheduled producer for the announced block; on success the announce is cached and immediately relayed to peers that have not seen this hash. No state execution is touched.
  • Header-not-yet-local case is handled as a deferral (no strike, retry on the next packet for the same hash) so the block-cosend race does not punish honest relayers. Strikes (rate-limited disconnect at 5/min) are reserved for confirmed misbehaviour: bad signature, signer ≠ scheduled producer with a known header.
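
The deferral-vs-strike logic above can be distilled into a small decision function. This is a hypothetical condensation of the described behaviour; the names and the boolean-parameter shape are illustrative, not the PR's actual API.

```go
// Decision sketch for an incoming signed announce: verified announces relay,
// a locally-missing header defers (no strike), confirmed misbehaviour strikes.
package main

import "fmt"

type announceAction int

const (
	actionRelay  announceAction = iota // verified: cache and relay immediately
	actionDefer                        // header not local yet: retry on the next packet, no strike
	actionStrike                       // confirmed misbehaviour: counts toward the 5/min disconnect
)

// classifyAnnounce mirrors the described flow. sigValid means the signature
// recovered cleanly; signerIsProducer means the recovered signer matches the
// scheduled producer for the announced block.
func classifyAnnounce(sigValid, haveHeader, signerIsProducer bool) announceAction {
	switch {
	case !sigValid:
		return actionStrike // a bad signature is misbehaviour regardless of header state
	case !haveHeader:
		return actionDefer // block-cosend race: don't punish honest relayers
	case !signerIsProducer:
		return actionStrike // header known, signer is not the scheduled producer
	default:
		return actionRelay
	}
}

func main() {
	// Announce arrives before the block itself: defer, no strike.
	fmt.Println(classifyAnnounce(true, false, false) == actionDefer)
}
```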

3. Pre-import serving cache

  • pendingWitnessBodies (capacity 10) in the WIT2 handler is fed from the paged-fetch path the moment byte-correctness verification against the BP-signed WitnessHash passes — i.e. before chain write. handleGetWitness consults this cache before chain storage, so a peer that just received the body can serve it to a downstream stateless node before it has finished executing.
  • Entries are gated on the BP-signed WitnessHash being on file — relayers never cache unverified bytes, and WIT1 fallback paths skip the cache entirely (no path to mix unsigned and signed bytes).
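
A bounded-FIFO sketch of the `pendingWitnessBodies` idea: a body enters the cache only when a BP-signed WitnessHash is already on file and the body's commitment matches it. The type names, the toy commitment function, and the 8-byte hash are illustrative stand-ins, not the PR's actual shapes.

```go
// Pre-import serving cache sketch: verified bodies only, small FIFO capacity.
package main

import "fmt"

type hash [8]byte // stand-in for a 32-byte hash

type pendingBodies struct {
	capacity int
	order    []hash          // FIFO eviction order
	bodies   map[hash][]byte // blockHash -> verified witness bytes
	signed   map[hash]hash   // blockHash -> BP-signed WitnessHash on file
	commit   func([]byte) hash
}

func newPendingBodies(capacity int, commit func([]byte) hash) *pendingBodies {
	return &pendingBodies{
		capacity: capacity,
		bodies:   make(map[hash][]byte),
		signed:   make(map[hash]hash),
		commit:   commit,
	}
}

// put caches a fetched body only if its commitment matches the signed hash
// on file; unverified bytes never enter the cache.
func (p *pendingBodies) put(block hash, body []byte) bool {
	want, ok := p.signed[block]
	if !ok || p.commit(body) != want {
		return false
	}
	if _, dup := p.bodies[block]; !dup {
		if len(p.order) >= p.capacity { // evict the oldest entry
			oldest := p.order[0]
			p.order = p.order[1:]
			delete(p.bodies, oldest)
		}
		p.order = append(p.order, block)
	}
	p.bodies[block] = body
	return true
}

// get is what a handleGetWitness-style server consults before chain storage.
func (p *pendingBodies) get(block hash) ([]byte, bool) {
	b, ok := p.bodies[block]
	return b, ok
}

func main() {
	commit := func(b []byte) hash { // toy commitment for the demo
		var h hash
		for i, x := range b {
			h[i%8] ^= x + byte(i)
		}
		return h
	}
	p := newPendingBodies(10, commit)
	body := []byte("witness-bytes")
	block := hash{1}
	fmt.Println(p.put(block, body)) // false: no signed WitnessHash on file yet
	p.signed[block] = commit(body)  // announce verified, signed hash recorded
	fmt.Println(p.put(block, body)) // true: commitment matches, cached pre-import
}
```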

4. Blame model preserved

  • Byte-correctness: requester verifies the body against the BP's signed WitnessHash; failure attaches to the server that returned the bytes.
  • Content-correctness (state-root): same as today — failure attaches to the BP that signed.
  • Conflicting WitnessHash for the same BlockHash is rejected via signedWitnessCache.putIfNewer, so a peer cannot equivocate witnesses across announcements.
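
The equivocation guard can be sketched as a first-hash-wins map. This shows only the conflict-rejection property; the PR's actual `putIfNewer` may carry replacement rules (e.g. newer-signature handling) that this toy shape omits.

```go
// Equivocation guard sketch: one WitnessHash per BlockHash, conflicts rejected.
package main

import "fmt"

type signedWitnessCache struct {
	byBlock map[string]string // blockHash -> witnessHash (hex strings for brevity)
}

// putIfNewer accepts an unseen block or a re-announce of the same pairing;
// a different WitnessHash for a known block is an equivocation attempt.
func (c *signedWitnessCache) putIfNewer(blockHash, witnessHash string) bool {
	if existing, ok := c.byBlock[blockHash]; ok {
		return existing == witnessHash
	}
	c.byBlock[blockHash] = witnessHash
	return true
}

func main() {
	c := &signedWitnessCache{byBlock: make(map[string]string)}
	fmt.Println(c.putIfNewer("block-1", "wit-a")) // true: first pairing recorded
	fmt.Println(c.putIfNewer("block-1", "wit-a")) // true: same pairing re-announced
	fmt.Println(c.putIfNewer("block-1", "wit-b")) // false: conflicting hash rejected
}
```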

5. Rate-limits & DoS shape

  • Per-(blockHash, peer) relay rate-limit: 200 ms.
  • Announcement TTL: 30 s.
  • Per-peer token bucket: burst 256, refill 64/s.
  • Strike disconnect at 5 invalid signed announces / minute.

6. Compatibility

  • WIT1 peers continue using NewWitnessHashes. Mixed WIT1/WIT2 mesh is tolerated: WIT2 nodes downgrade to WIT1 wire when peering with WIT1 peers (relay handler skips peers with Version() < wit.WIT2).
  • New WitnessHash field on WitnessMetadataResponse is set by WIT2 servers and ignored by WIT1 readers — wire forward-compatible.
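
The mixed-mesh downgrade reduces to a version check at relay time: signed announces go only to peers that negotiated WIT2, and older peers keep receiving the WIT1 message. A minimal sketch; the peer type and function names are illustrative, not the PR's exact types.

```go
// Relay-side capability split: Version() >= WIT2 gets the signed message,
// everyone else stays on the legacy NewWitnessHashes path.
package main

import "fmt"

const wit1, wit2 = 2, 3 // witness protocol versions (WIT2 = 3 per the PR)

type peer struct {
	id      string
	version int
}

// splitByCapability partitions peers the way the relay handler does.
func splitByCapability(peers []peer) (signed, legacy []string) {
	for _, p := range peers {
		if p.version >= wit2 {
			signed = append(signed, p.id) // SignedNewWitnessHashesMsg
		} else {
			legacy = append(legacy, p.id) // NewWitnessHashes (WIT1 wire)
		}
	}
	return signed, legacy
}

func main() {
	s, l := splitByCapability([]peer{{"a", wit2}, {"b", wit1}, {"c", wit2}})
	fmt.Println(s, l) // [a c] [b]
}
```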

Signing-scheme evaluation

Picking the right commitment function for the announce signature is load-bearing for the whole PR: too slow on the producer and we just move the per-hop gate from "execute the block" to "hash the witness"; too weak and we lose the byte-blame property that lets a downstream node disconnect a peer that returned tampered bytes. Four candidates were evaluated end-to-end on synthetic 1–50 MiB witnesses (Apple M4 Pro, Go 1.26.2, go test -benchtime=3s -count=3, median of three).

Candidates

| Candidate | Mechanism | Wire change |
| --- | --- | --- |
| A — current baseline | keccak256(canonical_RLP(witness)), single-thread | none |
| B — chunked-parallel keccak | keccak(chunk0_hash ‖ … ‖ chunkN_hash), chunks hashed concurrently | none |
| C — per-node Merkle | hash every state node, sort, build Merkle tree, sign root | none |
| D — intrinsic (drop separate hash) | sign nothing extra; rely on header.StateRoot to detect bad bytes | shrinks announce, bumps to WIT3 |

Result at 50 MiB — verifier wall-clock (best parallel config)

| Candidate | Best wall-clock | Speedup vs A | Producer cost | Allocs/op | Notes |
| --- | --- | --- | --- | --- | --- |
| A | 88.4 ms (1 thread) | 1.0× | 88 ms | 1 | scales 1× even on 12-core hardware |
| B (1 MiB chunks, 8 cores) | 13.5 ms | 6.5× | 13.5 ms | 16 | wire identical, only producer's signing input changes |
| B (15 MiB pages, 4 cores) | 26.7 ms | 3.3× | 27 ms | 16 | one chunk per wire page; less internal parallelism |
| C (per-node Merkle, 4 cores) | 122 ms | 0.7× (worse) | ~50 ms (precomputed hashes) | 614 k | Merkle build over 204 k node hashes overwhelms the parallel keccak win |
| D (intrinsic, 4 cores) | 44 ms | 2.0× | 0 ms | 410 k | rejected — see below |

Why D was rejected post-bench

D had the most attractive numbers (zero producer cost, 2× verifier speedup, no signature on the announce path) — but a peer can serve a truncated witness whose included nodes all hash consistently up to the BP-signed header.StateRoot. Branch nodes embed child references as 32-byte hashes inside their own bytes, so dropping a subtree leaves the parent branch nodes' hashes unchanged. The intrinsic walker has no way to distinguish "this hash-reference belongs to a path that was never touched and is intentionally absent" from "this hash-reference belongs to a path that was touched and was adversarially omitted" — only attempting execution could tell them apart. That destroys pre-execute byte-blame, which is the whole reason WIT2 introduces a content commitment in the first place. A/B/C all preserve byte-blame because they sign over content: truncation changes the commitment, the signature no longer matches, and the peer is dropped pre-execute.

Why B at 1 MiB chunks won

A chunk-size sweep at 50 MiB / 8 cores:

| chunk size | wall-clock | aggregate throughput | speedup vs A |
| --- | --- | --- | --- |
| 512 KiB | 13.5 ms | 3.9 GB/s | 6.5× |
| 1 MiB | 13.8 ms | 3.85 GB/s | 6.4× |
| 2 MiB | 15.9 ms | 3.3 GB/s | 5.6× |
| 4 MiB | 17.1 ms | 3.1 GB/s | 5.2× |
| 15 MiB (= one wire page) | 27.7 ms | 1.9 GB/s | 3.2× |

512 KiB shaves a few tenths of a ms over 1 MiB at the cost of doubling the chunk count and the per-chunk overhead — 1 MiB is the knee of the curve. Below 512 KiB, per-chunk setup starts to dominate. The ~4 GB/s ceiling is the M4 Pro's aggregate keccak throughput across 8 P-cores; further parallelism doesn't help with the current keccak primitive.

Verifier-side scaling — B beats A non-trivially only ≥ 30 MiB

| witness size | A (single-thread) | B (1 MiB chunks, 8 cores) |
| --- | --- | --- |
| 1 MiB | 1.8 ms | 1.8 ms (one chunk, no parallelism) |
| 5 MiB | 8.9 ms | 4.0 ms (~2.2×) |
| 15 MiB | 26.6 ms | 6.2 ms (~4.3×) |
| 30 MiB | 53.2 ms | 9.1 ms (~5.8×) |
| 50 MiB | 88.4 ms | 13.5 ms (~6.5×) |

For the small witnesses Polygon emits today (typically 1–10 MiB) B is comparable to A; for the large witnesses we already see at the upper tail (30–50 MiB) B is the difference between the producer/verifier paying a ~90 ms gate vs ~14 ms. The fix is most impactful exactly where the problem is worst.

Why not C

C is dominated by every other viable candidate on these numbers: slower verifier than A (122 ms vs 88 ms), 91 MiB / 614 k allocations per verify at 50 MiB, no wire saving. C only becomes interesting if a future design needs sub-witness proofs (proving a specific node belongs to the committed set without sending the full body) — that's not on the roadmap, so C is a no-vote here.

Sensitivity caveats

  • Synthetic 256-byte avg node size; real mainnet shape may shift C/D's relative position but not B vs A — B operates on contiguous bytes regardless of node distribution.
  • Producer cost for B reuses the same parallel-keccak; not amortized into anything else, so it's a real wall-clock cost on the announce path. 14 ms is well under the per-hop savings (~500 ms) so the change is a net win even at 50 MiB.
  • A faster hash primitive (BLAKE3 / KangarooTwelve) would push verifier toward ~5 ms but adds a non-Ethereum dependency; out of scope, can be re-evaluated separately if 14 ms is still too slow.

Full bench artifact (raw numbers, reproduction commands, allocation breakdown): agent-zero/investigations/witness-propagation/witness-commit-bench.md.

Local devnet validation

A 9-node hop-chain devnet on kurtosis-pos: 4 BPs full-mesh, two relay full-nodes (F1/F2) carrying a +300 ms per-hop import-delay knob to amplify the gate without heavy tx loads, and three stateless validators at hop distances 1 / 2 / 3 from the closest BP (S1 ↔ BP1, S2 ↔ F1, S3 ↔ F2). Topology was enforced post-launch via admin_removePeer after every node imported past Giugliano (block 128 + 72-block settle), so the measurement window is post-fork and post-prune only — pre-fork blocks (different code path) are excluded.

Four scenarios, ~30 measured blocks each:

| # | Image map | Headline result |
| --- | --- | --- |
| 1 | All 9 = bor:develop (control) | Reproduces the bug. S2 milestone-lag p95 425 ms / max 482 ms; S3 p95 719 ms / max 898 ms. |
| 2 | All 9 = bor:wit2 | S2 p95 1.0 ms (−99.8%); S3 p95 260 ms (−64%). 700–1000 RX_SIGNED_ANNOUNCE per node confirms the protocol active end-to-end. |
| 3 | S1/S2/S3 = bor:wit2, rest = bor:develop | Same lag as develop (no relay path for signed announces because intermediate hops are WIT1), but zero errors, no peer drops, full sample counts; WIT2 stateless nodes correctly downgrade to WIT1 wire and serve develop peers via the chain path. |
| 4 | BP1 = bor:wit2, rest = bor:develop | Same as scenario 3 from the other direction. BP1 served 195 witnesses on the WIT1 chain path; develop validators kept whitelisting milestones throughout. |

F2 import-lag (the relay just before S3) shows the mechanism: median drops 805 → 305 ms in scenario 2 — one full per-hop inject overlapped with WIT2 announcement-driven pre-fetch, exactly what the design predicts.

S3's residual p95 of 260 ms in scenario 2 is the single +300 ms inject on F2 still in the critical path: WIT2 lets the F1 hop overlap, but F2 still has to receive and execute the block before serving S3. Without the artificial knob (i.e., on mainnet), the natural per-hop gate is ~50–100 ms and this residual shrinks proportionally.

Full report (per-scenario logs, lag tables, errors/warnings, peer-count snapshots, prune timestamps, image map): agent-zero/investigations/witness-propagation/devnet-validation-2026-04-30b.md.

Backward compatibility — explicit checks

| Property | Check | Result |
| --- | --- | --- |
| WIT2 binary in develop mesh | scenario 4 (BP1 only) and scenario 3 (S* only) | full sample counts, zero ERRORs, no peer isolation |
| WIT1 fallback path | pendingWitnessBodies skipped when no signed WitnessHash on file | gated check in eth/handler_wit2.go::resolveWitnessBytes |
| Mixed-version metadata | new WitnessHash field on WitnessMetadataResponse ignored by WIT1 readers | served-chain path serves both equally |
| Replay across forks/networks | mimetype + domain tag in consensus/bor.SignBytes | covered by consensus/bor/signbytes_test.go |

Test plan

  • Unit tests: core/stateless/witness_commit_test.go, witness_commit_bench_test.go, consensus/bor/signbytes_test.go, eth/handler_wit2_test.go, eth/handler_wit_test.go, eth/peerset_test.go, eth/protocols/wit/protocol_wit2_test.go, eth/fetcher/witness_manager_wit2_test.go.
  • Local devnet validation, 4 scenarios, post-fork-only measurement window — see report above.
  • Reviewers: confirm the chunk-size choice (1 MiB) for the parallel keccak — small enough to be useful on smaller witnesses, large enough to amortise goroutine overhead.
  • Reviewers: confirm the cache capacity (10) for pendingWitnessBodies. We don't expect more than a few in-flight unique witnesses at a time, but worth a second opinion under burst conditions.
  • Operator readiness: comms to validators running Clef about the new mimetype application/x-bor-wit2-announce.

Diffguard / quality-gate notes

  • 3 unused symbols in eth/handler_wit2.go (errInvalidSigner, contextBackground, wit2SpanLookupMissMeter) — left over from earlier iterations; worth removing before merge.
  • eth/handler_wit2.go is 504 lines (4 over the 500 threshold). Up to reviewers whether to split.
  • WitnessCommitHash cognitive complexity is 18 vs the 10 threshold — driven by the parallel-keccak fan-out with bounded goroutines; not naturally simplifiable below ~12 without losing the parallelism. Open to suggestions.

…ncements with transitive relay and pre-import serving

Adds WIT2 (protocol version 3): block producers sign a chunked-parallel
commitment over each witness, peers verify the signature and relay the
announcement at network-RTT speed without execution, and any peer holding
the body can serve it pre-import from an in-memory cache. Byte-correctness
is verified by requesters against the BP-signed WitnessHash, attaching
tampering blame to the server; content-correctness (state-root) failures
attach to the BP. Removes the per-hop ~500 ms execution gate that today
serialises witness propagation through stateless validators.

Witness commitment uses 1 MiB chunked-parallel keccak (keccak256 of the
concatenation of per-chunk hashes), measured at ~13.5 ms wall-clock for
50 MiB witnesses on 8 cores vs ~88 ms single-shot. Wire format and
signature shape are unchanged from a single-keccak commitment; only the
function mapping bytes to the 32-byte commitment changes.

Producer-side signing reuses the engine SignerFn via consensus/bor.SignBytes
with a dedicated mimetype (application/x-bor-wit2-announce) and a
domain-separated digest tag, replay-resistant at both the digest and
signer-call levels. Receivers verify ecrecover against the scheduled
producer for the announced block; announces for blocks whose header is
not yet locally available are deferred (no strike) so the block-cosend
race does not punish honest relayers.

Pre-import serving cache (capacity 10) is fed from the paged-fetch path
the moment byte-correctness check passes, before chain write. Cache
entries are gated on a BP-signed WitnessHash being on file — relayers
never cache unverified bytes, and WIT1 fallback paths skip the cache
entirely. handleGetWitness consults the cache before chain storage.

Wire: new protocol version WIT2 = 3, new message
SignedNewWitnessHashesMsg = 0x06 with up to 64 announcements per packet.
WitnessMetadataResponse extended with WitnessHash. WIT1 peers continue
using NewWitnessHashes; mixed mesh tolerated.

Rate-limits: 200 ms per-(blockHash, peer) relay rate-limit, 30 s announce
TTL, per-peer token bucket (burst 256, refill 64/s), strike disconnect
at 5 invalid signed announces per minute. Conflicting WitnessHash for
the same BlockHash is rejected via signedWitnessCache.putIfNewer.

Operator note: validators running Clef as their signer must whitelist
the mimetype application/x-bor-wit2-announce; without it the producer
falls back to unsigned WIT1 announces.

@claude claude Bot left a comment

Claude Code Review

This repository is configured for manual code reviews. Comment @claude review to trigger a review and subscribe this PR to future pushes, or @claude review once for a one-time review.

@claude

claude Bot commented Apr 30, 2026

test

@claude

claude Bot commented Apr 30, 2026

Code Review

3 issues found. Checked for bugs and CLAUDE.md compliance.


1. Performance: redundant witness encoding

File: eth/fetcher/witness_manager.go L670-L680

verifyAgainstSignedHash (line 671) encodes the witness via encodedWitnessHash and computes WitnessCommitHash, then cacheVerifiedWitnessForServing (line 679) repeats the exact same EncodeRLP + WitnessCommitHash. Neither call reuses the other's result.

On a 50 MiB witness this adds ~100–300 ms of redundant CPU work per verified fetch — meaningful given WIT2's goal of eliminating per-hop latency.

Suggested fix: Have verifyAgainstSignedHash return the encoded bytes and hash on success, then pass them directly to parentCacheWitnessForServing instead of re-encoding.


2. Performance: unconditional encode+hash before signed-announcement check

File: eth/handler_wit.go L95-L119

EncodeRLP (line 96–97) and WitnessCommitHash (line 101) run before checking whether a signed announcement exists (line 102). On the WIT1 fallback path (default case, lines 112–118), the encoded bytes and hash are discarded.

Every witness broadcast — including from WIT1 peers — pays the full encode+hash cost (~150–450 ms on 50 MiB witnesses) even when the result is never used.

Suggested fix: Check signedWitnesses.get(hash) first. Only encode+hash when hasSigned is true. This mirrors the pattern already used in cacheVerifiedWitnessForServing in witness_manager.go (lines 697–701), which correctly checks before encoding.


3. Bug: peer dropped on local EncodeRLP failure

File: eth/fetcher/witness_manager.go L725-L730

When encodedWitnessHash fails (line 725–726), the non-empty peer string is passed to handleWitnessFetchFailureExt (line 728), which calls parentDropPeer(peer). An EncodeRLP failure on a successfully-decoded Witness is a local error (the peer delivered valid RLP that decoded fine), not evidence of peer misbehavior.

This is inconsistent with the pattern in cacheVerifiedWitnessForServing just below (line 704–705), which correctly logs the failure without dropping anyone.

Suggested fix: Change peer to "" on line 728:

m.handleWitnessFetchFailureExt(hash, "", fmt.Errorf("witness encode failed: %w", err), false)

@codecov

codecov Bot commented Apr 30, 2026

Codecov Report

❌ Patch coverage is 52.37366% with 311 lines in your changes missing coverage. Please review.
✅ Project coverage is 52.22%. Comparing base (1bf990d) to head (12368a3).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| eth/handler_wit2.go | 56.25% | 99 Missing and 6 partials ⚠️ |
| eth/handler_wit.go | 46.52% | 75 Missing and 2 partials ⚠️ |
| eth/handler.go | 20.68% | 23 Missing ⚠️ |
| eth/peer_mock.go | 0.00% | 20 Missing ⚠️ |
| eth/peerset.go | 26.08% | 16 Missing and 1 partial ⚠️ |
| eth/protocols/wit/peer.go | 11.76% | 15 Missing ⚠️ |
| consensus/bor/bor.go | 36.84% | 10 Missing and 2 partials ⚠️ |
| core/stateless/witness_commit.go | 78.57% | 10 Missing and 2 partials ⚠️ |
| eth/protocols/wit/handlers.go | 0.00% | 12 Missing ⚠️ |
| eth/fetcher/witness_manager.go | 86.66% | 6 Missing and 2 partials ⚠️ |

... and 3 more

❌ Your patch check has failed because the patch coverage (52.37%) is below the target coverage (90.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files

Impacted file tree graph

@@             Coverage Diff             @@
##           develop    #2208      +/-   ##
===========================================
- Coverage    52.29%   52.22%   -0.07%     
===========================================
  Files          884      886       +2     
  Lines       155571   156147     +576     
===========================================
+ Hits         81355    81548     +193     
- Misses       68989    69356     +367     
- Partials      5227     5243      +16     
| Files with missing lines | Coverage Δ |
| --- | --- |
| accounts/accounts.go | 100.00% <ø> (ø) |
| core/stateless/encoding.go | 65.15% <100.00%> (+1.65%) ⬆️ |
| eth/fetcher/block_fetcher.go | 73.80% <100.00%> (+0.06%) ⬆️ |
| eth/peer.go | 95.80% <ø> (ø) |
| eth/protocols/wit/handler.go | 26.66% <0.00%> (-0.74%) ⬇️ |
| eth/protocols/wit/protocol.go | 58.33% <83.33%> (+25.00%) ⬆️ |
| eth/protocols/wit/broadcast.go | 29.16% <0.00%> (-9.73%) ⬇️ |
| eth/fetcher/witness_manager.go | 88.23% <86.66%> (-2.64%) ⬇️ |
| consensus/bor/bor.go | 84.66% <36.84%> (-0.70%) ⬇️ |
| core/stateless/witness_commit.go | 78.57% <78.57%> (ø) |

... and 7 more

... and 21 files with indirect coverage changes



lucca30 added 2 commits April 30, 2026 17:29
- eth/handler_wit2.go: remove unused errInvalidSigner, contextBackground,
  wit2SpanLookupMissMeter, and now-unused context import
- core/stateless/witness_commit_bench_test.go: drop redundant c := c
  loop-var copies (Go 1.22+ copyloopvar)
- goimports formatting on accounts/accounts.go, witness_commit_bench_test.go,
  witness_commit_helpers_test.go, eth/fetcher/witness_manager.go,
  eth/fetcher/witness_manager_wit2_test.go, eth/handler_wit2.go,
  eth/protocols/wit/protocol.go
… drop

- eth/fetcher/witness_manager.go: verifyAgainstSignedHash now returns the
  canonically-encoded body and signed hash on success, so the pre-import
  serving cache no longer re-encodes the same witness (~14 ms saved per
  verified fetch on 50 MiB witnesses). cacheVerifiedWitnessForServing
  takes the precomputed body directly.
- eth/fetcher/witness_manager.go: local EncodeRLP failure inside
  verifyAgainstSignedHash no longer drops the peer — re-encoding bytes
  the peer already delivered as valid RLP is a local invariant violation,
  not peer misbehavior. Mirrors the pattern already used by the cache
  path.
- eth/handler_wit.go: hoist signedWitnesses.get(hash) above the EncodeRLP
  + WitnessCommitHash work in handleBroadcastWitness. WIT1 broadcasts
  (no signed announcement on file) used to pay the full encode+hash cost
  only to discard the result; now they short-circuit.
- eth/fetcher/witness_manager_wit2_test.go: rename + retarget the
  no-signed-hash regression test onto verifyAgainstSignedHash, where the
  invariant now lives.
@sonarqubecloud

Quality Gate failed

Failed conditions
5.3% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud
