History archive hardening #5185
Conversation
Pull request overview
Hardens History Archive State (HAS) and related archive-file processing during catchup/publish flows by adding tighter validation and replacing aborting asserts with exceptions, so malformed or crafted archive inputs fail gracefully instead of crashing Stellar Core.
Changes:
- Add HAS JSON size limits and post-deserialization validation (version, bucket vector sizes, ledger bounds, hex-hash format).
- Harden deserialization of `FutureBucket` (required fields, shadow-hash cap) and `fs::hexDir` (throw instead of abort).
- Add concurrency annotations/locking for publish enqueue timing, plus comprehensive HAS format-validation tests.
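Taken together, the post-deserialization checks amount to rejecting any HAS whose version, ledger number, bucket-vector size, or hash strings look wrong before the rest of catchup touches it. A minimal sketch of that shape of validation, using illustrative stand-in types and constants (`HistoryArchiveStateSketch`, `kNumLevels`, `kMaxCurrentLedger` are placeholders, not stellar-core's real definitions):

```cpp
#include <cctype>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative stand-ins for the real stellar-core types and constants.
constexpr size_t kNumLevels = 11;                     // stands in for LiveBucketList::kNumLevels
constexpr uint32_t kMaxCurrentLedger = UINT32_MAX - 256; // stands in for MAX_CURRENT_LEDGER

struct HistoryArchiveStateSketch
{
    unsigned version{1};
    uint32_t currentLedger{0};
    std::vector<std::string> currentBuckets; // hex hashes, one per bucket level
};

// A 256-bit hash rendered as hex is exactly 64 hex digits.
static bool
isHexHash(std::string const& s)
{
    if (s.size() != 64)
        return false;
    for (char c : s)
        if (!std::isxdigit(static_cast<unsigned char>(c)))
            return false;
    return true;
}

// Throw (instead of assert-aborting) on any structurally invalid HAS.
void
validateHASAfterDeserialization(HistoryArchiveStateSketch const& has)
{
    if (has.version != 1)
        throw std::runtime_error("unexpected HAS version");
    if (has.currentLedger > kMaxCurrentLedger)
        throw std::runtime_error("currentLedger out of bounds");
    if (has.currentBuckets.size() != kNumLevels)
        throw std::runtime_error("wrong number of bucket levels");
    for (auto const& h : has.currentBuckets)
        if (!isHexHash(h))
            throw std::runtime_error("malformed bucket hash");
}
```

The key behavioral change is that every failure path throws, so callers in the catchup work chain can report failure instead of crashing the node.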
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/util/Fs.cpp | hexDir now throws on invalid hex input rather than aborting. |
| src/historywork/VerifyBucketWork.cpp | Handle synchronous verifier completion to return success/failure immediately. |
| src/history/test/HistoryArchiveFormatTests.cpp | New test suite covering malformed/crafted HAS JSON inputs and hexDir behavior. |
| src/history/HistoryManagerImpl.h | Add annotated mutex guarding enqueue-time map. |
| src/history/HistoryManagerImpl.cpp | Lock around enqueue-time map updates/reads. |
| src/history/HistoryArchive.h | Introduce HAS size and ledger upper-bound constants. |
| src/history/HistoryArchive.cpp | Enforce HAS size limits and validate HAS contents after JSON deserialization. |
| src/bucket/FutureBucket.h | Add shadow-hash count cap and required-field checks during deserialization. |
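Per the table above, `fs::hexDir` now throws on invalid hex input rather than aborting. A sketch of that throw-on-invalid behavior, assuming a simplified two-character-per-level directory layout (the helper name and exact path shape are illustrative, not copied from src/util/Fs.cpp):

```cpp
#include <cctype>
#include <stdexcept>
#include <string>

// Map a hex hash prefix to a nested directory path, validating the input
// and throwing on malformed hex instead of assert-aborting the process.
std::string
hexDirSketch(std::string const& hexStr)
{
    if (hexStr.size() < 6)
        throw std::runtime_error("hex string too short");
    for (char c : hexStr)
        if (!std::isxdigit(static_cast<unsigned char>(c)))
            throw std::runtime_error("invalid hex character");
    // e.g. "deadbeef" -> "de/ad/be"
    return hexStr.substr(0, 2) + "/" + hexStr.substr(2, 2) + "/" +
           hexStr.substr(4, 2);
}
```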
Force-pushed from 9c24cbe to 15e5d28.
```cpp
if (has.currentBuckets.size() != LiveBucketList::kNumLevels)
{
    throw std::runtime_error(
```
are you sure this is correct in all protocols?
Yep. I just downloaded a couple of very early history files to double check; it's still the same bucket level constant.
```cpp
static void
validateHASAfterDeserialization(HistoryArchiveState const& has)
{
    if (has.version != HistoryArchiveState::
```
seems a bit like a footgun - as soon as we bump the version, this validation will break. You can just create an enum and enforce that has.version is less than kLastItem
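The enum-with-sentinel pattern the reviewer is suggesting could look like the following sketch (enumerator names are illustrative; stellar-core's actual version constants may differ):

```cpp
#include <stdexcept>

// Keep known HAS versions in an enum with a one-past-the-end sentinel, so
// bumping the version only means adding an enumerator before the sentinel
// rather than updating a hard-coded equality check.
enum HASVersion : unsigned
{
    HAS_VERSION_1 = 1,
    HAS_VERSION_2 = 2,
    HAS_VERSION_LAST_ITEM // sentinel: one past the newest known version
};

void
checkHASVersion(unsigned version)
{
    if (version == 0 || version >= HAS_VERSION_LAST_ITEM)
        throw std::runtime_error("unsupported HAS version");
}
```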
```cpp
// Upper bound on currentLedger to prevent uint32_t overflow in
// downstream arithmetic.
static constexpr uint32_t MAX_CURRENT_LEDGER =
```
how did you arrive at this number? has.currentLedger = max should be a valid HAS
Later on in the call stack we round this value up to the next checkpoint boundary, then later move it up another checkpoint, i.e. has.currentLedger + 128. This would overflow if we allowed the actual uint32_t max. I subtracted 256 just for some buffer; I'll add a comment.
This would take about 81 years to hit after switching to 600 ms ledgers, so I think we're safe. At least, it's not my problem. Making a note here for the AIs to fix in 2085.
i think this is the wrong place to solve this issue. we should just use saturated adds, and return work failure if we hit overflow in respective works. hard-coding 256 here is pretty awkward, and is not future-proof (implementation changes all the time)
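The saturating-add alternative the reviewer proposes could be sketched like this, assuming the 64-ledger checkpoint frequency stellar-core uses by default (function names are illustrative):

```cpp
#include <cstdint>
#include <limits>

// Clamp the checkpoint arithmetic itself instead of capping currentLedger
// at deserialization time; callers can treat a saturated result as a work
// failure. Unsigned overflow is well-defined (wraps), so a wrapped result
// is detectable as r < a.
uint32_t
saturatedAdd(uint32_t a, uint32_t b)
{
    uint32_t r = a + b;
    return r < a ? std::numeric_limits<uint32_t>::max() : r;
}

// Round up to the next 64-ledger checkpoint boundary, saturating at max.
uint32_t
nextCheckpointLedger(uint32_t ledger)
{
    uint32_t const freq = 64;
    uint32_t rounded = saturatedAdd(ledger, freq - 1);
    return rounded == std::numeric_limits<uint32_t>::max()
               ? rounded
               : (rounded / freq) * freq;
}
```

With this shape, a HAS carrying currentLedger near uint32_t max is still accepted as syntactically valid; the overflow is caught at the point where the "+128" arithmetic actually happens.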
Force-pushed from 15e5d28 to 47d9856.
Force-pushed from 47d9856 to 3c56113.
Description
Hardens the processing of History Archive files during catchup. Basically, we do a lot more error checking to gracefully handle malformed history archive files instead of crashing. Addressed the following issues:
https://github.com/stellar/stellar-core-internal/issues/527
https://github.com/stellar/stellar-core-internal/issues/520
https://github.com/stellar/stellar-core-internal/issues/464
https://github.com/stellar/stellar-core-internal/issues/461
None of these are particularly exploitable, as they require a malicious tier 1 and can only momentarily stall nodes catching up to the network for the first time. But they're good to fix so we can stop getting bug bounty/AI reports on them.
Checklist
- clang-format v8.0.0 (via `make format` or the Visual Studio extension)