
History archive hardening #5185

Open
SirTyson wants to merge 4 commits into stellar:master from SirTyson:history-archive-hardening

Conversation

@SirTyson
Contributor

Description

Hardens the processing of History Archive files during catchup. Essentially, we do much more error checking so that malformed history archive files are handled gracefully instead of crashing the node. Addresses the following issues:

https://github.com/stellar/stellar-core-internal/issues/527
https://github.com/stellar/stellar-core-internal/issues/520
https://github.com/stellar/stellar-core-internal/issues/464
https://github.com/stellar/stellar-core-internal/issues/461

None of these are particularly exploitable, as they require a malicious tier 1 and can only momentarily stall nodes catching up to the network for the first time. But they're good to fix so we can stop getting bug bounty/AI reports on them.

Checklist

  • Reviewed the contributing document
  • Rebased on top of master (no merge commits)
  • Ran clang-format v8.0.0 (via make format or the Visual Studio extension)
  • Compiles
  • Ran all tests
  • If change impacts performance, include supporting evidence per the performance document

Contributor

Copilot AI left a comment


Pull request overview

Hardens History Archive State (HAS) and related archive-file processing during catchup/publish flows by adding tighter validation and replacing aborting asserts with exceptions, so malformed or crafted archive inputs fail gracefully instead of crashing Stellar Core.

Changes:

  • Add HAS JSON size limits and post-deserialization validation (version, bucket vector sizes, ledger bounds, hex-hash format).
  • Harden deserialization of FutureBucket (required fields, shadow-hash cap) and fs::hexDir (throw instead of abort).
  • Add concurrency annotations/locking for publish enqueue timing, plus comprehensive HAS format-validation tests.

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.

File Description
src/util/Fs.cpp hexDir now throws on invalid hex input rather than aborting.
src/historywork/VerifyBucketWork.cpp Handle synchronous verifier completion to return success/failure immediately.
src/history/test/HistoryArchiveFormatTests.cpp New test suite covering malformed/crafted HAS JSON inputs and hexDir behavior.
src/history/HistoryManagerImpl.h Add annotated mutex guarding enqueue-time map.
src/history/HistoryManagerImpl.cpp Lock around enqueue-time map updates/reads.
src/history/HistoryArchive.h Introduce HAS size and ledger upper-bound constants.
src/history/HistoryArchive.cpp Enforce HAS size limits and validate HAS contents after JSON deserialization.
src/bucket/FutureBucket.h Add shadow-hash count cap and required-field checks during deserialization.
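The fs::hexDir hardening in the table above (throw instead of abort on bad hex input) follows a common validation pattern. A minimal sketch of that pattern is below; the helper name checkedHexPrefix and its details are hypothetical, not the actual stellar-core code.

```cpp
#include <cctype>
#include <stdexcept>
#include <string>

// Hypothetical sketch: validate that a string is plausible hex before
// using it to build a directory path, throwing (recoverable) instead of
// asserting (process abort) on crafted input.
std::string
checkedHexPrefix(std::string const& hex)
{
    if (hex.size() < 2)
    {
        throw std::runtime_error("hex string too short");
    }
    for (char c : hex)
    {
        if (!std::isxdigit(static_cast<unsigned char>(c)))
        {
            throw std::runtime_error("invalid hex character");
        }
    }
    // Return the first directory level, e.g. "ab" for "abcdef...".
    return hex.substr(0, 2);
}
```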

Comment thread src/history/HistoryArchive.cpp
Comment thread src/history/HistoryArchive.h
@SirTyson force-pushed the history-archive-hardening branch from 9c24cbe to 15e5d28 on March 19, 2026 at 20:20

if (has.currentBuckets.size() != LiveBucketList::kNumLevels)
{
    throw std::runtime_error(
Contributor


are you sure this is correct in all protocols?

Contributor Author


Yep. I just downloaded a couple of very early history files to double-check; they still use the same bucket level constant.

Comment thread src/history/HistoryArchive.cpp Outdated
static void
validateHASAfterDeserialization(HistoryArchiveState const& has)
{
    if (has.version != HistoryArchiveState::
Contributor


seems a bit like a footgun - as soon as we bump the version, this validation will break. You can just create an enum and enforce that has.version is less than kLastItem
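The enum-based check suggested here might look roughly like the sketch below; the names HASVersion, HAS_VERSION_LAST, and checkHASVersion are illustrative, not existing stellar-core identifiers.

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical sketch of the reviewer's suggestion: enumerate known HAS
// versions and validate against the sentinel last member, so a future
// version bump only extends the enum instead of breaking a hard-coded
// equality check.
enum HASVersion : uint32_t
{
    HAS_VERSION_INITIAL = 1,
    // New versions go here.
    HAS_VERSION_LAST // one past the newest supported version
};

void
checkHASVersion(uint32_t version)
{
    if (version < HAS_VERSION_INITIAL || version >= HAS_VERSION_LAST)
    {
        throw std::invalid_argument("unsupported HAS version");
    }
}
```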


// Upper bound on currentLedger to prevent uint32_t overflow in
// downstream arithmetic.
static constexpr uint32_t MAX_CURRENT_LEDGER =
Contributor


how did you arrive at this number? has.currentLedger = max should be a valid HAS

Contributor Author


Later in the call stack we round this value up to the next checkpoint boundary, then later move it up another checkpoint, i.e. has.currentLedger + 128. This would overflow if we allowed the actual uint32_t max. I subtracted 256 just for some buffer; I'll add a comment.

This would take about 81 years to hit after switching to 600 ms ledgers, so I think we're safe. At least, it's not my problem. Making a note here for the AIs to fix in 2085.

Contributor


I think this is the wrong place to solve this issue. We should just use saturated adds and return work failure if we hit overflow in the respective works. Hard-coding 256 here is pretty awkward and not future-proof (the implementation changes all the time).
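A saturating add along the lines the reviewer suggests could be sketched as follows; saturatingAdd is a hypothetical helper, not stellar-core API.

```cpp
#include <cstdint>
#include <limits>

// Hypothetical helper: add two uint32_t values, clamping at the maximum
// instead of wrapping. Downstream checkpoint arithmetic could then detect
// saturation and fail the work, rather than capping currentLedger up front.
inline uint32_t
saturatingAdd(uint32_t a, uint32_t b)
{
    uint32_t const max = std::numeric_limits<uint32_t>::max();
    return (b > max - a) ? max : a + b;
}
```

With this, a HAS carrying currentLedger == UINT32_MAX stays valid: rounding it up another checkpoint saturates at the max, and the work can report failure instead of silently wrapping.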

@SirTyson force-pushed the history-archive-hardening branch from 15e5d28 to 47d9856 on April 17, 2026 at 23:52
@SirTyson force-pushed the history-archive-hardening branch from 47d9856 to 3c56113 on April 17, 2026 at 23:53