History archive hardening #5185
base: master
```cpp
@@ -80,10 +80,28 @@ struct HistoryArchiveState
    static constexpr size_t MAX_HISTORY_ARCHIVE_BUCKET_SIZE =
        1024ull * 1024ull * 1024ull * 100ull; // 100 GB

    static inline unsigned const
        HISTORY_ARCHIVE_STATE_VERSION_BEFORE_HOT_ARCHIVE = 1;
    static inline unsigned const
        HISTORY_ARCHIVE_STATE_VERSION_WITH_HOT_ARCHIVE = 2;

    // Maximum allowed size for a History Archive State (HAS) JSON
    // file or string. This applies only to the HAS JSON document itself
    // (stellar-history.json), not to bucket files or other archive
    // contents. Checked before parsing to prevent OOM from crafted JSON
    // with oversized arrays or strings. A valid HAS is typically under
    // 50KB; 10MB is extremely generous.
    static constexpr size_t MAX_HAS_FILE_SIZE = 10 * 1024 * 1024; // 10 MB

    // Upper bound on currentLedger to prevent uint32_t overflow in
    // downstream arithmetic. Callers round currentLedger up to the next
    // checkpoint boundary and then step one checkpoint further, so we add
    // up to 128 to this number. We'll never hit this in practice, so use
    // 256 for a little wiggle room so we don't overflow.
    static constexpr uint32_t MAX_CURRENT_LEDGER =
```
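The guard described in the `MAX_HAS_FILE_SIZE` comment above can be sketched as follows. This is a minimal illustration of the pattern (checking document size before JSON parsing), not stellar-core's actual API; the function name and error handling here are hypothetical.

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// Illustrative guard: reject a History Archive State (HAS) document that
// exceeds the size cap *before* it reaches a JSON parser, so a crafted
// oversized document cannot cause an OOM during parsing. Names are
// hypothetical, not stellar-core's real interface.
constexpr std::size_t MAX_HAS_FILE_SIZE = 10 * 1024 * 1024; // 10 MB

void
checkHASSize(std::string const& hasJson)
{
    if (hasJson.size() > MAX_HAS_FILE_SIZE)
    {
        throw std::runtime_error("HAS document too large to parse");
    }
}
```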
**Contributor**

how did you arrive at this number? `has.currentLedger = max` should be a valid HAS
**Contributor (Author)**

Later on in the call stack we round this value up to the next checkpoint boundary, then later move that value up another checkpoint. This would take about 81 years to hit after switching to 600 ms ledgers, so I think we're safe. At least, it's not my problem. Making a note here for the AIs to fix in 2085.
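The 81-year figure above is easy to sanity-check: at one ledger every 600 ms, closing the full `uint32_t` ledger range takes about 2^32 × 0.6 seconds.

```cpp
#include <cstdint>
#include <limits>

// Back-of-envelope check of the "about 81 years" claim: with 600 ms
// ledgers, exhausting the uint32_t ledger range takes
// 2^32 * 0.6 seconds, which works out to just under 82 years.
double
yearsToExhaustLedgerRange()
{
    double ledgers = static_cast<double>(std::numeric_limits<uint32_t>::max());
    double seconds = ledgers * 0.6;             // 600 ms per ledger
    double secondsPerYear = 365.25 * 24 * 3600; // Julian year
    return seconds / secondsPerYear;
}
```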
**Contributor**

i think this is the wrong place to solve this issue. we should just use saturating adds, and return work failure if we hit overflow in the respective works. hard-coding 256 here is pretty awkward, and is not future-proof (the implementation changes all the time)
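The reviewer's alternative can be sketched like this. It is an illustration of the suggested technique (clamp instead of wrap, then let the caller detect saturation and fail the work), not stellar-core's implementation; the function name is hypothetical.

```cpp
#include <cstdint>
#include <limits>

// Sketch of a saturating add: checkpoint-rounding arithmetic clamps at
// UINT32_MAX instead of wrapping, so a caller can compare against
// UINT32_MAX to detect overflow and return a work failure cleanly.
uint32_t
saturatingAdd(uint32_t a, uint32_t b)
{
    uint32_t const max = std::numeric_limits<uint32_t>::max();
    return (a > max - b) ? max : a + b;
}
```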
```cpp
        std::numeric_limits<uint32_t>::max() - 256;
```

SirTyson marked this conversation as resolved.

```cpp
    enum Version : unsigned
    {
        HISTORY_ARCHIVE_STATE_VERSION_BEFORE_HOT_ARCHIVE = 1,
        HISTORY_ARCHIVE_STATE_VERSION_WITH_HOT_ARCHIVE = 2,
        HISTORY_ARCHIVE_STATE_VERSION_LAST_ITEM
    };

    struct BucketHashReturnT
    {
```
are you sure this is correct in all protocols?

Yep. I just downloaded a couple of very early history files to double check, still the bucket level constant.