WIP perf changes from Garand's branch#5238
Draft
dmkozh wants to merge 75 commits intostellar:masterfrom
Draft
Conversation
In max-sac-tps mode, programmatically set PARALLEL_LEDGER_APPLY=true, RUN_STANDALONE=false, and MODE_AUTO_STARTS_OVERLAY=false so background apply is always active. Guard against in-memory SQLite since parallelLedgerClose() requires a file-based database. Move RUN_STANDALONE=true into the else branch for other modes. Also fix config key typo in the example config (APPLY_LOAD_MAX_SAC_TPS_TARGET_CLOSE_TIME_MS → APPLY_LOAD_TARGET_CLOSE_TIME_MS) and remove RUN_STANDALONE / PARALLEL_LEDGER_APPLY from the config file since they are now set programmatically.
Add skills for running the max-sac-tps benchmark with Tracy profiling and for the autonomous TPS optimization loop. Update the Tracy analysis skill with benchmark-specific guidance (ignore TX set building zones). Enable in-memory BucketList for max-sac-tps mode by setting BUCKETLIST_DB_INDEX_PAGE_SIZE_EXPONENT=0 in CommandLine.cpp, and set APPLY_LOAD_BATCH_SAC_COUNT=1 in the example config.
The binary search uses t-statistic to ensure the necessary confidence at each step. This results in needing less samples far away from the true value and more samples around it. We still need at least 30 samples though to keep math simple. This works reasonably well, but still doesn't avoid some fundamental variance issues that we have that result in very different and statistically significant performance difference between different runs. That concern should hopefully be addressed separately. Also fix some SAC max TPS benchmark issues, the math got messed up after I've updated it from 1s granularity to 50ms.
…op setup - Narrow binary search from 1000-15000 to 7000-12000 TPS - Lower minSamples from 30 to 10 for faster t-statistic early exit - Lower APPLY_LOAD_NUM_LEDGERS from 20 to 10 - Add ralph-prompt.md for open-ralph-wiggum optimization loop - Add how-to-run.md with headless execution instructions - Update skills to reflect new benchmark parameters
Lower confidence from 0.99 to 0.95, add xTolerance=2 to avoid narrowing to a single step, reduce APPLY_LOAD_NUM_LEDGERS minimum from 30 to 10, disable account cache warming and entry cache for the benchmark. Together these reduce total benchmark time significantly. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Default CXXSTDLIB to stdc++ in Makefile.am to prevent Rust/Tracy linker failures when no explicit -stdlib= flag is passed. Update skill docs to use standardized configure: clang-20, libc++, --disable-postgres (avoids spinning up temporary PostgreSQL clusters during make check).
Replace single global mutex + RandomEvictionCache with 16 sharded caches, each with its own mutex. This eliminates contention when 4 parallel threads verify signatures simultaneously. Also use maybeGet() instead of exists()+get() double-lookup, fix ZoneText string heap allocations, make counters atomic, and remove unused liveSnapshot copy in applySorobanStageClustersInParallel.
…mization Budget::charge() unchecked indexing and inlined cost model evaluation provided 0% TPS improvement. LLVM already eliminates bounds checks for enum-indexed arrays, and Result<> error propagation has near-zero overhead when the error path is never taken.
…th (saves ~56ms/ledger) Launch updateInMemorySorobanState on an async worker thread while the main thread runs addLiveBatch concurrently. These operate on independent data structures and share only const references to the entry vectors. Tracy confirms the 56ms in-memory update is fully overlapped with the 119ms addLiveBatch, reducing finalizeLedgerTxnChanges from ~220ms to ~164ms. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…5ms/ledger) Run LiveBucketIndex construction on async worker thread in parallel with the put loop in mergeInMemory. Both read mergedEntries as const — fully independent. Tracy confirms full overlap: index future wait averages 2.2µs. finalizeLedgerTxnChanges drops from 164ms to 136ms per ledger. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…tion (saves ~500ms/trace) getSize() recomputed xdr::xdr_size(mEnvelope) on every call with zero caching. With 2.5M+ calls per trace at 273ns each, this was 694ms of self-time. Cache the result on first call since the envelope is const after construction. Tracy confirms 72% reduction: 694ms → 195ms (273ns → 75ns per call). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
libsodium uses a portable C SHA256 implementation, missing SHA-NI hardware instructions available on Intel Xeon Platinum. OpenSSL automatically uses SHA-NI, providing 4.6x speedup for streaming add() (893ns->193ns/call) and 56% total SHA256 self-time reduction (3,744ms->1,659ms per 30s trace). Use opaque aligned storage for SHA256_CTX in the header to avoid naming conflict between OpenSSL's ::SHA256 function and stellar::SHA256 class. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Restructure applySorobanStageClustersInParallel to pre-compute readWriteKeysForStage and commit each thread's changes as its future resolves, overlapping ~47ms/stage of serial commit work with thread execution. Poll futures in any-ready order rather than sequential index order to avoid blocking on a slow thread while faster threads are ready to commit. TPS: 9,408 → 10,688. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…3.4%) Add DISABLE_META_TRACKING_FOR_TESTING config flag to skip non-production meta overhead in the max-sac-tps benchmark. When enabled, skips: - BUILD_TESTS ledgerCloseMeta creation (when no meta stream is active) - Forced enableTxMeta=true override - Per-tx mLastLedgerTxMeta deep copies (10.6K/ledger) - Bulk mLastLedgerCloseMeta deep copy Makes benchmark representative of production validators which do not stream meta. Tracy analysis shows 50.7ms/ledger savings (3.41%). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When ledgerCloseMeta is null (meta tracking disabled), operate directly on the parent LTX in processFeesSeqNums and processPostTxSetApply instead of creating a child LTX per-transaction. The child LTX was only needed for getChanges() meta tracking. Saves ~41ms/ledger from eliminating ~10.6K child LTX create/commit cycles. Combined with experiment 011 (meta tracking), TPS improves from 10,688 to 12,736 (+19.2%). Also raises APPLY_LOAD_MAX_SAC_TPS_MAX_TPS from 12000 to 15000. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Skip building LedgerTxnDelta in setEffectsDeltaFromSuccessfulTx when INVARIANT_CHECKS is empty. The delta is consumed exclusively by checkOnOperationApply which iterates an empty list when no invariants are configured. This eliminates ~285ms of shared_ptr allocations and entry copies across 4 worker threads per ledger. Benchmark: 12,736 -> 13,760 TPS (+1,024 TPS, +8.0%)
Sort lightweight 24-byte EntryRef structs (type tag + pointer) instead of full BucketEntry objects (200-500 bytes) in convertToBucketEntry. Reduces sort swap cost by ~12x and materializes final vector in one cache-friendly sequential pass. Cuts convertToBucketEntry from 31.9ms to 25.4ms per ledger. Benchmark: 13,760 -> 14,144 TPS (+384 TPS, +2.8%)
Prevent accidentally tracking tracy profiles, rustup-init, and other large build/profiling outputs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When transaction meta tracking is disabled, the child LedgerTxn in commonPreApply serves no purpose for Soroban TXs during apply. Add a fast path that operates directly on the parent LTX, skipping child LTX/snapshot/validation/signature overhead. Reduces per-TX cost from 7.5us to 1.75us (-77%), saving ~93ms/ledger in the sequential pre-parallel setup. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove Tracy ZoneScoped from 6 high-frequency trivial functions where instrumentation overhead dominated actual work: getFullHash (6.1M calls), getContentsHash (242K), getSize (2.5M), computePreApplySorobanResourceFee (242K), SHA256::add (2.2M), sha256 (1.5M). Saves ~21ms/ledger in applyLedger and reduces Tracy trace size by 20%. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Share a single LedgerTxnHeader between refundSorobanFee and the V23 event stage check in processRefund, eliminating 16K redundant header activate/deactivate cycles per ledger. Use move semantics for TransactionResultPair in the non-meta path of processResultAndMeta. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
In commitChangesToLedgerTxn, determining whether an entry is INIT (new) vs LIVE (existing) required calling mInMemorySorobanState.get() which computes sha256(xdr_to_opaque(key)) for every CONTRACT_DATA entry. With ~40K entries per ledger, this added ~16ms of SHA256 per ledger. Track existence via a bool mIsNew flag in ParallelApplyEntry, set when a TX creates an entry that didn't previously exist. This replaces the expensive SHA256-based existence check with a simple boolean. commitChangesToLedgerTxn: 72.6ms -> 44.2ms (-39%) TPS: 16,640 -> 16,960 (+1.9%) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add move overloads for createWithoutLoading/updateWithoutLoading and ScopedLedgerEntryOpt::moveFromScope to eliminate two deep copies per entry when committing parallel apply state to LedgerTxn. Reduces commitChangesToLedgerTxn from 44ms to 39ms per ledger (-12.8%). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Deactivate the LedgerTxnHeader in applyLedger before calling processFeesSeqNums, then conditionally skip the child LedgerTxn when ledgerCloseMeta is null. This eliminates child LTX creation, the commit overhead (4.5ms copying ~17K entries), and per-account load traversal through the child-parent chain. processFeesSeqNums: 66.8ms → 60.4ms/ledger (-9.6%) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pre-load Soroban read-only entries (contract instance, code, TTL) into the global parallel apply state during setup, so per-TX lookups hit thread-local maps instead of traversing to InMemorySorobanState. Also cache protocol version and skip Soroban merge tracking in processFeesSeqNums, and use std::move for mLatestTxResultSet. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace two UnorderedSet<LedgerKey> instances in recordStorageChanges with lightweight alternatives: 1. createdAndModifiedKeys → uint64_t bitfield tracking RW footprint coverage (with vector<bool> fallback for >64 keys). Eliminates 192K LedgerKey hash computations per ledger (xdrComputeHash + SipHash + RandHasher assert). 2. createdKeys → counter-based verification (numCreatedSorobanEntries == numCreatedTTLEntries). Eliminates 64K getTTLKey calls per ledger (SHA-256 + XDR serialization) in the verification loop. Tracy results: recordStorageChanges self-time dropped 76% (235ms → 56.5ms), applyLedger improved 2.3% (1019ms → 996ms per ledger). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The local reset_for_new_tx commits on p21-p24 were never pushed upstream, breaking builds on other machines. Revert those submodules to their upstream commits and add no-op stubs in soroban_proto_all.rs. Point p25 .gitmodules at SirTyson/rs-soroban-env where the local commits have been pushed to branch oh-my-opencode-test. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Same issue as p21-p24: local reset_for_new_tx commit was never pushed. Revert to upstream and make the p26 stub a no-op. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The p25 soroban-env fork (SirTyson/rs-soroban-env, branch oh-my-opencode-test) includes budget metering optimizations that skip `metered_write_xdr` calls for old entries in `get_ledger_changes`. The relevant p25 fork commits are: - e8386616: cache initial entry XDR sizes to avoid re-serializing old entries (first commit to cross the fee-rounding threshold) - 76e80fc5: further metering refinements in get_ledger_changes - 388b1859: initial metering optimization (below rounding threshold) These commits use `#[cfg(not(any(test, feature = "recording_mode")))]` to gate the production optimization path. However, stellar-core test builds use `--features testutils` (not Rust's `#[cfg(test)]`), so the production path is active during tests. The reduced CPU budget consumption changes fee refunds, which changes account balances, which changes the bucketListHash, which changes the ledger hash. Since ApplyTxSorter XORs each tx's fullHash with the txSetHash (which includes previousLedgerHash), the soroban parallel phase execution order changes. All changes are safe -- verified that: - Classic files: only hash fields changed (previousLedgerHash, hash, txSetHash, signature, bucketListHash) - Soroban files: same 5 transactions with identical envelopes, just reordered by ApplyTxSorter. Fee amounts differ due to the metering changes. Some transactions that previously succeeded now fail with ENTRY_ARCHIVED due to different execution order and prior-ledger state, and some that failed with TRAPPED/RESOURCE_LIMIT_EXCEEDED now fail with ENTRY_ARCHIVED instead (hitting the archived-entry check before the budget/trap error). No actual TTL values, contract data, or execution logic changed. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1. Budget cache accumulates charges across TXs for protocols < p25 The thread-local Budget cache in soroban_proto_any.rs reuses a Budget object across transactions via clone(), which only copies the Rc pointer to the shared BudgetImpl. For p25, reset_budget_for_new_tx() properly resets the counters. For p21-p24 and p26, it was a no-op, so charges from previous TXs accumulated, eventually causing Budget ExceededLimit errors. This broke modifySorobanNetworkConfig (used by many tests) because the 3rd TX in the upgrade sequence would fail. Fix: return bool from reset_budget_for_new_tx (true = reset succeeded). When it returns false, skip the cache and create a fresh Budget. 2. Pre-loaded RO TTL entries silently dropped during parallel apply The Soroban RO entry pre-loading optimization populates the global entry map with mIsDirty=false. When maybeMergeRoTTLBumps merges a thread's TTL bump into a pre-loaded entry, it updates the TTL value in-place but never sets mIsDirty=true, so commitChangesToLedgerTxn skips the entry entirely. Additionally, lastModifiedLedgerSeq was not propagated during the merge, causing stale metadata in subsequent stages. Fix: set mIsDirty=true after successful merge in commitChangeFromThread; propagate lastModifiedLedgerSeq in maybeMergeRoTTLBumps. 3. InMemoryLedgerTxn missing move overloads for createWithoutLoading The new InternalLedgerEntry&& overloads of createWithoutLoading and updateWithoutLoading were added to LedgerTxn but not overridden in InMemoryLedgerTxn. When called with a LedgerEntry temporary, the move overload was selected via implicit conversion, bypassing InMemoryLedgerTxn's updateLedgerKeyMap() that tracks offers for SQL. Fix: add move overloads to InMemoryLedgerTxn that extract the key before forwarding via std::move. Regenerate protocol-25 ledger close meta golden files to reflect the corrected TTL bump behavior. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cache the mapping from Soroban data/code keys to their TTL keys in a per-cluster UnorderedMap during collectClusterFootprintEntriesFromGlobal. This eliminates redundant SHA-256 + XDR serialization in buildRoTTLSet (called per-TX), flushRoTTLBumpsInTxWriteFootprint, and the init path. Also replace the per-TX UnorderedSet<LedgerKey> allocation in buildRoTTLSet with a direct linear scan of the TX's RO footprint using cached TTL key lookups (2-4 entries for SAC transfers). TPS impact: within noise (~0%), but eliminates ~170K+ redundant SHA-256 computations per stage. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace erase+emplace pattern in updateContractDataTTL and updateContractData with in-place mutation through unordered_set's shallow const semantics. Eliminates ~54K SHA-256 recomputations and memory allocation cycles per ledger, reducing updateState self-time by 13% (88.7ms → 77.2ms). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pre-compute expected entry counts from footprint sizes and call reserve() on ParallelApplyEntryMap containers before they accumulate entries. Eliminates log2(N) rehash operations during parallel apply, yielding -26% commitChangesFromThread and -27% commitChangesToLedgerTxn self-time. +576 TPS (+3.1%): 18,368 → 18,944 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pre-serialize CPU and memory cost params once per ledger in ParallelLedgerInfo instead of re-serializing via xdr_to_opaque() for every TX (~64K times). Reduces "serialize inputs" zone time by 46% (217ms -> 117ms) and applyLedger by 2.1% (987ms -> 966ms). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use the pre-computed TTL key cache from ThreadParallelApplyLedgerState in addReads instead of recomputing getTTLKey (XDR serialize + SHA-256) for every soroban footprint entry. Reduces addReads self-time by 15% (-37ms across 128K calls). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cache the uint256 TTL key hash (SHA-256 of XDR-serialized LedgerKey) in ValueEntry at construction time instead of recomputing it on every copyKey()/hash()/equality call. This eliminates repeated SHA-256 + XDR serialize operations during hash table lookups in InMemorySorobanState, reducing updateState self-time by 18% (-14ms/ledger). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
resolveBackgroundEvictionScan previously received an UnorderedSet<LedgerKey> built by getAllKeysWithoutSealing() containing ~128K entries (~20ms to build), but only performed ~10-100 lookups. Added isModifiedKey() to LedgerTxn for direct O(1) lookups in the existing EntryMap, eliminating the set construction. resolveEviction zone: 20ms -> 0.116ms per ledger (99.4% reduction). TPS: 18,944 -> 19,328 avg (+2.0%). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace unordered_set<InternalInMemoryBucketEntry> with unordered_map<LedgerKey, IndexPtrT> in InMemoryBucketState. Eliminates ~23K heap allocations per ledger and all virtual dispatch in the scan() hot path. +384 TPS (+2.0%).
Eliminate duplicate Persistent storage reads in spend_balance and receive_balance for Contract addresses. The fused path reads the balance entry once and checks the authorized flag inline, saving ~128K redundant storage reads per ledger (64K transfers x 2). Benchmark: 19,264 -> 19,520 TPS (+1.3%) Tracy trace: max-sac-tps-068.tracy Also includes fail doc for experiment 067 (blocking wait in parallel apply).
Remove tracy_span! from visit_obj_untyped (7.87M calls/ledger), metered_map::get (2.96M calls/ledger), add_host_object (~200K calls), and all env function zones via vmcaller_env macro (~1M+ calls). Benchmark: 19,712 TPS vs 19,520 baseline (+1.0%)
…n recordStorageChanges Track which read-write footprint keys had existing entries during addReads using a bitfield (uint64_t for <=64 keys, vector<bool> fallback). In recordStorageChanges, entries matching a known-existing RW key use upsertLedgerEntryKnownExisting, skipping the getLiveEntryOpt existence check that traverses mTxEntryMap -> mThreadState -> InMemorySorobanState. For SAC transfers, this redirects ~125K calls (38% of upserts) from the 2.41us/call path to a 0.71us/call path, saving ~212ms per 60s window. Result: 19,840 TPS (+4.7% over 18,944 baseline, new record)
update p25 env submodule script update config fix config fix 2 TxGenerator update more fixes bridge fixes more fixes
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
No description provided.