use multi-threaded XZ decompression via liblzma #13
bennyz wants to merge 1 commit into jumpstarter-dev:main from
Conversation
📝 Walkthrough
The PR replaces the xz2 crate with liblzma 0.4 (parallel feature) to enable multi-threaded XZ decompression.
Sequence Diagram

```mermaid
sequenceDiagram
    participant dl as Download Loop
    participant buf as Buffer<br/>(buffer_rx)
    participant decomp as Decompressor<br/>Thread
    participant cons as Consumer<br/>Task
    participant dest as Destination<br/>(SSD)
    dl->>buf: Stream byte chunks
    rect rgba(100, 200, 100, 0.5)
        Note over decomp: In-process decompression
        buf->>decomp: Read Bytes from buffer_rx
        decomp->>decomp: Decode chunk<br/>(liblzma/flate2)
    end
    decomp->>cons: Send decompressed Vec<u8>
    cons->>cons: Detect format &<br/>parse sparse data
    cons->>dest: Write to block device
    decomp->>decomp: Report progress<br/>via consumed_tx
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/fls/decompress.rs`:
- Around lines 127-130: The Compression::Zstd branch in decompress.rs returns a static "install zstdcat" error, but get_compression_from_url routes .zst/.zstd URLs here, so the message is misleading and no external fallback is actually used. Either implement the fallback by invoking zstdcat (use std::process::Command, detect absence of the binary and surface a clear error, stream input/output, and propagate the exit status), or update the error text in the Compression::Zstd arm to state explicitly that URL-based Zstd flashing is unsupported (keeping a comment referencing get_compression_from_url). Choose one approach and update the Compression::Zstd arm in decompress.rs and its caller used by get_compression_from_url accordingly.
- Around lines 83-90: get_compression_from_url currently receives the full URL including query strings and fragments, so filenames like "image.xz?X-Amz-Signature=..." yield Compression::None. Update get_compression_from_url to first strip any query string or fragment (take the part before '?' and '#'), then extract the extension (e.g. via rsplit('.')) and match it to Compression (Gzip, Xz, Zstd, None) as before, so signed URLs and URLs with fragments are detected correctly.
- Around lines 117-123: The MT XZ decoder currently disables memory limits by calling memlimit_threading(u64::MAX) and memlimit_stop(u64::MAX) on liblzma::stream::MtStreamBuilder, which is unsafe for untrusted firmware. Use a conservative bounded limit instead: either a constant (e.g. DEFAULT_XZ_MEMLIMIT) or a configurable limit read from BlockFlashOptions (add an xz_memlimit or xz_max_memory field), validate it, and pass that value into memlimit_threading(...) and memlimit_stop(...). Leave the rest of the MtStreamBuilder/decoder creation and the XzDecoder::new_stream(channel_reader, stream) call intact, and give the option a sensible default so callers that don't opt in get safe behavior.
In `@src/fls/oci/from_oci.rs`:
- Line 16: The OCI tar-entry XZ extraction paths still instantiate single-threaded decoders with XzDecoder::new(); replace those constructions with the multi-threaded builder pattern used in src/fls/decompress.rs by importing liblzma::stream::MtStreamBuilder, building the stream there (mirroring the same options and error handling used in decompress.rs), and updating the two places that call XzDecoder::new() to wrap that stream via XzDecoder::new_stream(reader, stream) instead, so OCI extraction uses the MT decoder.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d5b19a2c-d622-443f-9e43-05306f3326b8
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
Cargo.toml, src/fls/decompress.rs, src/fls/from_url.rs, src/fls/magic_bytes.rs, src/fls/oci/from_oci.rs, src/fls/stream_utils.rs, tests/common/mod.rs
Replace xz2 crate with liblzma 0.4 (parallel feature) to enable
lzma_stream_decoder_mt().

This helps remove the decompression bottleneck from XZ decompression,
speeding flashing up by ~4x.

Signed-off-by: Benny Zlotnik <bzlotnik@redhat.com>
Assisted-by: claude-opus-4.6