Parquet: prevent binary offset overflow by stopping batch early #9362
Open
vigneshsiva11 wants to merge 1 commit into apache:main from
Conversation
Pull request overview
This PR fixes a critical bug where reading Parquet files containing very large binary or string values could cause an offset overflow error or panic. The fix moves the overflow check to occur before buffer mutation, ensuring that the internal state remains consistent if an overflow would occur.
Changes:
- Modified the `try_push` method in `OffsetBuffer` to calculate and validate the next offset before mutating internal buffers (see the sketch below)
- The overflow detection now happens before calling `extend_from_slice` and `push`, preventing partial state corruption
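A rough sketch of the check-before-mutate pattern described above. The struct here is a simplified stand-in, not the actual arrow-rs `OffsetBuffer` (which is generic over the offset width); this version hard-codes `i32` offsets:

```rust
/// Simplified stand-in for the reader's offset buffer, for illustration only.
struct OffsetBuffer {
    offsets: Vec<i32>, // starts with a leading 0, one entry per value after that
    values: Vec<u8>,
}

impl OffsetBuffer {
    fn new() -> Self {
        Self { offsets: vec![0], values: Vec::new() }
    }

    fn try_push(&mut self, data: &[u8]) -> Result<(), String> {
        let last = *self.offsets.last().expect("offsets is never empty");
        // Compute and validate the next offset BEFORE mutating either buffer,
        // so a failed push leaves the internal state untouched.
        let len = i32::try_from(data.len()).map_err(|_| "value too large".to_string())?;
        let next = last
            .checked_add(len)
            .ok_or_else(|| "offset overflow".to_string())?;
        // Only mutate once the new offset is known to be representable.
        self.values.extend_from_slice(data);
        self.offsets.push(next);
        Ok(())
    }
}
```

The ordering is the essence of the fix: with the check placed after `extend_from_slice`, an overflow would leave `values` extended but `offsets` unchanged, corrupting the buffer for any subsequent use.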
Which issue does this PR close?
Rationale for this change
When reading Parquet files containing very large binary or string values, the Arrow Parquet reader can attempt to construct a RecordBatch whose total value buffer exceeds the maximum representable offset size. This can lead to an overflow error or panic during decoding.
Instead of allowing the buffer to overflow and failing late, the reader should detect this condition early and stop decoding before the offset exceeds the representable limit. This behavior is consistent with other Arrow implementations (for example, PyArrow), which emit smaller batches when encountering very large row groups.
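A minimal sketch of that early-stop behavior, under the assumption of `i32` offsets; the function and variable names below are hypothetical, not the actual reader internals. The loop fills a batch until either the row limit is reached or the next value would push the running offset past `i32::MAX`, at which point it ends the batch and leaves the remaining values for the next one:

```rust
/// Hypothetical batch-fill loop illustrating "stop early instead of
/// overflowing": returns i32 offsets plus the concatenated value bytes.
fn fill_batch<'a, I: Iterator<Item = &'a [u8]>>(
    pending: &mut std::iter::Peekable<I>,
    batch_size: usize,
) -> (Vec<i32>, Vec<u8>) {
    let mut offsets = vec![0i32]; // rows in batch = offsets.len() - 1
    let mut values = Vec::new();
    while offsets.len() <= batch_size {
        let Some(&next) = pending.peek() else { break };
        let last = *offsets.last().unwrap();
        match i32::try_from(next.len()).ok().and_then(|len| last.checked_add(len)) {
            Some(end) => {
                values.extend_from_slice(next);
                offsets.push(end);
                pending.next();
            }
            // The next value would overflow the offset space: emit the
            // (smaller) batch now and let the caller start a fresh batch for
            // the rest. (A real reader would have to error if even an empty
            // batch cannot hold the value.)
            None => break,
        }
    }
    (offsets, values)
}
```

Called repeatedly, a loop like this produces roughly the behavior described above: a row group whose values exceed the 2 GiB that `i32` offsets can address comes back as several smaller batches rather than one failing batch.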
What changes are included in this PR?
Are these changes tested?
Yes.
Note: Some Parquet and Arrow integration tests require external test data provided via git submodules (`parquet-testing` and `testing`). These submodules are not present in a minimal local checkout (they can be fetched with `git submodule update --init`) but are initialized in CI.
Are there any user-facing changes?
Yes: when a batch would otherwise exceed the representable offset range, the reader now stops early and emits a smaller batch instead of returning an error or panicking. There are no breaking changes to public APIs.