Implement reverse performance optimization #4775
ahkcs wants to merge 22 commits into opensearch-project:main
Conversation
Force-pushed from 4483045 to 0246535
QQ: I recall the major comment on the original PR was about early optimization in the analyzer layer. Is this new PR trying to address that concern? Ref: #4056 (comment)
Hi Chen, I think that's a valid concern. However, after trying it out, I think it has significant complexity compared to the current approach. I think
noCharger
left a comment
Can you add benchmark results comparing before vs. after?
This commit optimizes the `reverse` command in the Calcite planner by intelligently reversing existing sort collations instead of always using the ROW_NUMBER() approach.

Key changes:
- Added PlanUtils.reverseCollation() method to flip sort directions and null directions
- Updated CalciteRelNodeVisitor.visitReverse() to:
  - Check for existing sort collations
  - Reverse them if present (more efficient)
  - Fall back to ROW_NUMBER() when no sort exists
- Added comprehensive integration test expected outputs for:
  - Single field reverse pushdown
  - Multiple field reverse pushdown
  - Reverse fallback cases
  - Double reverse no-op optimizations

This optimization significantly improves performance when reversing already-sorted data by leveraging database-native sort reversal.

Based on PR opensearch-project#4056 by @selsong

Signed-off-by: Kai Huang <ahkcs@amazon.com>

# Conflicts:
#   core/src/main/java/org/opensearch/sql/calcite/utils/PlanUtils.java
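The commit above describes a `PlanUtils.reverseCollation()` helper that flips sort directions and null directions. A minimal sketch of what such a helper could look like in Calcite terms (illustrative only; the PR's actual implementation may differ):

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.calcite.rel.RelCollation;
import org.apache.calcite.rel.RelCollations;
import org.apache.calcite.rel.RelFieldCollation;

final class ReverseCollationSketch {
  /** Flips ASC <-> DESC and NULLS FIRST <-> NULLS LAST for every field in the collation. */
  static RelCollation reverseCollation(RelCollation collation) {
    List<RelFieldCollation> reversed = new ArrayList<>();
    for (RelFieldCollation field : collation.getFieldCollations()) {
      RelFieldCollation.Direction direction =
          field.getDirection() == RelFieldCollation.Direction.ASCENDING
              ? RelFieldCollation.Direction.DESCENDING
              : RelFieldCollation.Direction.ASCENDING; // simplification: any non-ascending direction becomes ascending
      RelFieldCollation.NullDirection nullDirection =
          field.nullDirection == RelFieldCollation.NullDirection.FIRST
              ? RelFieldCollation.NullDirection.LAST
              : RelFieldCollation.NullDirection.FIRST;
      reversed.add(new RelFieldCollation(field.getFieldIndex(), direction, nullDirection));
    }
    return RelCollations.of(reversed);
  }
}
```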
Force-pushed from dd9f4d6 to 74d5997
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java`:
- Around line 689-816: backtrackForCollation is allowing traversal through
limit-like Sorts (LogicalSystemLimit and Sorts with non-null fetch/offset), so
insertReversedSortInTree ends up placing the reversed sort under the limit and
changing semantics; update backtrackForCollation to treat LogicalSystemLimit and
org.apache.calcite.rel.core.Sort nodes that have non-null fetch/offset as
blocking operators (return null when encountered) and do not traverse past them,
and then adjust visitReverse to, when a limit/barrier is encountered, obtain the
barrier's collation (from the limit/sort itself), compute the reversed collation
and apply the reversed sort on top of that barrier (i.e., wrap the Limit/Sort
with the reversed sort using context.relBuilder.sort or by creating a
LogicalSort above the visited limit) instead of calling insertReversedSortInTree
to insert below the barrier; keep references to backtrackForCollation,
LogicalSystemLimit, org.apache.calcite.rel.core.Sort (fetch/offset),
insertReversedSortInTree, and visitReverse to locate the changes.
In `@docs/user/ppl/cmd/reverse.md`:
- Line 6: Update the warning for the reverse command to clarify its optimized
behavior: state that reverse may be pushed down or be a no-op when a reversible
collation exists, and that the heavy full-dataset materialization and high
coordinating-node memory usage only occur when no reversible collation is
available and pushdown cannot be applied; mention the terms "reverse",
"reversible collation", "pushdown", and "no-op" so readers can relate this to
the optimization section.
- Around line 209-218: Example 8 in the reverse command docs uses
source=time_test_data which is undocumented; update the example to use the
documented sample index by replacing source=time_test_data with source=accounts
in the Example 8 block (the code block that begins with "source=time_test_data"
and includes the reverse and head 5 pipeline) so it matches the other examples
that use the accounts sample index.
🧹 Nitpick comments (1)
integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java (1)
359-373: Consider asserting order since an explicit sort makes it deterministic.
Using verifyDataRows (unordered) doesn't validate the reverse ordering. With a post-aggregation sort, order should be stable; consider verifyDataRowsInOrder (or add a deterministic tiebreaker).
💡 Suggested test assertion change
- verifyDataRows(result, rows(4, "M"), rows(3, "F"));
+ verifyDataRowsInOrder(result, rows(4, "M"), rows(3, "F"));
core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java
@@ -5,6 +5,16 @@ The `reverse` command reverses the display order of the search results. It retur
> **Note**: The `reverse` command processes the entire dataset. If applied directly to millions of records, it consumes significant coordinating node memory resources. Only apply the `reverse` command to smaller datasets, typically after aggregation operations.
Update the warning note to reflect the optimized behavior.
The note says reverse always processes the entire dataset and consumes significant memory, but the new optimization section says it can be pushed down or even be a no‑op. This is now misleading for readers. Consider rephrasing to clarify that the heavy, full‑materialization behavior only applies when no reversible collation exists and pushdown can’t be applied.
📝 Possible rewording
- > **Note**: The `reverse` command processes the entire dataset. If applied directly to millions of records, it consumes significant coordinating node memory resources. Only apply the `reverse` command to smaller datasets, typically after aggregation operations.
+ > **Note**: The `reverse` command can be optimized by reversing an existing sort or by using `@timestamp` when available. If no reversible collation is found and no `@timestamp` exists, `reverse` is a no-op. When `reverse` cannot be pushed down and must materialize results, it can consume significant coordinating node memory resources. Prefer using it after aggregation or with an explicit sort.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
> **Note**: The `reverse` command processes the entire dataset. If applied directly to millions of records, it consumes significant coordinating node memory resources. Only apply the `reverse` command to smaller datasets, typically after aggregation operations.
> **Note**: The `reverse` command can be optimized by reversing an existing sort or by using `@timestamp` when available. If no reversible collation is found and no `@timestamp` exists, `reverse` is a no-op. When `reverse` cannot be pushed down and must materialize results, it can consume significant coordinating node memory resources. Prefer using it after aggregation or with an explicit sort.
🤖 Prompt for AI Agents
In `@docs/user/ppl/cmd/reverse.md` at line 6, Update the warning for the reverse
command to clarify its optimized behavior: state that reverse may be pushed down
or be a no-op when a reversible collation exists, and that the heavy
full-dataset materialization and high coordinating-node memory usage only occur
when no reversible collation is available and pushdown cannot be applied;
mention the terms "reverse", "reversible collation", "pushdown", and "no-op" so
readers can relate this to the optimization section.
## Example 8: Reverse with @timestamp field

When no explicit sort exists but the data source has an `@timestamp` field, `reverse` sorts by `@timestamp` in descending order:

```ppl
source=time_test_data
| fields value, category, `@timestamp`
| reverse
| head 5
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for the referenced indices and sample data documentation
echo "=== Searching for time_test_data index references ==="
rg -n 'time_test_data' docs --type md
echo -e "\n=== Searching for events index references ==="
rg -n 'source=events|index=events|"events"' docs --type md
echo -e "\n=== Searching for sample datasets documentation ==="
rg -n -i 'sample.*data|sample.*dataset|test.*data|fixture' docs --type md | head -30
echo -e "\n=== Check if there is any samples or test data documentation ==="
fd 'sample|fixture|data' docs --type f | head -20
Repository: opensearch-project/sql
Length of output: 6551
🏁 Script executed:
#!/bin/bash
# Check the reverse.md file for any sample data documentation or intro
echo "=== reverse.md structure and content around examples ==="
head -50 docs/user/ppl/cmd/reverse.md
echo -e "\n=== Check if there's an intro section defining sample data ==="
rg -B5 -A5 'time_test_data' docs/user/ppl/cmd/reverse.md
echo -e "\n=== Check timechart.md for events documentation ==="
head -100 docs/user/ppl/cmd/timechart.md | tail -50
echo -e "\n=== Search for sample data setup documentation ==="
rg -n -i 'sample.*dataset|how.*to.*use.*example|sample.*index' docs --type md | head -20
echo -e "\n=== Check for getting started or setup docs ==="
fd -e md docs | xargs grep -l -i 'sample.*data\|test.*index\|setup\|getting.*started' | head -10
Repository: opensearch-project/sql
Length of output: 5737
🏁 Script executed:
#!/bin/bash
# Check PPL documentation structure for intro/setup
echo "=== PPL docs structure ==="
ls -la docs/user/ppl/
echo -e "\n=== Check PPL intro/index page ==="
fd index.md docs/user/ppl --exec cat {} \;
echo -e "\n=== Look for reference to sample indices in PPL docs ==="
rg -n 'accounts|events' docs/user/ppl/general --type md
echo -e "\n=== Check example 13 in reverse.md ==="
sed -n '330,340p' docs/user/ppl/cmd/reverse.md
echo -e "\n=== Search for accounts index documentation ==="
rg -n 'source=accounts' docs --type md
echo -e "\n=== Check if there's a README or intro for PPL ==="
cat docs/user/ppl/README.md 2>/dev/null || echo "No README.md found"
Repository: opensearch-project/sql
Length of output: 37180
Replace time_test_data with a documented sample index.
Example 8 uses source=time_test_data, which has no references elsewhere in the documentation. All other examples in this file use source=accounts, which is the standard documented sample index across PPL documentation. Update Example 8 to use source=accounts for consistency. Example 13's use of source=events is already established in other command documentation (timechart.md, etc.).
🤖 Prompt for AI Agents
In `@docs/user/ppl/cmd/reverse.md` around lines 209 - 218, Example 8 in the
reverse command docs uses source=time_test_data which is undocumented; update
the example to use the documented sample index by replacing
source=time_test_data with source=accounts in the Example 8 block (the code
block that begins with "source=time_test_data" and includes the reverse and head
5 pipeline) so it matches the other examples that use the accounts sample index.
There's some Calcite physical optimizer merging issue.
Signed-off-by: Kai Huang <ahkcs@amazon.com>
@songkant-aws Updated the PR with a new approach to resolve the issue mentioned; details are in the PR description.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java`:
- Around line 749-775: The current shuttle in insertReversedSortInTree inserts
the reversed LogicalSort immediately after the first Sort with collation, which
can place it below a limit barrier and change semantics; update the shuttle to
treat a Sort with getFetch() != null or getOffset() != null as a barrier and,
when such a barrier is encountered during traversal, wrap that barrier (the
visited barrier node) with the reversed sort instead of inserting below the
downstream Sort. Concretely, inside the anonymous RelHomogeneousShuttle in
insertReversedSortInTree, detect org.apache.calcite.rel.core.Sort where
getFetch()!=null || getOffset()!=null and, upon visiting it, if the downstream
collation Sort was found during traversal, return
org.apache.calcite.rel.logical.LogicalSort.create(visitedBarrier,
reversedCollation, null, null) so the reversed sort sits above the limit; keep
using sortFound to avoid multiple inserts and ensure you still recurse via
super.visit(other) to obtain visited children before wrapping.
- Around line 781-806: The current in-place replacement uses
MetadataQuery.collations() (which can return input-propagated collation) and
then replaces a Sort node including its fetch, which breaks fetch-only Sort
semantics; change the logic in CalciteRelNodeVisitor so that when currentNode is
an instance of org.apache.calcite.rel.core.Sort you first inspect
existingSort.getCollation(): if existingSort.getCollation() is non-empty,
compute reversedCollation from that and do the in-place replacement via
PlanUtils.replaceTop as before; but if existingSort.getCollation() is null or
empty (fetch-only Sort), do NOT replaceTop with a new LogicalSort containing
fetch — instead compute the reversed collation from metadata (if needed) but
call context.relBuilder.sort(reversedCollation) to add a separate Sort after the
limit so the "limit then reverse" semantics are preserved.
In
`@integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java`:
- Around line 21-111: Update the CalciteNoPushdownIT class javadoc to reflect
the actual suite contents (only CalciteExplainIT is enabled) and add an inline
code comment in the class (near the `@Suite.SuiteClasses` annotation) explaining
why the remaining tests are commented out or intentionally excluded (e.g.,
resource/time constraints, flaky tests, or strategic coverage choice),
referencing the class name CalciteNoPushdownIT and the enabled test symbol
CalciteExplainIT.class so maintainers can understand the reduced scope and
rationale.
In
`@integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java`:
- Around line 288-313: The top comment in the testStreamstatsByWithReverse
method is incorrect: replace the line that says "Test that reverse is ignored
after streamstats with partitioning (by clause)" with a brief description that
reverse is effective when backtracking is enabled (i.e., streamstats with
partitioning supports reversing the __stream_seq__ order), so the test's
expectations (reversed rows) match the comment; update only the comment in
testStreamstatsByWithReverse to reflect that behavior.
In `@ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLReverseTest.java`:
- Around line 214-235: Add a new unit test method named testSortHeadReverse that
constructs the PPL "source=EMP | sort SAL | head 5 | reverse", calls
getRelNode(ppl), and asserts the logical plan via verifyLogical(root,
expectedLogical) where expectedLogical represents a top-level DESC reverse sort
above an ASC sort+fetch (e.g., LogicalSort(sort0=[$5], dir0=[DESC-nulls-last])
above LogicalSort(sort0=[$5], dir0=[ASC-nulls-first], fetch=[5]) above
LogicalTableScan(...)); this ensures the reverse is applied after the limit and
prevents the backtracking bug.
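The two CalciteRelNodeVisitor items above ask for the same behavior: a Sort carrying a fetch/offset should act as a barrier, and the reversed sort should be applied on top of it rather than pushed underneath. A rough sketch of that branch, assuming the `PlanUtils.reverseCollation()` helper added in this PR (the class and method below are illustrative, not the PR's exact code):

```java
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.Sort;
import org.apache.calcite.rel.logical.LogicalSort;
import org.opensearch.sql.calcite.utils.PlanUtils;

final class ReverseLimitBarrierSketch {
  /**
   * When the node under reverse is a limit-like Sort (fetch/offset set), keep
   * "limit, then reverse" semantics by sorting on top of the limit instead of
   * rewriting anything underneath it.
   */
  static RelNode reverseAboveLimit(Sort limit) {
    boolean isLimitLike = limit.fetch != null || limit.offset != null;
    if (isLimitLike && !limit.getCollation().getFieldCollations().isEmpty()) {
      // Wrap the limit with a reversed sort: the same rows are selected, only their order flips.
      return LogicalSort.create(
          limit, PlanUtils.reverseCollation(limit.getCollation()), null, null);
    }
    // A fetch-only Sort (no collation) or a pure ordering Sort is handled by the other
    // branches of visitReverse (in-place reversal, backtracking, @timestamp fallback, no-op).
    return limit;
  }
}
```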
🧹 Nitpick comments (1)
integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java (1)
181-192: Clarify the "natural order" assumption in the comment.
Line 190 states "Without sort or @timestamp, reverse is ignored, so data comes in natural order" and line 191 claims this is "ascending by account_number". OpenSearch's natural order is not guaranteed to be by account_number. Consider updating the comment to clarify this is the expected order for the specific test data loading, or use verifyDataRows (unordered) if the specific order isn't critical to the test's purpose.
core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java
@Suite.SuiteClasses({CalciteExplainIT.class
  // CalciteAddTotalsCommandIT.class,
  // CalciteAddColTotalsCommandIT.class,
  // CalciteArrayFunctionIT.class,
  // CalciteBinCommandIT.class,
  // CalciteConvertTZFunctionIT.class,
  // CalciteCsvFormatIT.class,
  // CalciteDataTypeIT.class,
  // CalciteDateTimeComparisonIT.class,
  // CalciteDateTimeFunctionIT.class,
  // CalciteDateTimeImplementationIT.class,
  // CalciteDedupCommandIT.class,
  // CalciteDescribeCommandIT.class,
  // CalciteExpandCommandIT.class,
  // CalciteFieldsCommandIT.class,
  // CalciteFillNullCommandIT.class,
  // CalciteFlattenCommandIT.class,
  // CalciteFlattenDocValueIT.class,
  // CalciteGeoIpFunctionsIT.class,
  // CalciteGeoPointFormatsIT.class,
  // CalciteHeadCommandIT.class,
  // CalciteInformationSchemaCommandIT.class,
  // CalciteIPComparisonIT.class,
  // CalciteIPFunctionsIT.class,
  // CalciteJsonFunctionsIT.class,
  // CalciteLegacyAPICompatibilityIT.class,
  // CalciteLikeQueryIT.class,
  // CalciteMathematicalFunctionIT.class,
  // CalciteMultisearchCommandIT.class,
  // CalciteMultiValueStatsIT.class,
  // CalciteNewAddedCommandsIT.class,
  // CalciteNowLikeFunctionIT.class,
  // CalciteObjectFieldOperateIT.class,
  // CalciteOperatorIT.class,
  // CalciteParseCommandIT.class,
  // CalcitePPLAggregationIT.class,
  // CalcitePPLAppendcolIT.class,
  // CalcitePPLAppendCommandIT.class,
  // CalcitePPLBasicIT.class,
  // CalcitePPLBuiltinDatetimeFunctionInvalidIT.class,
  // CalcitePPLBuiltinFunctionIT.class,
  // CalcitePPLBuiltinFunctionsNullIT.class,
  // CalcitePPLCaseFunctionIT.class,
  // CalcitePPLCastFunctionIT.class,
  // CalcitePPLConditionBuiltinFunctionIT.class,
  // CalcitePPLCryptographicFunctionIT.class,
  // CalcitePPLDedupIT.class,
  // CalcitePPLEventstatsIT.class,
  // CalciteStreamstatsCommandIT.class,
  // CalcitePPLExistsSubqueryIT.class,
  // CalcitePPLExplainIT.class,
  // CalcitePPLFillnullIT.class,
  // CalcitePPLGrokIT.class,
  // CalcitePPLInSubqueryIT.class,
  // CalcitePPLIPFunctionIT.class,
  // CalcitePPLJoinIT.class,
  // CalcitePPLJsonBuiltinFunctionIT.class,
  // CalcitePPLLookupIT.class,
  // CalcitePPLParseIT.class,
  // CalcitePPLPatternsIT.class,
  // CalcitePPLPluginIT.class,
  // CalcitePPLRenameIT.class,
  // CalcitePPLScalarSubqueryIT.class,
  // CalcitePPLSortIT.class,
  // CalcitePPLSpathCommandIT.class,
  // CalcitePPLStringBuiltinFunctionIT.class,
  // CalcitePPLTrendlineIT.class,
  // CalcitePrometheusDataSourceCommandsIT.class,
  // CalciteQueryAnalysisIT.class,
  // CalciteRareCommandIT.class,
  // CalciteRegexCommandIT.class,
  // CalciteReverseCommandIT.class,
  // CalciteRexCommandIT.class,
  // CalciteRenameCommandIT.class,
  // CalciteReplaceCommandIT.class,
  // CalciteResourceMonitorIT.class,
  // CalciteSearchCommandIT.class,
  // CalciteSettingsIT.class,
  // CalciteShowDataSourcesCommandIT.class,
  // CalciteSortCommandIT.class,
  // CalciteStatsCommandIT.class,
  // CalciteSystemFunctionIT.class,
  // CalciteTextFunctionIT.class,
  // CalciteTopCommandIT.class,
  // CalciteTrendlineCommandIT.class,
  // CalciteTransposeCommandIT.class,
  // CalciteVisualizationFormatIT.class,
  // CalciteWhereCommandIT.class,
  // CalcitePPLTpchIT.class,
  // CalciteMvCombineCommandIT.class
})
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check git history for this file to understand when tests were commented out
echo "=== Git history for CalciteNoPushdownIT.java ==="
git log --oneline -10 -- "integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java" 2>/dev/null || echo "Git history not available"
# Check if CalciteReverseCommandIT exists
echo -e "\n=== Searching for CalciteReverseCommandIT ==="
fd -e java "CalciteReverseCommandIT" 2>/dev/null || echo "File not found"
# Check the current structure of CalciteNoPushdownIT.java
echo -e "\n=== Current CalciteNoPushdownIT.java class structure ==="
cat -n integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java | head -20
# Count commented vs uncommented test classes
echo -e "\n=== Test class counts ==="
cat integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java | grep -c "CalciteExplainIT.class"
echo "Commented out test classes:"
cat integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java | grep "//.*CalciteTest" | wc -l
# Check if there's a separate "with pushdown" suite to compare
echo -e "\n=== Looking for other test suites ==="
fd -e java "CalcitePushdownIT\|CalciteIT\.java" integ-test/ 2>/dev/null || echo "No matching files"Repository: opensearch-project/sql
Length of output: 1119
🏁 Script executed:
#!/bin/bash
# Read the actual file to see current state
echo "=== Full CalciteNoPushdownIT.java ==="
wc -l integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java
echo -e "\n=== File content ==="
cat integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java
# Check the git commit message for context
echo -e "\n=== Git commit details ==="
git log -1 --format="%H %s %b" 1fa7308e 2>/dev/null || echo "Commit details not available"
# Check CalciteReverseCommandIT content
echo -e "\n=== CalciteReverseCommandIT.java exists (first 50 lines) ==="
head -50 integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java
Repository: opensearch-project/sql
Length of output: 6411
🏁 Script executed:
#!/bin/bash
# Check if there are other test suite runners that might include these disabled tests
echo "=== Searching for other Calcite test suite files ==="
fd -e java "Suite\|Runner" integ-test/src/test/java/org/opensearch/sql/calcite/ 2>/dev/null | grep -v "\.class"
# Check for any test configuration or gradle tasks
echo -e "\n=== Checking for test-related gradle configurations ==="
fd -e gradle "build\|test" integ-test/ | head -5
# Check if there's a parent suite or if CalciteExplainIT runs these tests
echo -e "\n=== CalciteExplainIT.java content (first 100 lines) ==="
find integ-test -name "CalciteExplainIT.java" -exec cat {} \; | head -100
# Check for comments or documentation about the disabled tests
echo -e "\n=== Checking git history for more context ==="
git show 1fa7308e --stat 2>/dev/null | head -30
Repository: opensearch-project/sql
Length of output: 6122
🏁 Script executed:
#!/bin/bash
# Verify CalciteReverseCommandIT is in the remote package and check test count
echo "=== Verifying CalciteReverseCommandIT location and test methods ==="
grep -c "@Test" integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java
# Check if any other test suite or configuration references these disabled tests
echo -e "\n=== Checking for any test suite includes or references ==="
rg "CalciteAddTotalsCommandIT|CalciteArrayFunctionIT|CalciteBinCommandIT" integ-test/ --type java 2>/dev/null | head -5
# Check what the intent might be - look for TODOs or FIXMEs
echo -e "\n=== Checking for TODO/FIXME comments in CalciteNoPushdownIT ==="
rg "TODO|FIXME|WIP|disabled|comment" integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java 2>/dev/null || echo "None found"
# Verify the javadoc statement
echo -e "\n=== Extracting exact javadoc text ==="
sed -n '15,19p' integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java
Repository: opensearch-project/sql
Length of output: 1170
Update javadoc to accurately reflect which tests run in the no-pushdown suite.
The class javadoc claims "This test suite runs all remote Calcite integration tests without pushdown enabled," but only CalciteExplainIT.class is enabled in @Suite.SuiteClasses. The remaining ~90 test classes are commented out with no explanation.
This creates a misleading contract for maintainers. Either:
- Update the javadoc to document which specific tests run and why others are disabled, or
- Add a code comment explaining the strategic rationale for this reduction (especially relevant since
CalciteReverseCommandIT, which has 27 tests directly related to reverse optimization, is disabled from no-pushdown coverage).
🤖 Prompt for AI Agents
In `@integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java`
around lines 21 - 111, Update the CalciteNoPushdownIT class javadoc to reflect
the actual suite contents (only CalciteExplainIT is enabled) and add an inline
code comment in the class (near the `@Suite.SuiteClasses` annotation) explaining
why the remaining tests are commented out or intentionally excluded (e.g.,
resource/time constraints, flaky tests, or strategic coverage choice),
referencing the class name CalciteNoPushdownIT and the enabled test symbol
CalciteExplainIT.class so maintainers can understand the reduced scope and
rationale.
integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java
ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLReverseTest.java
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLTimechartTest.java`:
- Around line 365-419: Add null/boundary/error tests in CalcitePPLTimechartTest
to cover reverse+timechart edge cases: add a test method (e.g.,
testTimechartWithReverse_NullTimefield) that runs "source=events | timechart
timefield=@timestamp span=1m count() | reverse" on data where `@timestamp` can be
NULL and verify the logical plan and SQL via getRelNode, verifyLogical and
verifyPPLToSparkSQL include proper IS NULL handling or expected ordering; add a
boundary/empty-input test (e.g., testTimechartWithReverse_EmptySource) that runs
the pipeline on an empty source and asserts the plan still generates a valid
aggregate/sort; and add an invalid-input test (e.g.,
testTimechartWithReverse_InvalidTimefield) that calls getRelNode with a
nonexistent timefield and asserts the parser/optimizer throws the expected
exception using assertThrows (or the project's error-assertion helper). Ensure
each new test references the same helper methods (getRelNode, verifyLogical,
verifyPPLToSparkSQL) and asserts the specific failure message or null-handling
behavior.
// ==================== Timechart with Reverse tests ====================
// These tests verify that reverse works correctly with timechart.
// Timechart always adds a sort at the end of its plan, so reverse will
// find the collation via metadata query (tier 1) and flip the sort direction.

@Test
public void testTimechartWithReverse() {
  // Timechart adds ORDER BY @timestamp ASC at the end
  // Reverse should flip it to DESC
  String ppl = "source=events | timechart count() | reverse";
  RelNode root = getRelNode(ppl);
  // Reverse replaces the timechart's ASC sort in-place with DESC
  String expectedLogical =
      "LogicalSort(sort0=[$0], dir0=[DESC])\n"
          + "  LogicalProject(@timestamp=[$0], count()=[$1])\n"
          + "    LogicalAggregate(group=[{0}], count()=[COUNT()])\n"
          + "      LogicalProject(@timestamp0=[SPAN($0, 1, 'm')])\n"
          + "        LogicalFilter(condition=[IS NOT NULL($0)])\n"
          + "          LogicalTableScan(table=[[scott, events]])\n";
  verifyLogical(root, expectedLogical);

  String expectedSparkSql =
      "SELECT SPAN(`@timestamp`, 1, 'm') `@timestamp`, COUNT(*) `count()`\n"
          + "FROM `scott`.`events`\n"
          + "WHERE `@timestamp` IS NOT NULL\n"
          + "GROUP BY SPAN(`@timestamp`, 1, 'm')\n"
          + "ORDER BY 1 DESC NULLS FIRST";
  verifyPPLToSparkSQL(root, expectedSparkSql);
}

@Test
public void testTimechartWithCustomTimefieldAndReverse() {
  // Timechart with custom timefield should also work with reverse
  // The sort is on created_at (the custom field), not @timestamp
  String ppl = "source=events | timechart timefield=created_at span=1month count() | reverse";
  RelNode root = getRelNode(ppl);

  // Reverse replaces the timechart's ASC sort in-place with DESC
  String expectedLogical =
      "LogicalSort(sort0=[$0], dir0=[DESC])\n"
          + "  LogicalProject(created_at=[$0], count()=[$1])\n"
          + "    LogicalAggregate(group=[{0}], count()=[COUNT()])\n"
          + "      LogicalProject(created_at0=[SPAN($1, 1, 'M')])\n"
          + "        LogicalFilter(condition=[IS NOT NULL($1)])\n"
          + "          LogicalTableScan(table=[[scott, events]])\n";
  verifyLogical(root, expectedLogical);

  String expectedSparkSql =
      "SELECT SPAN(`created_at`, 1, 'M') `created_at`, COUNT(*) `count()`\n"
          + "FROM `scott`.`events`\n"
          + "WHERE `created_at` IS NOT NULL\n"
          + "GROUP BY SPAN(`created_at`, 1, 'M')\n"
          + "ORDER BY 1 DESC NULLS FIRST";
  verifyPPLToSparkSQL(root, expectedSparkSql);
}
Add null/boundary/error coverage for the new reverse+timechart paths.
These new tests cover the happy path, but the test guidelines require null-input, boundary, and invalid-input coverage for new functionality. Please add at least one case (e.g., NULL timefield values and an invalid timefield) to satisfy those requirements. As per coding guidelines: "NULL input tests must be included for all new functions", "Include boundary condition tests (min/max values, empty inputs) for all new functions", and "Include error condition tests (invalid inputs, exceptions) for all new functions".
🤖 Prompt for AI Agents
In
`@ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLTimechartTest.java`
around lines 365 - 419, Add null/boundary/error tests in CalcitePPLTimechartTest
to cover reverse+timechart edge cases: add a test method (e.g.,
testTimechartWithReverse_NullTimefield) that runs "source=events | timechart
timefield=@timestamp span=1m count() | reverse" on data where `@timestamp` can be
NULL and verify the logical plan and SQL via getRelNode, verifyLogical and
verifyPPLToSparkSQL include proper IS NULL handling or expected ordering; add a
boundary/empty-input test (e.g., testTimechartWithReverse_EmptySource) that runs
the pipeline on an empty source and asserts the plan still generates a valid
aggregate/sort; and add an invalid-input test (e.g.,
testTimechartWithReverse_InvalidTimefield) that calls getRelNode with a
nonexistent timefield and asserts the parser/optimizer throws the expected
exception using assertThrows (or the project's error-assertion helper). Ensure
each new test references the same helper methods (getRelNode, verifyLogical,
verifyPPLToSparkSQL) and asserts the specific failure message or null-handling
behavior.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java`:
- Around line 181-192: In testReverseIgnoredWithoutSortOrTimestamp, remove the
brittle order assertion by replacing the call to verifyDataRowsInOrder(result,
rows(1), rows(6), rows(13)) with an order-independent check: fetch the result
rows from executeQuery (same TEST_INDEX_BANK query) and assert that the set of
returned account_number values equals the expected set {1,6,13} (or compare the
result to a baseline executeQuery without the reverse pipe), using the test
helper that asserts unordered equality (or implement a small helper to compare
collections ignoring order) so the test no longer depends on natural execution
order; keep verifySchema as-is.
🧹 Nitpick comments (2)
ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLReverseTest.java (1)
252-257: Clarify comment: SAL is available during sorting, removed only in final output.
The comment states "The sort field (SAL) is removed from schema by fields command" which could be misleading. Looking at the logical plan, the LogicalProject(ENAME=[$1]) sits on top of both sorts, meaning SAL is fully available when sorting occurs. The projection only removes SAL from the final output, not from the sort's scope.
Consider rewording to clarify the behavior:
  @Test
  public void testSortFieldsReverse() {
-   // Test backtracking: sort on SAL, then project only ENAME, then reverse
-   // The sort field (SAL) is removed from schema by fields command
-   // But reverse should still work by backtracking to find the sort
+   // Test backtracking: sort on SAL, then project only ENAME, then reverse
+   // The sort happens before projection, so SAL is available for sorting
+   // Reverse backtracks through the project to find and reverse the sort
    String ppl = "source=EMP | sort SAL | fields ENAME | reverse";
core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java (1)
783-845: Method exceeds 50-line guideline; consider extracting helper methods.
The implementation is correct and properly handles all edge cases including the metadata propagation issue:
- Lines 800-803 check existingSort.getCollation() (the node's own collation) rather than relying solely on metadata
- The fetch == null && offset == null check ensures fetch-only Sorts go to the else branch, correctly adding the reversed sort on top
However, at ~62 lines this method exceeds the 50-line guideline for this file. Consider extracting the main branches into focused helpers:
♻️ Suggested structure
private void handleReverseSortInPlace(Sort existingSort, CalcitePlanContext context) {
  // Lines 804-811: Pure collation sort replacement
}

private void handleReverseViaBacktracking(RelNode currentNode, CalcitePlanContext context) {
  // Lines 820-842: Backtracking + @timestamp fallback
}
@Test
public void testReverseIgnoredWithoutSortOrTimestamp() throws IOException {
  // Test that reverse is ignored when there's no explicit sort and no @timestamp field
  // BANK index doesn't have @timestamp, so reverse should be ignored
  JSONObject result =
      executeQuery(
          String.format("source=%s | fields account_number | reverse | head 3", TEST_INDEX_BANK));
  verifySchema(result, schema("account_number", "bigint"));
  // Without sort or @timestamp, reverse is ignored, so data comes in natural order
  // The first 3 documents in natural order (ascending by account_number)
  verifyDataRowsInOrder(result, rows(1), rows(6), rows(13));
}
Avoid asserting natural order for the no-sort case.
Line 189–191 assumes a deterministic “natural order” without an explicit sort, which can be unstable and make the test flaky. Prefer an order-independent assertion (or compare to a baseline query without reverse) so the test doesn’t rely on execution order.
[suggested fix]
🔧 One option: remove head and assert unordered rows
- String.format("source=%s | fields account_number | reverse | head 3", TEST_INDEX_BANK));
+ String.format("source=%s | fields account_number | reverse", TEST_INDEX_BANK));
...
- // Without sort or `@timestamp`, reverse is ignored, so data comes in natural order
- // The first 3 documents in natural order (ascending by account_number)
- verifyDataRowsInOrder(result, rows(1), rows(6), rows(13));
+ // Without sort or `@timestamp`, reverse is ignored; assert content without order.
+ verifyDataRows(result, rows(1), rows(6), rows(13), rows(18), rows(20), rows(25), rows(32));
As per coding guidelines, "Tests must not rely on execution order; ensure test independence".
🤖 Prompt for AI Agents
In
`@integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java`
around lines 181 - 192, In testReverseIgnoredWithoutSortOrTimestamp, remove the
brittle order assertion by replacing the call to verifyDataRowsInOrder(result,
rows(1), rows(6), rows(13)) with an order-independent check: fetch the result
rows from executeQuery (same TEST_INDEX_BANK query) and assert that the set of
returned account_number values equals the expected set {1,6,13} (or compare the
result to a baseline executeQuery without the reverse pipe), using the test
helper that asserts unordered equality (or implement a small helper to compare
collections ignoring order) so the test no longer depends on natural execution
order; keep verifySchema as-is.
Description
Originally from #4056 by @selsong
This PR implements a significant performance optimization for the `reverse` command by eliminating the expensive ROW_NUMBER() window function and implementing a three-tier logic based on query context.
Resolves #3924
Resolves #5125
Motivation
The previous implementation used ROW_NUMBER() window function which:
Solution: Three-Tier Reverse Logic
The `reverse` command now follows context-aware behavior:
Key Implementation Details
When `reverse` is applied directly on top of a sort, the sort node is replaced in-place using `PlanUtils.replaceTop()` rather than stacking a new sort. This avoids a Calcite optimizer issue where consecutive `LogicalSort` nodes get merged, discarding the reversed direction (Reverse Optimization fails when creating consecutive sorts due to Calcite physical optimizer merging #5125).
When `reverse` is separated from the sort by non-blocking operators (`where`, `eval`, `fields`), the implementation walks the RelNode tree to find the upstream sort and inserts a reversed sort via `RelHomogeneousShuttle`.
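A condensed sketch of the decision flow described above (tier checks and helper names follow the PR description; exact signatures, including `replaceTop`, are assumptions for illustration):

```java
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.Sort;
import org.apache.calcite.rel.logical.LogicalSort;
import org.apache.calcite.tools.RelBuilder;
import org.opensearch.sql.calcite.utils.PlanUtils;

final class VisitReverseSketch {
  // Tier 1: a reversible collation sits directly on top -> flip it in place.
  // Tier 2: no reversible sort, but the index has @timestamp -> sort by @timestamp DESC.
  // Tier 3: neither -> reverse is a no-op.
  static void visitReverse(RelBuilder relBuilder, boolean hasTimestampField) {
    RelNode top = relBuilder.peek();
    if (top instanceof Sort && ((Sort) top).fetch == null && ((Sort) top).offset == null) {
      Sort sort = (Sort) top;
      RelNode reversed =
          LogicalSort.create(
              sort.getInput(), PlanUtils.reverseCollation(sort.getCollation()), null, null);
      // Replace the top of the builder stack rather than stacking a second LogicalSort,
      // which Calcite's optimizer would otherwise merge away (issue #5125).
      PlanUtils.replaceTop(relBuilder, reversed); // assumed signature
      return;
    }
    if (hasTimestampField) {
      relBuilder.sort(relBuilder.desc(relBuilder.field("@timestamp")));
      return;
    }
    // No reversible collation and no @timestamp: leave the plan unchanged (no-op).
  }
}
```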
1. Reverse with Explicit Sort (Primary Use Case)
Query:
Behavior: Flips all sort directions: `+balance, -firstname` → `-balance, +firstname`
Logical Plan:
Physical Plan: (efficiently pushes reversed sort to OpenSearch)
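For illustration, a query of the shape this example describes (the accounts index and field names are assumed from the surrounding examples, not quoted from the PR):

```ppl
source=accounts
| sort + balance, - firstname
| reverse
```

With the optimization, the planner flips the existing collation to `-balance, +firstname` and pushes the reversed sort down, instead of materializing all rows behind a ROW_NUMBER() window.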
2. Reverse with @timestamp (Time-Series Optimization)
Query:
Behavior: When no explicit sort exists but the index has an @timestamp field, reverse automatically sorts by @timestamp DESC to show most recent events first.
Use Case: Common pattern in log analysis - users want recent logs first
Logical Plan:
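An illustrative query for this case, assuming an index that carries an `@timestamp` field (index name hypothetical):

```ppl
source=events
| reverse
| head 5
```

With no explicit sort, `reverse` behaves like an implicit descending sort on `@timestamp`, so the most recent events are returned first.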
3. Reverse Ignored (No-Op Case)
Query:
Behavior: When there's no explicit sort AND no @timestamp field, reverse is ignored. Results appear in natural index order.
Rationale: Avoid expensive operations when reverse has no meaningful semantic interpretation.
Logical Plan:
Note: No sort node is added - reverse is completely ignored.
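This matches the integration test quoted earlier in the thread; for example (index name illustrative):

```ppl
source=accounts
| fields account_number
| reverse
| head 3
```

With no sort to flip and no `@timestamp` to fall back on, no sort node is added and the pipeline behaves as if `reverse` were not there.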
4. Double Reverse (Cancellation)
Query:
Behavior: Two reverses cancel each other out, returning to original sort order.
Logical Plan:
Final sort order matches original query: `+balance, -firstname`
5. Backtracking Through Non-Blocking Operators
Query:
Behavior: Reverse walks past the `where` (filter) node to find and reverse the upstream sort.
Logical Plan:
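An illustrative pipeline for the backtracking case (field names assumed); the `where` and `fields` steps are non-blocking, so `reverse` walks past them to the sort:

```ppl
source=accounts
| sort balance
| where age > 30
| fields account_number, balance, age
| reverse
```

The reversed sort is inserted where the original sort sits, so the filter and projection still operate on the same rows.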
6. Reverse After Aggregation (Blocking Operator)
Query:
Behavior: Aggregation is a blocking operator. The sort after aggregation is visible to reverse and gets flipped.
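A pipeline of the shape described here, for illustration (aggregation alias and fields assumed):

```ppl
source=accounts
| stats count() as cnt by gender
| sort cnt
| reverse
```

Because `stats` is blocking, only the sort that follows it is visible to `reverse`, and that sort's direction is flipped.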
7. Timechart with Reverse
Query:
Behavior: Timechart produces a time-bucketed aggregation with an implicit sort. Reverse flips the time order from ASC to DESC.
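This is the same shape exercised by the CalcitePPLTimechartTest cases quoted earlier in this review:

```ppl
source=events
| timechart count()
| reverse
```

Timechart ends its plan with an ascending sort on the time bucket, so `reverse` finds that collation and flips it to descending.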
Related Issues
Resolves #3924
Resolves #5125
Check List
--signoff.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.