
[FEA] View Type PQ Preprocessor#1764

Open
tarang-jain wants to merge 56 commits into rapidsai:main from tarang-jain:view-pq-quantizer

Conversation

@tarang-jain
Contributor

@tarang-jain tarang-jain commented Feb 3, 2026

Separate VPQ codebooks from encoded data by introducing vpq_codebooks<MathT> (PIMPL with owning/view variants). vpq_dataset is now a simple {codebooks, data} struct. quantizer<T> holds only vpq_codebooks since it doesn't need encoded data. Adds a view-type pq::build() overload for pre-computed external codebooks.
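
For reviewers skimming the diff, here is a simplified, dependency-free sketch of that split. It is illustrative only: the real types wrap RAFT device matrices and mdspan views, and apart from the names taken from this PR (vpq_codebooks, vpq_dataset, quantizer, and the pq_code_book()/vq_code_book() accessors used elsewhere in the thread) every identifier below is an assumption.

```cpp
// Illustrative sketch only -- not the cuvs implementation. std::vector and raw
// pointers stand in for the RAFT device matrices the real types use.
#include <cstdint>
#include <memory>
#include <vector>

template <typename MathT>
struct codebooks_iface {  // hypothetical PIMPL interface
  virtual ~codebooks_iface() = default;
  virtual const MathT* pq_code_book() const = 0;
  virtual const MathT* vq_code_book() const = 0;  // nullptr when PQ-only
};

template <typename MathT>
struct owning_codebooks : codebooks_iface<MathT> {  // owns the centers
  std::vector<MathT> pq, vq;
  const MathT* pq_code_book() const override { return pq.data(); }
  const MathT* vq_code_book() const override { return vq.empty() ? nullptr : vq.data(); }
};

template <typename MathT>
struct view_codebooks : codebooks_iface<MathT> {  // references external centers
  const MathT* pq = nullptr;
  const MathT* vq = nullptr;
  const MathT* pq_code_book() const override { return pq; }
  const MathT* vq_code_book() const override { return vq; }
};

// Value-semantic wrapper corresponding to vpq_codebooks<MathT> above.
template <typename MathT>
struct vpq_codebooks {
  std::shared_ptr<codebooks_iface<MathT>> impl;
  const MathT* pq_code_book() const { return impl->pq_code_book(); }
  const MathT* vq_code_book() const { return impl->vq_code_book(); }
};

// vpq_dataset reduces to {codebooks, encoded data}; the quantizer keeps only
// the codebooks because it never touches the encoded vectors.
template <typename MathT>
struct vpq_dataset {
  vpq_codebooks<MathT> codebooks;
  std::vector<uint8_t> data;
};
```

The point of the PIMPL split is that a quantizer can be constructed around either variant without copying codebooks.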

@copy-pr-bot

copy-pr-bot Bot commented Feb 3, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@tarang-jain tarang-jain self-assigned this Feb 3, 2026
@tarang-jain tarang-jain added the feature request (New feature or request) and non-breaking (Introduces a non-breaking change) labels Feb 3, 2026
@cjnolet cjnolet moved this to In Progress in Unstructured Data Processing Feb 6, 2026
@tarang-jain tarang-jain changed the base branch from main to release/26.04 March 13, 2026 23:21
@tarang-jain tarang-jain marked this pull request as ready for review March 13, 2026 23:28
@tarang-jain tarang-jain requested review from a team as code owners March 13, 2026 23:28
Comment thread cpp/include/cuvs/preprocessing/quantize/pq.hpp Outdated
Comment thread cpp/src/preprocessing/quantize/detail/pq.cuh Outdated
Comment thread cpp/src/preprocessing/quantize/detail/pq.cuh Outdated
Comment thread cpp/include/cuvs/neighbors/common.hpp Outdated
Comment thread cpp/src/neighbors/vpq_dataset_impl.hpp Outdated
@tarang-jain tarang-jain requested review from lowener and tfeher April 6, 2026 22:12
Contributor

@lowener lowener left a comment


Looks very good, thanks! Just have a remark on build()

Comment thread cpp/tests/neighbors/ann_scann.cuh Outdated
Comment thread cpp/src/neighbors/detail/vamana/vamana_build.cuh Outdated
Comment thread cpp/src/preprocessing/quantize/detail/vpq_dataset_impl.hpp Outdated
@coderabbitai

coderabbitai Bot commented Apr 30, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Replaces legacy VPQ dataset usage with a new VPQ codebooks API under cuvs::preprocessing::quantize::pq, adds a view-based quantizer build overload accepting device-matrix views (PQ ± optional VQ), and updates call sites, serialization, tests, and the C API to use method-based codebook accessors.
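
A hedged usage sketch of that view-based overload, pieced together from this summary and the call shapes quoted later in the review (build(handle, params, pq_centers_view, vq_centers_view) plus the method-based accessors); the exact parameter types, layouts, and optional handling are assumptions rather than the verified cuvs signature.

```cpp
#include <cstdint>
#include <optional>

#include <raft/core/device_mdspan.hpp>
#include <raft/core/resources.hpp>

#include <cuvs/preprocessing/quantize/pq.hpp>  // public header touched by this PR

namespace pq = cuvs::preprocessing::quantize::pq;

// Wrap pre-computed external codebooks in a non-owning quantizer, then read
// the codebooks back through the method-based accessors.
template <typename T>
void use_view_quantizer(raft::resources const& handle,
                        pq::params const& ps,
                        raft::device_matrix_view<const T, uint32_t> pq_centers,
                        std::optional<raft::device_matrix_view<const T, uint32_t>> vq_centers)
{
  auto view_quant = pq::build(handle, ps, pq_centers, vq_centers);

  auto pq_view = view_quant.codebooks.pq_code_book();  // always present
  auto vq_view = view_quant.codebooks.vq_code_book();  // std::nullopt when PQ-only
  (void)pq_view;
  (void)vq_view;
}
```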

Changes

  • Core VPQ API & types (cpp/include/cuvs/neighbors/common.hpp, cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp): Removes legacy vpq_dataset from neighbors/common.hpp; introduces vpq_codebooks_iface, vpq_codebooks, and a new vpq_dataset under cuvs::preprocessing::quantize::pq with metadata helpers and type traits.
  • Quantizer API & storage (cpp/include/cuvs/preprocessing/quantize/pq.hpp, cpp/src/preprocessing/quantize/pq.cu, cpp/src/preprocessing/quantize/detail/pq.cuh): Quantizer now stores codebooks (vpq_codebooks) instead of a dataset; adds a build overload that constructs a view-type quantizer from PQ centers and optional VQ centers, and updates transform/inverse paths to use codebooks.
  • VPQ codebook implementations (cpp/src/preprocessing/quantize/detail/vpq_dataset_impl.hpp, cpp/src/preprocessing/quantize/detail/pq.cuh): Adds owning/view implementations (vpq_codebooks_owning, vpq_codebooks_view) that expose a PQ view and optional VQ view; updates conversion and build helpers to operate on these containers.
  • Serialization & dataset dispatch (cpp/src/neighbors/detail/dataset_serialize.hpp, cpp/src/neighbors/detail/vpq_dataset.cuh): Switches serialization/deserialization and runtime dispatcher checks to fully-qualified cuvs::preprocessing::quantize::pq::vpq_dataset; serializes optional VQ properly and constructs datasets via vpq_codebooks.
  • Call sites & algorithm integration (cpp/src/neighbors/detail/cagra/..., cpp/src/neighbors/detail/vamana/vamana_build.cuh, cpp/src/neighbors/scann/detail/scann_build.cuh, cpp/src/neighbors/detail/vpq_dataset.cuh, cpp/src/neighbors/detail/cagra/cagra_search.cuh, cpp/src/neighbors/detail/cagra/compute_distance_vpq.hpp, cpp/src/neighbors/detail/cagra/factory.cuh): Updates includes, dynamic_casts/type aliases, trait checks, and codebook accessors to use fully-qualified vpq_dataset and .pq_code_book() / .vq_code_book() accessors; adapts key/cache computations and kernel argument selection to new handles/optionals.
  • Benchmarks & wrappers (cpp/bench/ann/src/cuvs/cuvs_cagra_wrapper.h, cpp/bench/ann/src/diskann/diskann_benchmark.cpp): Adjusts VPQ dataset type checks to the new cuvs::preprocessing::quantize::pq::vpq_dataset types and fixes JSON parsing to set param.alpha when "alpha" is present.
  • Tests (cpp/tests/neighbors/ann_scann.cuh, cpp/tests/preprocessing/product_quantization.cu): Updates tests to use the new pq::build API; adds a ViewQuantizer test validating view-based quantizer construction and equivalence with the owning quantizer; adjusts label/view handling.
  • C API fixes (c/src/preprocessing/quantize/pq.cpp): C API functions now obtain the PQ codebook via quantizer->codebooks.pq_code_book() and handle optional VQ via quantizer->codebooks.vq_code_book() before converting to DLPack, replacing prior direct .view() access.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

C++

Suggested reviewers

  • viclafargue
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage ⚠️ Warning: Docstring coverage is 5.41%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
  • Title check ✅ Passed: The title '[FEA] View Type PQ Preprocessor' clearly and specifically describes the main feature: introducing a view-type PQ (Product Quantizer) preprocessor, which aligns with the primary changeset of separating VPQ codebooks from encoded data and adding a view-type pq::build() overload.
  • Description check ✅ Passed: The description is directly related to the changeset, explaining the key architectural changes: introducing vpq_codebooks with PIMPL variants, restructuring vpq_dataset, and updating quantizer to hold only codebooks.
  • Linked Issues check ✅ Passed: Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ Passed: Check skipped because no linked issues were found for this pull request.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/include/cuvs/preprocessing/quantize/pq.hpp`:
- Around line 89-94: The public struct quantizer<T> changed its field name to
codebooks and needs a compat shim: add a deprecated alias/forwarding member for
the old field name (e.g., the previous "codebook" identifier) that forwards to
the new codebooks member, mark it with your deprecation attribute/ macro (or C++
[[deprecated]]), and keep params_quantizer and codebooks unchanged; update
public header comments and add a short note to the migration guide mentioning
the rename and removal timeline so consumers can switch to quantizer::codebooks
before the deprecated alias is removed.
- Around line 81-84: Update the Doxygen on the quantizer declaration to describe
vpq_codebooks (not vpq_dataset): explain that the quantizer holds a
vpq_codebooks instance and whether it owns the codebooks or holds a non-owning
reference, and clarify ownership semantics for public API consumers. Edit the
comment above the quantizer class/struct (symbol: quantizer) to mention
vpq_codebooks, state if ownership is owning vs non-owning (referencing external
codebooks), and ensure the wording follows existing Doxygen style used elsewhere
in the header.

In `@cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp`:
- Line 80: vpq_codebooks and vpq_dataset allow default construction leaving
impl_ == nullptr while accessors (e.g., dim(), codebook_count(), codebook(...),
all methods that dereference impl_) assume impl_ is valid; fix by ensuring
default construction yields a valid, empty impl_ (e.g., initialize impl_ with
std::make_shared<impl_type>() in the default ctor of vpq_codebooks and
vpq_dataset) or alternatively make the default ctors deleted and force
construction with proper parameters, and add defensive checks in accessors (in
vpq_codebooks methods and vpq_dataset::dim()) to return safe defaults (0 or
empty) or throw a clear exception if impl_ is null; update the vpq_codebooks()
default ctor and any methods that dereference impl_ accordingly.

In `@cpp/src/preprocessing/quantize/detail/pq.cuh`:
- Around line 191-203: In build_view() ensure full VQ shape and consistency:
when params.use_vq is true, validate vq_centers has both
extent(0)==params.vq_n_centers and extent(1) equals the expected vector
dimension implied by pq_centers (or params.pq_dim), and that the pointer is
non-null; also reject any non-empty vq_centers when params.use_vq is false to
avoid inconsistent state. Update the RAFT_EXPECTS checks around vq_centers in
build_view() (and where vpq_codebooks / vpq_codebooks_view are constructed) to
include extent(1) validation and the params.use_vq vs vq_centers.has_value()
mismatch check so transform()/inverse_transform() cannot index out of bounds at
runtime.
- Around line 356-370: Add an explicit preflight check when using VQ: if
quantizer.params_quantizer.use_vq is true then validate that vq_labels is
provided and has the correct number of rows matching codes (e.g.,
RAFT_EXPECTS(vq_labels.extent(0) == codes.extent(0), "When use_vq is true
vq_labels must have the same number of rows as codes") and optionally validate
label column/shape), before calling reconstruct_vectors in inverse_transform();
this ensures we fail fast instead of silently reconstructing with label 0.
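
A standalone sketch of the kind of preflight the two prompts above ask for, written against RAFT views only; the helper name check_vq_inputs, the expected_dim parameter, and where exactly such a check would be called from are assumptions, not code from this PR.

```cpp
#include <cstdint>
#include <optional>

#include <raft/core/device_mdspan.hpp>
#include <raft/core/error.hpp>  // RAFT_EXPECTS

// Hypothetical helper illustrating the suggested checks; not part of the PR.
template <typename T>
void check_vq_inputs(bool use_vq,
                     uint32_t vq_n_centers,
                     uint32_t expected_dim,  // dimension implied by the PQ codebook
                     std::optional<raft::device_matrix_view<const T, uint32_t>> vq_centers)
{
  // Reject the use_vq / vq_centers mismatch in both directions.
  RAFT_EXPECTS(!use_vq || vq_centers.has_value(),
               "vq_centers must be provided when use_vq=true");
  RAFT_EXPECTS(use_vq || !vq_centers.has_value(),
               "vq_centers were passed but params.use_vq is false");
  if (use_vq) {
    RAFT_EXPECTS(vq_centers->data_handle() != nullptr,
                 "vq_centers data pointer must be non-null");
    RAFT_EXPECTS(vq_centers->extent(0) == vq_n_centers,
                 "vq_centers must have %u rows, got %u",
                 vq_n_centers,
                 static_cast<uint32_t>(vq_centers->extent(0)));
    RAFT_EXPECTS(vq_centers->extent(1) == expected_dim,
                 "vq_centers must have %u columns, got %u",
                 expected_dim,
                 static_cast<uint32_t>(vq_centers->extent(1)));
  }
}
```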

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 7faf642d-d034-4dcf-ae64-0f067a7e75ca

📥 Commits

Reviewing files that changed from the base of the PR and between c999208 and 7247688.

📒 Files selected for processing (19)
  • c/src/preprocessing/quantize/pq.cpp
  • cpp/bench/ann/src/cuvs/cuvs_cagra_wrapper.h
  • cpp/bench/ann/src/diskann/diskann_benchmark.cpp
  • cpp/include/cuvs/neighbors/common.hpp
  • cpp/include/cuvs/preprocessing/quantize/pq.hpp
  • cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp
  • cpp/src/neighbors/detail/cagra/cagra_search.cuh
  • cpp/src/neighbors/detail/cagra/compute_distance_vpq.hpp
  • cpp/src/neighbors/detail/cagra/factory.cuh
  • cpp/src/neighbors/detail/dataset_serialize.hpp
  • cpp/src/neighbors/detail/vamana/vamana_build.cuh
  • cpp/src/neighbors/detail/vpq_dataset.cuh
  • cpp/src/neighbors/scann/detail/scann_build.cuh
  • cpp/src/preprocessing/quantize/detail/pq.cuh
  • cpp/src/preprocessing/quantize/detail/vpq_dataset_impl.hpp
  • cpp/src/preprocessing/quantize/pq.cu
  • cpp/src/preprocessing/quantize/vpq_build-ext.cuh
  • cpp/tests/neighbors/ann_scann.cuh
  • cpp/tests/preprocessing/product_quantization.cu
💤 Files with no reviewable changes (1)
  • cpp/include/cuvs/neighbors/common.hpp

Comment on lines 81 to +84
* @brief Defines and stores VPQ codebooks upon training
*
* @tparam T data element type
* The quantizer holds a vpq_dataset, which can either own the codebooks
* or non-owning (referencing external codebooks).

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix the quantizer Doxygen to describe vpq_codebooks, not vpq_dataset.

This comment no longer matches the type below, so the generated public API docs now describe the wrong ownership model.

As per coding guidelines, public C++ API headers must include complete Doxygen documentation for public functions/classes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/include/cuvs/preprocessing/quantize/pq.hpp` around lines 81 - 84, Update
the Doxygen on the quantizer declaration to describe vpq_codebooks (not
vpq_dataset): explain that the quantizer holds a vpq_codebooks instance and
whether it owns the codebooks or holds a non-owning reference, and clarify
ownership semantics for public API consumers. Edit the comment above the
quantizer class/struct (symbol: quantizer) to mention vpq_codebooks, state if
ownership is owning vs non-owning (referencing external codebooks), and ensure
the wording follows existing Doxygen style used elsewhere in the header.

Comment on lines 89 to 94
 struct quantizer {
   /** Parameters used to build this quantizer. */
   params params_quantizer;
-  /** VPQ codebooks produced during training. */
-  cuvs::neighbors::vpq_dataset<T, int64_t> vpq_codebooks;
+  /** VPQ codebooks (owning or view). */
+  cuvs::preprocessing::quantize::pq::vpq_codebooks<T> codebooks;
 };

⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

This public member rename needs a deprecation or migration path.

quantizer<T> is part of the public header, so replacing the old codebook field with codebooks is a source-breaking change for existing callers. I don't see a compatibility shim or any deprecation marker here.

As per coding guidelines, breaking changes require deprecation warnings and migration guide updates.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/include/cuvs/preprocessing/quantize/pq.hpp` around lines 89 - 94, The
public struct quantizer<T> changed its field name to codebooks and needs a
compat shim: add a deprecated alias/forwarding member for the old field name
(e.g., the previous "codebook" identifier) that forwards to the new codebooks
member, mark it with your deprecation attribute/ macro (or C++ [[deprecated]]),
and keep params_quantizer and codebooks unchanged; update public header comments
and add a short note to the migration guide mentioning the rename and removal
timeline so consumers can switch to quantizer::codebooks before the deprecated
alias is removed.
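
A generic sketch of the shim pattern this comment asks for. Because the member's type changed as well (cuvs::neighbors::vpq_dataset<T, int64_t> to pq::vpq_codebooks<T>), a perfect drop-in alias is not possible; a [[deprecated]] forwarding accessor at least gives old callers a compiler diagnostic. Everything below is illustrative, not the PR's code.

```cpp
// Illustrative only: shows the [[deprecated]] forwarding pattern, with a
// placeholder type standing in for pq::vpq_codebooks<T>.
struct codebooks_placeholder {};

struct quantizer {
  codebooks_placeholder codebooks;  // new public member

  // Old spelling kept as a deprecated accessor; callers get a warning pointing
  // them at the replacement until the alias is removed.
  [[deprecated("use quantizer::codebooks instead")]]
  codebooks_placeholder& vpq_codebooks() { return codebooks; }
};
```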

Comment thread cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp
Comment thread cpp/src/preprocessing/quantize/detail/pq.cuh Outdated
Comment thread cpp/src/preprocessing/quantize/detail/pq.cuh

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/preprocessing/quantize/detail/pq.cuh (1)

441-487: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Critical: Duplicate function definition will cause linker errors.

make_pq_params_from_vpq is defined twice with identical signatures (lines 441-463 and lines 465-487). This will cause compilation/linker errors for "multiple definition" when this header is included.

🐛 Remove the duplicate definition
-inline auto make_pq_params_from_vpq(const cuvs::neighbors::vpq_params& in_params,
-                                    const uint64_t n_rows)
-  -> cuvs::preprocessing::quantize::pq::params
-{
-  const uint32_t pq_n_centers              = 1 << in_params.pq_bits;
-  uint32_t max_train_points_per_vq_cluster = in_params.max_train_points_per_vq_cluster;
-  if (in_params.vq_n_centers > 0) {
-    max_train_points_per_vq_cluster =
-      std::min<uint32_t>(max_train_points_per_vq_cluster,
-                         n_rows * in_params.vq_kmeans_trainset_fraction / in_params.vq_n_centers);
-  }
-  return cuvs::preprocessing::quantize::pq::params{
-    in_params.pq_bits,
-    in_params.pq_dim,
-    true,
-    true,
-    in_params.vq_n_centers,
-    in_params.kmeans_n_iters,
-    in_params.pq_kmeans_type,
-    std::min<uint32_t>(in_params.max_train_points_per_pq_code,
-                       n_rows * in_params.pq_kmeans_trainset_fraction / pq_n_centers),
-    max_train_points_per_vq_cluster};
-}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/preprocessing/quantize/detail/pq.cuh` around lines 441 - 487, The
function make_pq_params_from_vpq is defined twice (identical signatures) which
causes multiple-definition linker errors; remove the duplicate definition so
there is exactly one implementation of make_pq_params_from_vpq(const
cuvs::neighbors::vpq_params&, const uint64_t) in this header (delete one of the
two identical blocks and keep the other), ensuring any callers use that single
definition and no other duplicate remains.
🧹 Nitpick comments (1)
cpp/tests/preprocessing/product_quantization.cu (1)

243-302: ⚡ Quick win

Add test coverage for VQ-enabled view quantizer.

This test only exercises the PQ-only path (use_vq=false). The view-based quantizer should also be tested with use_vq=true to verify VQ codebook views work correctly, especially given the validation logic in build_view().

Additionally, consider adding a stream sync before the devArrMatch calls (lines 278, 298) to ensure the GPU operations complete before comparison.

💡 Suggested test extension
// After the PQ-only test, add a VQ-enabled test case:
if (params_.use_vq && static_cast<uint32_t>(n_samples_) >= params_.n_vq_centers) {
  params config_vq{params_.pq_bits, params_.pq_dim, true, true, params_.n_vq_centers};
  // ... similar test logic with vq_labels handling
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/tests/preprocessing/product_quantization.cu` around lines 243 - 302, The
testViewQuantizer currently only covers PQ-only path; add a VQ-enabled branch
that constructs a config_vq with use_vq=true (e.g., params
config_vq{params_.pq_bits, params_.pq_dim, true, true, params_.n_vq_centers})
guarded by the same preconditions (static_cast<uint32_t>(n_samples_) >=
params_.n_vq_centers) and call build(...) to create an owning_quant_vq and then
a view_quant_vq from its pq_code_book() and vq_code_book(); mirror the existing
transform/inverse_transform assertions but include any required vq_labels
handling when creating/using the VQ codebook views. Also insert a stream
synchronization (cudaStreamSynchronize(handle.get_stream()) or equivalent)
before each cuvs::devArrMatch call (both for codes and reconstructed arrays) to
ensure GPU work is complete before comparison.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp`:
- Around line 40-44: The dim() method currently returns 0 when vq_code_book() is
std::nullopt, which breaks PQ-only quantizers; update dim() (in this class) to
return the original vector dimension by computing it from the PQ codebook when
VQ is absent (e.g., return pq_len * pq_dim derived from the PQ codebook shape)
or alternatively read a stored explicit dimension field if present, and ensure
vpq_dataset::dim() will then reflect the correct value for PQ-only datasets;
reference vq_code_book(), pq_len, pq_dim and vpq_dataset::dim() when making the
change.

---

Outside diff comments:
In `@cpp/src/preprocessing/quantize/detail/pq.cuh`:
- Around line 441-487: The function make_pq_params_from_vpq is defined twice
(identical signatures) which causes multiple-definition linker errors; remove
the duplicate definition so there is exactly one implementation of
make_pq_params_from_vpq(const cuvs::neighbors::vpq_params&, const uint64_t) in
this header (delete one of the two identical blocks and keep the other),
ensuring any callers use that single definition and no other duplicate remains.

---

Nitpick comments:
In `@cpp/tests/preprocessing/product_quantization.cu`:
- Around line 243-302: The testViewQuantizer currently only covers PQ-only path;
add a VQ-enabled branch that constructs a config_vq with use_vq=true (e.g.,
params config_vq{params_.pq_bits, params_.pq_dim, true, true,
params_.n_vq_centers}) guarded by the same preconditions
(static_cast<uint32_t>(n_samples_) >= params_.n_vq_centers) and call build(...)
to create an owning_quant_vq and then a view_quant_vq from its pq_code_book()
and vq_code_book(); mirror the existing transform/inverse_transform assertions
but include any required vq_labels handling when creating/using the VQ codebook
views. Also insert a stream synchronization
(cudaStreamSynchronize(handle.get_stream()) or equivalent) before each
cuvs::devArrMatch call (both for codes and reconstructed arrays) to ensure GPU
work is complete before comparison.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: b8b631db-9abe-47f2-bbc5-bc68bef80356

📥 Commits

Reviewing files that changed from the base of the PR and between 7247688 and 12cd162.

📒 Files selected for processing (10)
  • c/src/preprocessing/quantize/pq.cpp
  • cpp/include/cuvs/preprocessing/quantize/pq.hpp
  • cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp
  • cpp/src/neighbors/detail/cagra/compute_distance_vpq.hpp
  • cpp/src/neighbors/detail/dataset_serialize.hpp
  • cpp/src/neighbors/detail/vpq_dataset.cuh
  • cpp/src/neighbors/scann/detail/scann_build.cuh
  • cpp/src/preprocessing/quantize/detail/pq.cuh
  • cpp/src/preprocessing/quantize/detail/vpq_dataset_impl.hpp
  • cpp/tests/preprocessing/product_quantization.cu
✅ Files skipped from review due to trivial changes (3)
  • cpp/src/neighbors/scann/detail/scann_build.cuh
  • cpp/src/neighbors/detail/vpq_dataset.cuh
  • cpp/src/preprocessing/quantize/detail/vpq_dataset_impl.hpp
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/src/neighbors/detail/cagra/compute_distance_vpq.hpp

Comment on lines +40 to +44
[[nodiscard]] virtual auto dim() const noexcept -> uint32_t
{
  auto vq = vq_code_book();
  return vq.has_value() ? vq->extent(1) : 0;
}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

dim() returns 0 when VQ is absent, breaking PQ-only quantizers.

When vq_code_book() returns std::nullopt (PQ-only mode), dim() returns 0. However, the original vector dimension is still needed for PQ-only operations and can be derived from pq_len * pq_dim. This will cause vpq_dataset::dim() (line 166) to incorrectly report 0 for PQ-only datasets.

Consider storing dim explicitly or computing it from PQ codebook shape when VQ is absent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp` around lines 40 -
44, The dim() method currently returns 0 when vq_code_book() is std::nullopt,
which breaks PQ-only quantizers; update dim() (in this class) to return the
original vector dimension by computing it from the PQ codebook when VQ is absent
(e.g., return pq_len * pq_dim derived from the PQ codebook shape) or
alternatively read a stored explicit dimension field if present, and ensure
vpq_dataset::dim() will then reflect the correct value for PQ-only datasets;
reference vq_code_book(), pq_len, pq_dim and vpq_dataset::dim() when making the
change.
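
A sketch of the override with the suggested fallback, mirroring the snippet quoted above; the pq_dim() and pq_len() accessors are assumptions about what metadata the interface exposes (a stored explicit dimension field would work just as well).

```cpp
[[nodiscard]] virtual auto dim() const noexcept -> uint32_t
{
  auto vq = vq_code_book();
  if (vq.has_value()) { return vq->extent(1); }
  // PQ-only: each of the pq_dim sub-spaces spans pq_len original components,
  // so the full vector dimension is their product.
  return pq_dim() * pq_len();
}
```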


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/preprocessing/quantize/detail/pq.cuh (1)

505-527: ⚠️ Potential issue | 🔴 Critical

Remove the duplicated make_pq_params_from_vpq() definition at lines 505–527.

The function at lines 481–503 and lines 505–527 are identical. This redefinition will cause a compilation error. Keep one definition and remove the duplicate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/preprocessing/quantize/detail/pq.cuh` around lines 505 - 527, There
are two identical definitions of make_pq_params_from_vpq(const
cuvs::neighbors::vpq_params& in_params, const uint64_t n_rows) returning
cuvs::preprocessing::quantize::pq::params; remove the duplicate definition and
keep a single implementation (the one that computes pq_n_centers, adjusts
max_train_points_per_vq_cluster, and returns the pq::params initialized from
in_params and computed values) so the symbol is defined only once.
♻️ Duplicate comments (3)
cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp (2)

41-45: ⚠️ Potential issue | 🟠 Major

dim() still reports invalid metadata for PQ-only codebooks.

When vq_code_book() is absent, dim() returns 0, which then makes pq_dim() and vpq_dataset::dim() wrong for PQ-only quantizers even though this PR explicitly supports optional VQ. Logging the condition does not prevent downstream mis-sizing.

Also applies to: 69-73, 121-129, 175-175

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp` around lines 41 -
45, The dim() implementation returns 0 when vq_code_book() is absent, which
breaks PQ-only quantizers; change dim() to return vq->extent(1) when
vq_code_book() exists, otherwise return pq_dim() so PQ-only datasets size
correctly; apply the same fix to the other occurrences noted (the analogous
methods at the other ranges) and ensure vpq_dataset::dim() delegates to the
corrected dim() or uses the same vq-present ? vq->extent(1) : pq_dim() logic;
update references to vq_code_book(), pq_dim(), and vpq_dataset::dim()
accordingly.

94-128: ⚠️ Potential issue | 🟠 Major

Delete or initialize the default constructors.

vpq_codebooks() still leaves impl_ == nullptr, and every accessor dereferences it immediately. Because vpq_dataset() default-constructs codebooks, vpq_dataset::dim() can still crash on a default-constructed object.

Also applies to: 160-175

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp` around lines 94 -
128, Remove the unsafe default constructor for vpq_codebooks (delete
vpq_codebooks()) so impl_ cannot be left nullptr, and update callers (e.g.,
vpq_dataset which default-constructs codebooks) to supply a valid implementation
via the explicit vpq_codebooks(std::unique_ptr<...>) ctor; alternatively, if a
default-constructible sentinel is preferred, initialize impl_ to a concrete
"null-object" implementation that safely implements vq_code_book(),
pq_code_book(), and dim() (returning empty optional/zero) so methods
vq_code_book(), pq_code_book(), and dim() and callers like vpq_dataset::dim() no
longer dereference a null impl_.
cpp/src/preprocessing/quantize/detail/pq.cuh (1)

231-245: ⚠️ Potential issue | 🟠 Major

Also validate vq_centers.extent(1) in build_view().

Only the VQ row count is checked here. If the VQ centers have the wrong column count, transform() / inverse_transform() compute col from the PQ layout and then read vq_centers(vq_label, col) out of bounds.

As per coding guidelines, input validation must check for negative or invalid dimensions, null pointers, and invalid parameter combinations before GPU operations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/preprocessing/quantize/detail/pq.cuh` around lines 231 - 245, The
code checks only vq_centers.extent(0) but not the column count, so add a
RAFT_EXPECTS in build_view() to validate vq_centers.value().extent(1) matches
the PQ center column count and is >0 (e.g.,
RAFT_EXPECTS(vq_centers.value().extent(1) == pq_centers.extent(1) &&
vq_centers.value().extent(1) > 0, "..."), so that vpq_codebooks construction
(vpq_codebooks, vpq_codebooks_view, transform, inverse_transform) cannot read
out-of-bounds; keep the existing null-pointer and vq_n_centers checks and use
pq_centers.extent(1) (or the appropriate params dim) as the reference for the
expected column count.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/preprocessing/quantize/detail/pq.cuh`:
- Around line 210-229: The RAFT precondition only validates pq_centers.extent(0)
but misses checks for an empty second dimension and null data — update the
validation in the pq quantizer setup (the block that checks params.pq_bits,
params.pq_dim, use_subspaces and pq_centers extents) to also require
pq_centers.extent(1) > 0 and that pq_centers.data_handle() (or equivalent device
pointer accessor) is non-null/valid; keep the existing conditional shapes for
use_subspaces true/false and add these extra guards so transform(),
inverse_transform(), and reconstruct_vectors_kernel() cannot be invoked with an
empty or null pq_centers view.

---

Outside diff comments:
In `@cpp/src/preprocessing/quantize/detail/pq.cuh`:
- Around line 505-527: There are two identical definitions of
make_pq_params_from_vpq(const cuvs::neighbors::vpq_params& in_params, const
uint64_t n_rows) returning cuvs::preprocessing::quantize::pq::params; remove the
duplicate definition and keep a single implementation (the one that computes
pq_n_centers, adjusts max_train_points_per_vq_cluster, and returns the
pq::params initialized from in_params and computed values) so the symbol is
defined only once.

---

Duplicate comments:
In `@cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp`:
- Around line 41-45: The dim() implementation returns 0 when vq_code_book() is
absent, which breaks PQ-only quantizers; change dim() to return vq->extent(1)
when vq_code_book() exists, otherwise return pq_dim() so PQ-only datasets size
correctly; apply the same fix to the other occurrences noted (the analogous
methods at the other ranges) and ensure vpq_dataset::dim() delegates to the
corrected dim() or uses the same vq-present ? vq->extent(1) : pq_dim() logic;
update references to vq_code_book(), pq_dim(), and vpq_dataset::dim()
accordingly.
- Around line 94-128: Remove the unsafe default constructor for vpq_codebooks
(delete vpq_codebooks()) so impl_ cannot be left nullptr, and update callers
(e.g., vpq_dataset which default-constructs codebooks) to supply a valid
implementation via the explicit vpq_codebooks(std::unique_ptr<...>) ctor;
alternatively, if a default-constructible sentinel is preferred, initialize
impl_ to a concrete "null-object" implementation that safely implements
vq_code_book(), pq_code_book(), and dim() (returning empty optional/zero) so
methods vq_code_book(), pq_code_book(), and dim() and callers like
vpq_dataset::dim() no longer dereference a null impl_.

In `@cpp/src/preprocessing/quantize/detail/pq.cuh`:
- Around line 231-245: The code checks only vq_centers.extent(0) but not the
column count, so add a RAFT_EXPECTS in build_view() to validate
vq_centers.value().extent(1) matches the PQ center column count and is >0 (e.g.,
RAFT_EXPECTS(vq_centers.value().extent(1) == pq_centers.extent(1) &&
vq_centers.value().extent(1) > 0, "..."), so that vpq_codebooks construction
(vpq_codebooks, vpq_codebooks_view, transform, inverse_transform) cannot read
out-of-bounds; keep the existing null-pointer and vq_n_centers checks and use
pq_centers.extent(1) (or the appropriate params dim) as the reference for the
expected column count.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 15c75d87-6084-486c-a9f2-a0744e40a25b

📥 Commits

Reviewing files that changed from the base of the PR and between 12cd162 and 075c1ba.

📒 Files selected for processing (4)
  • cpp/include/cuvs/preprocessing/quantize/vpq_dataset.hpp
  • cpp/src/neighbors/detail/vpq_dataset.cuh
  • cpp/src/preprocessing/quantize/detail/pq.cuh
  • cpp/src/preprocessing/quantize/detail/vpq_dataset_impl.hpp
✅ Files skipped from review due to trivial changes (1)
  • cpp/src/neighbors/detail/vpq_dataset.cuh

Comment thread cpp/src/preprocessing/quantize/detail/pq.cuh

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
cpp/src/preprocessing/quantize/detail/pq.cuh (1)

217-245: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Tighten build_view() metadata validation for external codebooks.

This still accepts malformed views: pq_centers can be zero-width or null, and vq_centers.extent(1) is never checked against the dimension implied by pq_centers. Both cases can turn into device-side out-of-bounds later in transform() / inverse_transform().

Suggested guard tightening
   RAFT_EXPECTS(params.pq_bits >= 4 && params.pq_bits <= 16,
                "PQ bits must be within [4, 16], got %u",
                params.pq_bits);
   RAFT_EXPECTS(params.pq_dim > 0, "pq_dim must be specified for view-type quantizer");
+  RAFT_EXPECTS(pq_centers.data_handle() != nullptr,
+               "pq_centers data pointer must be non-null");
+  RAFT_EXPECTS(pq_centers.extent(1) > 0,
+               "pq_centers must have a non-zero second dimension");
 
   const uint32_t pq_n_centers = 1u << params.pq_bits;
+  const uint32_t dim =
+    params.use_subspaces ? params.pq_dim * pq_centers.extent(1) : pq_centers.extent(1);
 
   if (params.use_subspaces) {
     RAFT_EXPECTS(pq_centers.extent(0) == params.pq_dim * pq_n_centers,
                  "For use_subspaces=true, pq_centers must have shape [pq_dim * pq_n_centers, "
                  "pq_len], got [%u, %u]",
@@
   if (params.use_vq) {
     RAFT_EXPECTS(vq_centers.has_value(), "vq_centers must be provided when use_vq=true");
     RAFT_EXPECTS(params.vq_n_centers > 0,
                  "params.vq_n_centers must be > 0 when use_vq=true (got %u)",
                  params.vq_n_centers);
@@
     RAFT_EXPECTS(vq_centers.value().extent(0) == params.vq_n_centers,
                  "vq_centers must have shape [vq_n_centers, dim] (vq_n_centers=%u), got "
                  "extent(0)=%u",
                  params.vq_n_centers,
                  vq_centers.value().extent(0));
+    RAFT_EXPECTS(vq_centers.value().extent(1) == dim,
+                 "vq_centers must have %u columns, got %u",
+                 dim,
+                 vq_centers.value().extent(1));

As per coding guidelines, input validation must check for negative or invalid dimensions, null pointers, and invalid parameter combinations before GPU operations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/preprocessing/quantize/detail/pq.cuh` around lines 217 - 245, Tighten
build_view() validation: ensure params.pq_dim, params.pq_len, and pq_n_centers
are >0 and that pq_centers.data_handle() is non-null and pq_centers.extent(1)
(the vector length) >0; when params.use_subspaces==true also verify
pq_centers.extent(0)>0 and equals params.pq_dim * pq_n_centers and that
pq_centers.extent(1) == params.pq_len; when params.use_subspaces==false verify
pq_centers.extent(0)>0 equals pq_n_centers and
pq_centers.extent(1)==params.pq_len; for use_vq==true, additionally check
vq_centers.data_handle()!=nullptr, vq_centers.extent(0)>0, and
vq_centers.extent(1)==expected_dim (the same dim implied by pq_centers/params)
before constructing vpq_codebooks_view so transform()/inverse_transform() cannot
see zero-width or mismatched-dimension views.
🧹 Nitpick comments (1)
cpp/tests/preprocessing/product_quantization.cu (1)

248-309: ⚡ Quick win

Exercise the VQ-enabled view quantizer path too.

This helper hard-codes use_subspaces=true and use_vq=false, so TEST_P(..., ViewQuantizer) never covers the VQ branch or the non-subspace shape contract of the new overload. That leaves most of the new build(..., pq_centers, vq_centers) validation and label-handling path untested.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/tests/preprocessing/product_quantization.cu` around lines 248 - 309, The
test only exercises the subspace PQ path (config.use_subspaces=true,
use_vq=false) so add a second branch that constructs a VQ-enabled non-subspace
config (set params.use_subspaces=false and params.use_vq=true, ensure
params.n_vq_centers is valid for n_samples_), then call build(handle, config,
raft::make_const_mdspan(dataset_.view())) to get owning_quant, create view_quant
via build(handle, owning_quant.params_quantizer, pq_centers_view,
vq_centers_view), run transform(...) and inverse_transform(...) on both
owning_quant and view_quant, and compare codes and reconstructions with
cuvs::devArrMatch just like the existing assertions to exercise the VQ code
paths and shape/label validation in build(..., pq_centers, vq_centers).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/preprocessing/quantize/detail/pq.cuh`:
- Around line 418-452: inverse_transform() must validate that out.extent(1)
matches the dimension implied by the PQ codebook before calling
reconstruct_vectors<T,...>(); add a RAFT_EXPECTS that out.extent(1) ==
get_quantized_dim(quantizer.params_quantizer) (and > 0) so
reconstruct_vectors_kernel() cannot recompute pq_len from an oversize out and
read past packed codes; perform this check alongside the existing checks (near
the vq_labels/ codes preflight) and include a clear error message referencing
out.extent(1) and the expected quantized dimension.

---

Duplicate comments:
In `@cpp/src/preprocessing/quantize/detail/pq.cuh`:
- Around line 217-245: Tighten build_view() validation: ensure params.pq_dim,
params.pq_len, and pq_n_centers are >0 and that pq_centers.data_handle() is
non-null and pq_centers.extent(1) (the vector length) >0; when
params.use_subspaces==true also verify pq_centers.extent(0)>0 and equals
params.pq_dim * pq_n_centers and that pq_centers.extent(1) == params.pq_len;
when params.use_subspaces==false verify pq_centers.extent(0)>0 equals
pq_n_centers and pq_centers.extent(1)==params.pq_len; for use_vq==true,
additionally check vq_centers.data_handle()!=nullptr, vq_centers.extent(0)>0,
and vq_centers.extent(1)==expected_dim (the same dim implied by
pq_centers/params) before constructing vpq_codebooks_view so
transform()/inverse_transform() cannot see zero-width or mismatched-dimension
views.

---

Nitpick comments:
In `@cpp/tests/preprocessing/product_quantization.cu`:
- Around line 248-309: The test only exercises the subspace PQ path
(config.use_subspaces=true, use_vq=false) so add a second branch that constructs
a VQ-enabled non-subspace config (set params.use_subspaces=false and
params.use_vq=true, ensure params.n_vq_centers is valid for n_samples_), then
call build(handle, config, raft::make_const_mdspan(dataset_.view())) to get
owning_quant, create view_quant via build(handle, owning_quant.params_quantizer,
pq_centers_view, vq_centers_view), run transform(...) and inverse_transform(...)
on both owning_quant and view_quant, and compare codes and reconstructions with
cuvs::devArrMatch just like the existing assertions to exercise the VQ code
paths and shape/label validation in build(..., pq_centers, vq_centers).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: d4dbc21e-bd6b-40d6-b25f-01007008ad7e

📥 Commits

Reviewing files that changed from the base of the PR and between 075c1ba and 740b78e.

📒 Files selected for processing (2)
  • cpp/src/preprocessing/quantize/detail/pq.cuh
  • cpp/tests/preprocessing/product_quantization.cu

Comment on lines +418 to +452
+  RAFT_EXPECTS(codes.extent(1) == get_quantized_dim(quantizer.params_quantizer),
+               "Codes matrix doesn't have the correct number of columns");
-  RAFT_EXPECTS(quant.params_quantizer.pq_bits >= 4 && quant.params_quantizer.pq_bits <= 16,
+  RAFT_EXPECTS(quantizer.params_quantizer.pq_bits >= 4 && quantizer.params_quantizer.pq_bits <= 16,
                "PQ bits must be within [4, 16]");
-  reconstruct_vectors<T, T, idx_t, label_t>(
-    res,
-    quant.params_quantizer,
-    codes,
-    raft::make_const_mdspan(quant.vpq_codebooks.pq_code_book.view()),
-    raft::make_const_mdspan(quant.vpq_codebooks.vq_code_book.view()),
-    vq_labels,
-    out,
-    quant.params_quantizer.use_subspaces);
+
+  // Honor params.use_vq strictly (see the matching block in transform()).
+  auto vq_centers_opt = quantizer.codebooks.vq_code_book();
+  RAFT_EXPECTS(!quantizer.params_quantizer.use_vq || vq_centers_opt.has_value(),
+               "Quantizer has params.use_vq=true but no VQ codebook");
+  auto vq_centers =
+    quantizer.params_quantizer.use_vq
+      ? vq_centers_opt.value()
+      : raft::make_device_matrix_view<const T, uint32_t, raft::row_major>(nullptr, 0, 0);
+
+  // VQ-label preflight: when use_vq is true the kernel reads vq_labels(row) per
+  // row and falls back to label 0 when vq_labels is absent. Without this check every vector can be
+  // reconstructed with vq_label 0.
+  if (quantizer.params_quantizer.use_vq) {
+    RAFT_EXPECTS(vq_labels.has_value(),
+                 "When params.use_vq is true, vq_labels must be provided to inverse_transform()");
+    RAFT_EXPECTS(vq_labels.value().extent(0) == codes.extent(0),
+                 "When params.use_vq is true, vq_labels must have the same number of rows as "
+                 "codes (got %zu vs %zu)",
+                 size_t(vq_labels.value().extent(0)),
+                 size_t(codes.extent(0)));
+  }
+
+  reconstruct_vectors<T, T, idx_t, label_t>(res,
+                                            quantizer.params_quantizer,
+                                            codes,
+                                            quantizer.codebooks.pq_code_book(),
+                                            vq_centers,
+                                            vq_labels,
+                                            out,
+                                            quantizer.params_quantizer.use_subspaces);

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate the reconstruction width before launching the kernel.

inverse_transform() never checks that out.extent(1) matches the dimension implied by the PQ codebook. In reconstruct_vectors_kernel(), pq_len is recomputed from out.extent(1) and then used to index the packed codes; an oversized out can make code_view[j] read past the encoded row.

Suggested preflight
   auto vq_centers =
     quantizer.params_quantizer.use_vq
       ? vq_centers_opt.value()
       : raft::make_device_matrix_view<const T, uint32_t, raft::row_major>(nullptr, 0, 0);
+  auto pq_centers = quantizer.codebooks.pq_code_book();
+  const auto expected_dim =
+    quantizer.params_quantizer.use_subspaces
+      ? static_cast<idx_t>(quantizer.params_quantizer.pq_dim) *
+          static_cast<idx_t>(pq_centers.extent(1))
+      : static_cast<idx_t>(pq_centers.extent(1));
+  RAFT_EXPECTS(out.extent(1) == expected_dim,
+               "Output matrix must have %lld columns, got %lld",
+               static_cast<long long>(expected_dim),
+               static_cast<long long>(out.extent(1)));
 
   // VQ-label preflight: when use_vq is true the kernel reads vq_labels(row) per
   // row and falls back to label 0 when vq_labels is absent. Without this check every vector can be
   // reconstructed with vq_label 0.
@@
   reconstruct_vectors<T, T, idx_t, label_t>(res,
                                             quantizer.params_quantizer,
                                             codes,
-                                            quantizer.codebooks.pq_code_book(),
+                                            pq_centers,
                                             vq_centers,
                                             vq_labels,
                                             out,
                                             quantizer.params_quantizer.use_subspaces);

As per coding guidelines, input validation must check for negative or invalid dimensions, null pointers, and invalid parameter combinations before GPU operations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/preprocessing/quantize/detail/pq.cuh` around lines 418 - 452,
inverse_transform() must validate that out.extent(1) matches the dimension
implied by the PQ codebook before calling reconstruct_vectors<T,...>(); add a
RAFT_EXPECTS that out.extent(1) == get_quantized_dim(quantizer.params_quantizer)
(and > 0) so reconstruct_vectors_kernel() cannot recompute pq_len from an
oversize out and read past packed codes; perform this check alongside the
existing checks (near the vq_labels/ codes preflight) and include a clear error
message referencing out.extent(1) and the expected quantized dimension.

@tarang-jain tarang-jain requested a review from a team as a code owner May 1, 2026 00:21
@tarang-jain tarang-jain force-pushed the view-pq-quantizer branch from 703019c to 740b78e on May 1, 2026 00:38
Comment on lines +124 to +126
RAFT_LOG_WARN(
"vpq_codebooks::dim() returns 0 when no VQ codebook is present; "
"the original vector dimension cannot be recovered from PQ codebooks alone.");
Contributor


It could be computed by pq_dim * pq_len: https://github.com/rapidsai/cuvs/pull/1764/changes/cc2a9005587a340de1018771dd9f66d4282f2498..901c8bbf92fd7af71e65f86709d63f3f3f3d6325#r3170234803.
It might be better to do that and have a value that makes sense rather than returning 0.

@copy-pr-bot

copy-pr-bot Bot commented May 4, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


Labels

breaking (Introduces a breaking change), feature request (New feature or request)

Projects

Status: In Progress

Development

Successfully merging this pull request may close these issues.

4 participants