Remove redundant namespace directives and consolidate error handling#53
Conversation
Multiple redundant 'using namespace Aleph' declarations are removed from header files that are already inside the Aleph namespace. This avoids name conflicts and improves code clarity. Additionally, in parse_utils.H and stat_utils.H the standard exceptions are replaced with Aleph-specific error macros (ah_parse_error, ah_invalid_argument_if, etc.) for more consistent error handling across the library.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings, or use CodeRabbit's review commands and the checkboxes below for quick actions.
Walkthrough

Adds macro-based parsing/validation errors and stream-aware exception throwing; refactors/normalizes namespaces, result types, and helpers across many algorithm headers; strengthens numeric/overflow checks and data-structure choices; replaces the priority queue with DynBinHeap in min-cost flow; and adds CI header-change tokenization with unit tests.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CI as CI Runner
    participant Git as Git
    participant Script as ci_policy_checks.rb
    participant Tests as Test Suite

    CI->>Git: git diff --name-status base...head
    Git-->>Script: name-status entries
    Script->>Git: git show base:old_path (optional)
    Git-->>Script: base-file content
    Script->>Script: cpp_token_stream(current_file) / cpp_token_stream(base_file)
    alt token streams differ
        Script->>CI: mark header as code-changed
    else token streams same
        Script->>CI: mark header as doc-only
    end
    Script->>CI: compute headers_requiring_tests count
    CI->>Tests: run regression tests if pull_request && headers_requiring_tests > 0
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9c424ec8ed
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Pull request overview
Copilot reviewed 33 out of 42 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
tpl_mincost.H:354
- This uses `std::vector` for `node_info`, but other network algorithms store per-node cookie info in an `Array<...>::create(net.vsize())` (e.g., tpl_maxflow.H:474-478). Using `Array` here avoids an STL dependency in a public header and matches the established pattern while still keeping element addresses stable.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
tpl_mincost.H (4)
261-357: ⚠️ Potential issue | 🟠 Major | Must fix: Bellman-Ford initialization never reports negative-cost cycles.
Lines 261-270 promise negative-cycle detection, but Line 334 hardcodes success and Line 357 proceeds as if the potentials are valid. A reachable negative-cost residual cycle will therefore be accepted as a legal SSP start state, which can produce a bogus minimum cost.
🐛 Suggested fix

```diff
@@
   for (size_t i = 0; i < n - 1; ++i)
     {
       bool changed = false;
@@
       if (not changed)
         break;
     }
+
+  for (Arc_Iterator<Net> it(net); it.has_curr(); it.next_ne())
+    {
+      auto arc = it.get_curr();
+      auto src = net.get_src_node(arc);
+      auto tgt = net.get_tgt_node(arc);
+
+      if (arc->cap > arc->flow)
+        {
+          auto d = ssp_info<Net>(src).potential;
+          if (d < INF and d + arc->cost < ssp_info<Net>(tgt).potential)
+            return false;
+        }
+
+      if (arc->flow > Flow_Type{0})
+        {
+          auto d = ssp_info<Net>(tgt).potential;
+          if (d < INF and d - arc->cost < ssp_info<Net>(src).potential)
+            return false;
+        }
+    }
@@
-  ssp_init_potentials(net, source);
+  ah_domain_error_if(not ssp_init_potentials(net, source))
+    << "Residual network contains a negative-cost cycle";
```

Based on learnings: "Check for negative cycle handling in shortest path algorithms".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 261 - 357, ssp_init_potentials currently always returns true and never detects/reports negative-cost residual cycles; modify ssp_init_potentials to detect negative cycles by doing one extra relaxation pass after the n-1 iterations and return false if any potential can still decrease (indicating a reachable negative cycle), and update successive_shortest_paths to check the boolean result of ssp_init_potentials(net, source) and handle failure (e.g., propagate an error via ah_domain_error or return an error pair) instead of proceeding as if potentials are valid; reference ssp_init_potentials and successive_shortest_paths to locate where to add the extra relaxation check and the result handling.
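The extra-relaxation-pass idea above can be shown on a plain edge-list graph. This is a minimal sketch, not Aleph's `Net`/`Arc_Iterator` API; `Edge` and `bellman_ford_ok` are illustrative names. After the usual n-1 rounds, one more pass over the edges checks whether any distance can still improve; if so, a reachable negative cycle exists and the function reports failure.

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Illustrative stand-ins for the residual-network types in the review.
struct Edge { int src, tgt; long cost; };

// Returns false when a negative-cost cycle is reachable from `source`;
// on success, `dist` holds shortest-path distances (the "potentials").
bool bellman_ford_ok(int n, const std::vector<Edge> &edges, int source,
                     std::vector<long> &dist)
{
  const long INF = std::numeric_limits<long>::max();
  dist.assign(n, INF);
  dist[source] = 0;

  for (int i = 0; i < n - 1; ++i)
    {
      bool changed = false;
      for (const Edge &e : edges)
        if (dist[e.src] < INF and dist[e.src] + e.cost < dist[e.tgt])
          {
            dist[e.tgt] = dist[e.src] + e.cost;
            changed = true;
          }
      if (not changed)
        break;
    }

  // Extra pass: any further improvement proves a negative cycle.
  for (const Edge &e : edges)
    if (dist[e.src] < INF and dist[e.src] + e.cost < dist[e.tgt])
      return false;

  return true;
}
```

The caller would then turn the `false` result into an `ah_domain_error`, as the suggested diff does.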
523-551: ⚠️ Potential issue | 🟠 Major | Must fix: `solve_transshipment()` loses the basic exception guarantee.

After Line 523 mutates `net`, any failure in the second `insert_node()`, one of the `insert_arc()` calls, or `successive_shortest_paths()` skips Lines 549-551 and leaves temporary super nodes/arcs in the caller graph. Please put that cleanup behind an RAII guard so every exit path restores the original topology.

As per coding guidelines: "All resources (memory, file handles, locks) must follow RAII principles" and "Use RAII to ensure cleanup on exception".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 523 - 551, The function mutates net by inserting super_source/super_sink and arcs but removes them only at the end, so exceptions (from net.insert_node, net.insert_arc, or successive_shortest_paths) leave the caller graph corrupted; fix by introducing an RAII scope-guard class (e.g., SuperNodeGuard) constructed after creating super_source and super_sink that holds Net& and the two Node* and in its destructor calls net.remove_node(super_source) and net.remove_node(super_sink) (and/or removes any arcs it created), and provide a release() method invoked just before the current explicit removals to prevent double-removal; update solve_transshipment to create this guard immediately after insert_node() calls and let it automatically clean up on all exit paths, leaving the final explicit net.remove_node calls replaced by guard.release().
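The guard pattern described above can be sketched generically. The `SuperNodeGuard` in the review would hold `Net&` plus the two super nodes; here the cleanup is an arbitrary callable so the example stands alone. The key properties are the same: cleanup runs on every exit path, and `release()` disarms it after the explicit removals succeed.

```cpp
#include <cassert>
#include <utility>

// Generic RAII scope guard: runs `cleanup_` in the destructor unless
// release() was called first. Non-copyable so ownership is unambiguous.
template <class F>
class ScopeGuard
{
  F cleanup_;
  bool active_ = true;

public:
  explicit ScopeGuard(F f) : cleanup_(std::move(f)) {}
  ~ScopeGuard() { if (active_) cleanup_(); }
  void release() { active_ = false; } // call on the success path
  ScopeGuard(const ScopeGuard &) = delete;
  ScopeGuard &operator=(const ScopeGuard &) = delete;
};
```

In `solve_transshipment` the guard would be constructed right after the `insert_node()` calls, with a cleanup that removes the super source and super sink, and released just before the existing explicit `remove_node` calls.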
741-791: ⚠️ Potential issue | 🔴 Critical | Must fix: validate transportation inputs before the build loop.

Lines 741-745 currently mark any one-sided-empty instance as feasible, so `{supplies = {1}, demands = {}}` is reported as solvable. Separately, Line 791 indexes `costs[i][j]` without first proving that `costs.size() == m` and every row has `n` entries, which is UB on malformed matrices. Please reject these cases up front with Aleph invalid-argument/domain-error macros.

As per coding guidelines: "Validate external inputs before processing" and "Use `ah_invalid_argument_if(condition)` macro for invalid arguments".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 741 - 791, The code currently treats m==0 or n==0 as feasible and assumes costs is well-formed; instead, validate inputs up front and reject malformed cases using ah_invalid_argument_if. Specifically, in the function (tpl_mincost / Net build block) replace the early "if (m == 0 or n == 0) { result.feasible = true; ... }" with input checks that call ah_invalid_argument_if for: m == 0 || n == 0 (reject one-sided-empty instances), supplies.size() != m, demands.size() != n, costs.size() != m, and for any i, costs[i].size() != n; keep all these validations before creating the Net, Nodes, or indexing costs[i][j]. Ensure each ah_invalid_argument_if has a clear condition so malformed matrices or mismatched supply/demand lengths are rejected immediately.
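The shape of the up-front validation can be sketched as below. In Aleph the checks would go through `ah_invalid_argument_if`; here `std::invalid_argument` stands in so the example compiles on its own, and `validate_transportation` is an illustrative name, not a function from the header.

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// All shape checks happen before any Net/Node construction or costs[i][j]
// indexing, so malformed instances are rejected immediately.
void validate_transportation(const std::vector<double> &supplies,
                             const std::vector<double> &demands,
                             const std::vector<std::vector<double>> &costs)
{
  const size_t m = supplies.size(), n = demands.size();
  if (m == 0 or n == 0)
    throw std::invalid_argument("one-sided-empty instance");
  if (costs.size() != m)
    throw std::invalid_argument("cost matrix row count != m");
  for (const auto &row : costs)
    if (row.size() != n)
      throw std::invalid_argument("cost matrix row length != n");
}
```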
503-520: ⚠️ Potential issue | 🟡 Minor | Should fix: avoid exact balance checks for floating-point quantities.

Lines 516, 547, 755, and 798 compare accumulated quantities with exact `==`/`!=`, but the public examples in this header instantiate these APIs with `double`. Inputs like `0.1 + 0.2` versus `0.3` can be rejected as infeasible even though the model is balanced. Keep exact comparisons for integral instantiations, but use a tolerance when `Flow_Type`/`Cost_Type` is floating-point.

As per coding guidelines: "Consider numerical stability for floating-point arithmetic".
Also applies to: 545-547, 747-759, 798-799
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 503 - 520, The exact equality check between accumulated Flow_Type quantities (e.g., Flow_Type total_supply / total_demand and the if (total_supply != total_demand) branch around get_demand and result.feasible) must be replaced with a tolerance-based comparison when Flow_Type is floating point: detect floating types via std::is_floating_point<Flow_Type>::value and compute a small tolerance (for example based on std::numeric_limits<Flow_Type>::epsilon() scaled by max(1, abs(total_supply), abs(total_demand))) then treat the model balanced when std::abs(total_supply - total_demand) <= tol; keep the exact ==/!= behavior when Flow_Type is an integral type. Apply the same pattern to any similar comparisons involving Cost_Type or other accumulated floating quantities mentioned in the review (the other equality checks referenced) so all floating-point accumulations use tolerant comparisons while integral instantiations remain exact.
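The type-dispatched comparison the prompt describes can be sketched as a single helper. `is_balanced` is an illustrative name, and the tolerance scaling (epsilon times the magnitude of the operands, with a small safety factor) is one reasonable choice among several.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <type_traits>

// Exact equality for integral flow types; epsilon-scaled tolerance for
// floating-point ones, so 0.1 + 0.2 vs 0.3 counts as balanced.
template <class T>
bool is_balanced(T total_supply, T total_demand)
{
  if constexpr (std::is_floating_point_v<T>)
    {
      T scale = std::max({T(1), std::abs(total_supply),
                          std::abs(total_demand)});
      T tol = std::numeric_limits<T>::epsilon() * scale * T(8);
      return std::abs(total_supply - total_demand) <= tol;
    }
  else
    return total_supply == total_demand;
}
```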
🧹 Nitpick comments (3)
tpl_bipartite.H (2)
111-132: Variable `p` shadows outer declaration.

At line 114, the variable `p` shadows the `p` declared at line 106. While not a functional bug (the outer `p` is no longer needed), shadowing can cause confusion during maintenance.

♻️ Rename inner variable for clarity

```diff
   while (true)
     if (not red.is_empty()) // any red node with unvisited arcs?
       {
-        typename GT::Node *p = red.get();
-        for (Node_Arc_Iterator<GT, SA> it(p); it.has_curr(); it.next_ne())
+        typename GT::Node *curr = red.get();
+        for (Node_Arc_Iterator<GT, SA> it(curr); it.has_curr(); it.next_ne())
           {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_bipartite.H` around lines 111 - 132, The inner loop declares "typename GT::Node *p = red.get();" which shadows the outer "p" declared earlier; rename the inner variable (e.g., to curr or p_red) and update all uses inside the for-loop (the Node_Arc_Iterator<GT, SA> constructor and later references to p) so the outer "p" name is preserved and no shadowing occurs, keeping all logic identical.
67-77: Consider removing `static` from template functions.

The `static` keyword on template functions is redundant: templates already have implicit weak linkage. Using `inline` or omitting the specifier entirely is more idiomatic in modern C++.

♻️ Optional: Remove redundant `static` specifier

```diff
-template <class GT>
-static long &color(typename GT::Node *p)
+template <class GT>
+inline long &color(typename GT::Node *p)
 { return NODE_COUNTER(p); }

-template <class GT>
-static long &color(typename GT::Arc *a)
+template <class GT>
+inline long &color(typename GT::Arc *a)
 { return ARC_COUNTER(a); }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_bipartite.H` around lines 67 - 77, the two template functions `template <class GT> static long &color(typename GT::Node *p)` and `template <class GT> static long &color(typename GT::Arc *a)` should drop the redundant `static` specifier (or be marked `inline` if you prefer) because template functions already have appropriate linkage; locate the color(typename GT::Node*) and color(typename GT::Arc*) definitions and remove the `static` keyword from each declaration/definition so they follow modern C++ idioms and avoid unnecessary storage-class specifiers.

Matrix_Chain.H (1)
138-156: Duplicate zero-matrix initialization can be extracted.

The `dp` and `split` initialization blocks are structurally identical. Extracting a small helper (e.g., `make_zero_matrix(n)`) reduces duplication and future drift.

As per coding guidelines: "Refactor duplicated code into reusable functions or templates."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Matrix_Chain.H` around lines 138 - 156, The dp and split zero-matrix initializations are duplicated; create a helper function (e.g., make_zero_matrix(size_t n) returning Array<Array<size_t>>) and replace both blocks with calls to that helper. Implement make_zero_matrix to allocate outer Array, reserve(n), loop i in [0,n) creating Array<size_t> row = Array<size_t>::create(n), zero each entry (row(j)=0) and append(std::move(row)); then use dp = make_zero_matrix(n) and split = make_zero_matrix(n) in place of the repeated code. Ensure helper is declared/defined where Array and size_t are visible.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AUDIT_ADVERSARIAL_BRANCH.md`:
- Around line 424-464: Summary: appendix entries use workstation-specific
absolute paths; make them repo-relative. Edit AUDIT_ADVERSARIAL_BRANCH.md and
replace the absolute prefixes on the listed entries (the header filenames like
state_search_common.H, Backtracking.H, Branch_And_Bound.H,
State_Search_IDA_Star.H, Negamax.H, Alpha_Beta.H, search_move_ordering.H,
Transposition_Table.H, State_Search.H and the Tests, Examples and docs
filenames) so they point to files relative to the repository root (i.e., drop
the /home/... workstation prefix), and update any CMakeLists references likewise
so all entries are portable and grep/copy-friendly.
In `@docs/aleph_library_improvements.md`:
- Around line 457-484: The fenced code block containing the phased plan
currently has no language label which triggers markdownlint MD040; update the
opening fence to include a language (e.g., add "text" or "markdown" after the
triple backticks) so the block is labeled and MD040 is resolved—locate the
fenced block shown (the multi-line phase list) and change ``` to ```text (or
```markdown).
- Around line 169-176: Update the document to stop treating raw "throw" sites as
acceptable: remove the carve-out that exempts thread_pool.H and instead list
gmpfrxx.h as third-party (leave), and mark parse_utils.H, stat_utils.H,
thread_pool.H, and ah-signal.H as remaining migration work; rephrase the
guidance to explicitly state "ALWAYS use Aleph exception macros from ah-errors.H
instead of raw throw statements" and ensure the paragraph replaces any wording
that implies raw throws are acceptable.
In `@docs/state_search_improvements.md`:
- Around line 300-324: The fenced code block containing the execution-order
diagram (the block starting with "Fase E (consolidación)" and ending with "I3
← Independiente") is unlabeled and triggers MD040; fix it by adding a language
label (`text`) after the opening triple backticks so the fence becomes ```text,
ensuring the diagram rendering is unchanged and markdownlint no longer flags the
block.
In `@Matrix_Chain.H`:
- Around line 121-123: Update the `@throws` doc for the Matrix_Chain
constructor/function to reflect that dims is Array<size_t> and cannot be
negative: change the line that currently reads "if any dimension is zero or
negative" to state only "if any dimension is zero" (e.g., "@throws
ah_domain_error if any dimension is zero"), keeping the other `@throws` entries
(`ah_domain_error` for fewer than 2 entries and `ah_runtime_error` for overflow)
unchanged and referencing the `dims` parameter.
- Around line 186-202: The documentation for matrix_chain_min_cost is
inconsistent with its implementation: it claims "without reconstruction" but
actually calls matrix_chain_order which builds split/parenthesization; fix this
by updating the Doxygen for matrix_chain_min_cost to state that it delegates to
matrix_chain_order and returns the min_multiplications computed there (or
alternatively implement a dedicated cost-only DP path); reference the function
names matrix_chain_min_cost and matrix_chain_order and ensure the
throws/complexity notes remain accurate after the change.
In `@scripts/ci_policy_checks.rb`:
- Around line 36-44: The git-diff parsing currently only treats rename records
as two-path entries; update the conditional in the block that maps out.lines
(the mapping handling status = parts[0]) to also detect copy records by checking
status.start_with?('R', 'C') (or equivalent), and when either R or C is true set
{ status: status[0] (or status), old_path: parts[1], path: parts[2] } so copied
files use parts[2] as the destination path instead of parts[1].
- Around line 77-99: read_raw_string_literal currently only accepts raw literals
that start exactly with 'R"', but C++ allows optional encoding prefixes (u8, u,
U, L) before the R; update the function to detect and accept those prefixes so
the parsed token begins at start_idx even when the sequence is like "u8R\"",
"uR\"", "UR\"", or "LR\"". Concretely: before the existing check for 'R',
examine content[start_idx] and the next chars to see if they form an allowed
prefix (if content[start_idx] is 'u' allow an optional '8' then require 'R'; if
'U' or 'L' require 'R'; or if it's already 'R' proceed as before), compute the
index of the 'R' (r_index) and use r_index in place of start_idx when setting
delimiter_start/delimiter_end/terminator/literal_end so you return the full
slice content[start_idx...literal_end] and correct literal_end; ensure the
returned slice still begins at start_idx (including the prefix) and that
delimiter/terminator logic remains unchanged using the located 'R'.
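The prefix handling described above can be illustrated in C++ (the real fix is in scripts/ci_policy_checks.rb, but the grammar being parsed is C++'s). `raw_literal_r_index` is a hypothetical helper: given a position where a raw string literal may begin, it returns the offset of the `R` when the sequence is `R"`, `u8R"`, `uR"`, `UR"`, or `LR"`, and -1 otherwise.

```cpp
#include <cassert>
#include <string>

// Locate the 'R' of a raw string literal at position i, accepting the
// optional encoding prefixes u8, u, U, and L before it.
long raw_literal_r_index(const std::string &s, size_t i)
{
  size_t r = i;
  if (r < s.size() and s[r] == 'u')
    {
      ++r;                               // 'u' or start of 'u8'
      if (r < s.size() and s[r] == '8')
        ++r;
    }
  else if (r < s.size() and (s[r] == 'U' or s[r] == 'L'))
    ++r;
  if (r + 1 < s.size() and s[r] == 'R' and s[r + 1] == '"')
    return static_cast<long>(r);         // caller slices from i, parses from r
  return -1;
}
```

The Ruby version would use the returned `r` for the delimiter/terminator logic while still returning the slice that starts at the original index, prefix included.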
In `@scripts/test_ci_policy_checks.rb`:
- Around line 7-143: The test suite lacks coverage for new branches: add tests
exercising the structured changed_files parsing and the raw-string path handling
in cpp_token_stream in addition to existing header_has_code_changes? cases;
specifically add a test that supplies a changed_files-like structure (e.g.,
multiple entries/renames) to exercise any logic that reads structured change
metadata, and add a test calling cpp_token_stream with a C++ raw-string literal
(R"(...)") path/content to ensure raw-string tokens are preserved, while keeping
existing header_has_code_changes? scenarios for rename/no-change and
new-header-doc-only; locate and update tests around cpp_token_stream and
header_has_code_changes? to include these new cases so the new parser branches
are exercised.
---
Outside diff comments:
In `@tpl_mincost.H`:
- Around line 261-357: ssp_init_potentials currently always returns true and
never detects/reports negative-cost residual cycles; modify ssp_init_potentials
to detect negative cycles by doing one extra relaxation pass after the n-1
iterations and return false if any potential can still decrease (indicating a
reachable negative cycle), and update successive_shortest_paths to check the
boolean result of ssp_init_potentials(net, source) and handle failure (e.g.,
propagate an error via ah_domain_error or return an error pair) instead of
proceeding as if potentials are valid; reference ssp_init_potentials and
successive_shortest_paths to locate where to add the extra relaxation check and
the result handling.
- Around line 523-551: The function mutates net by inserting
super_source/super_sink and arcs but removes them only at the end, so exceptions
(from net.insert_node, net.insert_arc, or successive_shortest_paths) leave the
caller graph corrupted; fix by introducing an RAII scope-guard class (e.g.,
SuperNodeGuard) constructed after creating super_source and super_sink that
holds Net& and the two Node* and in its destructor calls
net.remove_node(super_source) and net.remove_node(super_sink) (and/or removes
any arcs it created), and provide a release() method invoked just before the
current explicit removals to prevent double-removal; update solve_transshipment
to create this guard immediately after insert_node() calls and let it
automatically clean up on all exit paths, leaving the final explicit
net.remove_node calls replaced by guard.release().
- Around line 741-791: The code currently treats m==0 or n==0 as feasible and
assumes costs is well-formed; instead, validate inputs up front and reject
malformed cases using ah_invalid_argument_if. Specifically, in the function
(tpl_mincost / Net build block) replace the early "if (m == 0 or n == 0) {
result.feasible = true; ... }" with input checks that call
ah_invalid_argument_if for: m == 0 || n == 0 (reject one-sided-empty instances),
supplies.size() != m, demands.size() != n, costs.size() != m, and for any i,
costs[i].size() != n; keep all these validations before creating the Net, Nodes,
or indexing costs[i][j]. Ensure each ah_invalid_argument_if has a clear
condition so malformed matrices or mismatched supply/demand lengths are rejected
immediately.
- Around line 503-520: The exact equality check between accumulated Flow_Type
quantities (e.g., Flow_Type total_supply / total_demand and the if (total_supply
!= total_demand) branch around get_demand and result.feasible) must be replaced
with a tolerance-based comparison when Flow_Type is floating point: detect
floating types via std::is_floating_point<Flow_Type>::value and compute a small
tolerance (for example based on std::numeric_limits<Flow_Type>::epsilon() scaled
by max(1, abs(total_supply), abs(total_demand))) then treat the model balanced
when std::abs(total_supply - total_demand) <= tol; keep the exact ==/!= behavior
when Flow_Type is an integral type. Apply the same pattern to any similar
comparisons involving Cost_Type or other accumulated floating quantities
mentioned in the review (the other equality checks referenced) so all
floating-point accumulations use tolerant comparisons while integral
instantiations remain exact.
---
Nitpick comments:
In `@Matrix_Chain.H`:
- Around line 138-156: The dp and split zero-matrix initializations are
duplicated; create a helper function (e.g., make_zero_matrix(size_t n) returning
Array<Array<size_t>>) and replace both blocks with calls to that helper.
Implement make_zero_matrix to allocate outer Array, reserve(n), loop i in [0,n)
creating Array<size_t> row = Array<size_t>::create(n), zero each entry
(row(j)=0) and append(std::move(row)); then use dp = make_zero_matrix(n) and
split = make_zero_matrix(n) in place of the repeated code. Ensure helper is
declared/defined where Array and size_t are visible.
In `@tpl_bipartite.H`:
- Around line 111-132: The inner loop declares "typename GT::Node *p =
red.get();" which shadows the outer "p" declared earlier; rename the inner
variable (e.g., to curr or p_red) and update all uses inside the for-loop (the
Node_Arc_Iterator<GT, SA> constructor and later references to p) so the outer
"p" name is preserved and no shadowing occurs, keeping all logic identical.
- Around line 67-77: The two template functions 'template <class GT> static long
&color(typename GT::Node *p)' and 'template <class GT> static long
&color(typename GT::Arc *a)' should drop the redundant 'static' specifier (or be
marked 'inline' if you prefer) because template functions already have
appropriate linkage; locate the color(typename GT::Node*) and color(typename
GT::Arc*) definitions and remove the 'static' keyword from each
declaration/definition so they follow modern C++ idioms and avoid unnecessary
storage-class specifiers.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 46de05ce-af37-4b44-b06c-6dea838e4b5c
📒 Files selected for processing (35)
- AUDIT_ADVERSARIAL_BRANCH.md
- Blossom.H
- Blossom_Weighted.H
- Graph_Coloring.H
- HLD.H
- Hungarian.H
- K_Shortest_Paths.H
- LCA.H
- Matrix_Chain.H
- Min_Cost_Matching.H
- Min_Mean_Cycle.H
- Planarity_Test.H
- Subset_Sum.H
- Tests/ah-comb.cc
- Tests/ah_date_test.cc
- Tests/bipartite_test.cc
- Tests/hungarian_test.cc
- Tests/knapsack_test.cc
- Tests/parse_utils_test.cc
- Tests/planarity_test.cc
- Tests/stat_utils_test.cc
- Tests/subset_sum_test.cc
- Tests/test_ah_errors.cc
- Tests/tpl_mincost_test.cc
- ah-comb.H
- blossom_weighted_mwmatching.H
- docs/aleph_library_improvements.md
- docs/state_search_improvements.md
- modular_combinatorics.H
- scripts/ci_policy_checks.rb
- scripts/test_ci_policy_checks.rb
- tpl_bipartite.H
- tpl_mincost.H
- tpl_multi_polynomial.H
- tpl_polynomial.H
✅ Files skipped from review due to trivial changes (23)
- Tests/ah_date_test.cc
- Tests/parse_utils_test.cc
- Tests/tpl_mincost_test.cc
- Tests/ah-comb.cc
- Tests/hungarian_test.cc
- Tests/stat_utils_test.cc
- Tests/knapsack_test.cc
- Tests/subset_sum_test.cc
- Tests/planarity_test.cc
- K_Shortest_Paths.H
- Tests/bipartite_test.cc
- Min_Cost_Matching.H
- Blossom_Weighted.H
- Min_Mean_Cycle.H
- HLD.H
- blossom_weighted_mwmatching.H
- tpl_polynomial.H
- Graph_Coloring.H
- Blossom.H
- tpl_multi_polynomial.H
- modular_combinatorics.H
- Hungarian.H
- ah-comb.H
🚧 Files skipped from review as they are similar to previous changes (3)
- Tests/test_ah_errors.cc
- LCA.H
- Subset_Sum.H
- In tpl_bipartite.H, a check for an empty graph is added to matching
- In Knapsack.H, the overflow check for types without limits is fixed
- In tpl_mincost.H, the priority queue declaration is simplified
- In Tests/bipartite_test.cc, tests are updated to reflect the new behavior
- In stat_utils.H, std::map is replaced with DynMapTree for consistency
- In scripts/ci_policy_checks.rb, support for copied files is added
- In ah-chronos-utils.H, formatting and comments are fixed
- In scripts/test_ci_policy_checks.rb, tests for raw string literals are added
- In Matrix_Chain.H, the validation documentation is clarified
Actionable comments posted: 4
🧹 Nitpick comments (8)
tpl_mincost.H (3)
669-682: Consider using a reverse lookup map for task indices.

The inner loop at lines 676-681 searches linearly through all tasks to find the matching index. For large problems, consider storing a `DynMapTree<Node*, size_t>` mapping task nodes to their indices during construction.

This is a minor optimization; the current O(n²) extraction is acceptable for typical assignment problem sizes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 669 - 682, The extraction loop currently does a linear search over tasks inside the arc iteration (in the block using workers, Net::Node_Arc_Iterator, tasks and result.assignments), causing O(n²) behavior; create a reverse lookup map (e.g., DynMapTree<Node*, size_t> taskIndex) when tasks are constructed, use taskIndex[tgt] to get j instead of the inner for-loop, and only append to result.assignments when flow>0 and the lookup succeeds; update any construction code to populate taskIndex with node->index pairs.
130-135: Consider removing the unused `from` parameter.

The `from` parameter is explicitly silenced but never used. If it's not needed for API consistency with other functions, consider removing it to simplify the interface.

♻️ Proposed simplification

```diff
 template <class Net>
-typename Net::Flow_Type reduced_cost(const Net &net,
-                                     typename Net::Arc *arc,
-                                     typename Net::Node *from,
-                                     bool forward)
+typename Net::Flow_Type reduced_cost(const Net &net,
+                                     typename Net::Arc *arc,
+                                     bool forward)
 {
-  (void) from;
   using Flow_Type = typename Net::Flow_Type;
```

Then update the call site at line 203:

```diff
-  Flow_Type rc = reduced_cost<Net>(net, arc, u, forward);
+  Flow_Type rc = reduced_cost<Net>(net, arc, forward);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 130 - 135, The function reduced_cost in tpl_mincost.H currently takes an unused parameter Net::Node *from; remove this unused parameter from the reduced_cost declaration and definition (drop the Net::Node *from parameter and remove the "(void) from;" silencing), and update all call sites that pass the from argument to call reduced_cost(net, arc, forward) (i.e., remove the extra argument so the signatures match). Ensure the function prototype and any forward declarations use the new signature and rebuild to catch remaining callers.
325-326: Use `not` instead of `!` for consistency.

The coding guidelines require using `not` for logical negation. Line 182 correctly uses `not pq.is_empty()`, but this line uses `!changed`.

♻️ Proposed fix

```diff
-      if (!changed)
+      if (not changed)
         break;
```

As per coding guidelines: "Use and, or, not for logical operations (instead of &&, ||, !)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 325 - 326, replace the C-style negation with the guideline-preferred boolean operator: change the condition that currently uses "!changed" to use "not changed" so it matches the existing use of "not pq.is_empty()" and follows the "Use and, or, not" rule; locate the if statement that checks the boolean variable changed and update the negation accordingly.

Matrix_Chain.H (1)
186-202: `@throws` documentation is incomplete for delegated errors.

The function delegates to `matrix_chain_order`, which throws `ah_domain_error` for zero dimensions (line 131). This case should be documented here as well for completeness.

📝 Suggested documentation fix

```diff
 * `@throws` ah_domain_error if `dims` has fewer than 2 entries.
+* `@throws` ah_domain_error if any dimension in `dims` is zero.
 * `@throws` ah_runtime_error if the cost calculation overflows `size_t`.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Matrix_Chain.H` around lines 186 - 202, The documentation for matrix_chain_min_cost omits delegated exceptions from matrix_chain_order; update the comment for matrix_chain_min_cost to also document that it throws ah_domain_error when dims has fewer than 2 entries (the zero-dimension case thrown by matrix_chain_order) and retain the ah_runtime_error note for overflow; specifically mention both ah_domain_error and ah_runtime_error in the `@throws` section to mirror matrix_chain_order's behavior.

Tests/bipartite_test.cc (1)
1166-1167: Remove spurious CI policy comments.

These trailing comments (`// satisfy CI policy for tpl_bipartite.H and Subset_Sum.H`) appear to be workarounds rather than meaningful content. The CI policy should be satisfied by actual test coverage, not by adding placeholder comments. Additionally, `Subset_Sum.H` is unrelated to this test file.

🧹 Suggested cleanup

```diff
-}// satisfy CI policy
-// satisfy CI policy for tpl_bipartite.H and Subset_Sum.H
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/bipartite_test.cc` around lines 1166 - 1167, Remove the spurious CI-policy comments at the end of the file (the trailing lines "// satisfy CI policy for tpl_bipartite.H and Subset_Sum.H" after the final closing brace) since they are placeholder workarounds and reference an unrelated Subset_Sum.H; simply delete those comment lines so the file ends cleanly with the closing brace and rely on proper test coverage instead.

scripts/test_ci_policy_checks.rb (2)
1-1: Missing execute permission on test script.

The script has a shebang but lacks execute permission. This is a minor issue since it can be run via `ruby scripts/test_ci_policy_checks.rb`.

```shell
chmod +x scripts/test_ci_policy_checks.rb
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test_ci_policy_checks.rb` at line 1, The script scripts/test_ci_policy_checks.rb contains a shebang ("#!/usr/bin/env ruby") but lacks execute permission; make the file executable by setting the owner/appropriate execute bit (e.g., chmod +x on the file) so it can be run directly via ./scripts/test_ci_policy_checks.rb while preserving the existing shebang and file contents.
200-231: Mock implementation is functional but consider test isolation.

The monkey-patching approach works correctly with proper restoration via `ensure` blocks. Note that `Object.send(:define_method, :run!)` patches globally, which could cause issues if tests ever run in parallel. For now, this is acceptable since Minitest runs sequentially by default.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test_ci_policy_checks.rb` around lines 200 - 231, The mocks patch global methods which can break isolation if tests run concurrently; update def_mock_run to redefine run! only on the specific class/module under test (instead of Object), change def_mock_git_show to redefine git_show_optional on its owning module/class, and change def_mock_file_read to stub File.read via File.singleton_class.define_method temporarily but ensure it's limited to the test subject (or use Minitest::Mock/Module#prepend) and keep the ensure restoration logic (referencing def_mock_run, run!, def_mock_git_show, git_show_optional, def_mock_file_read, and File.read to locate the spots to change).

Knapsack.H (1)
221-224: The underflow guard is functional but could be clearer.

The condition `w != static_cast<size_t>(-1)` guards against unsigned wrap-around when `wi == 0` and `w` decrements past zero. While correct, this idiom is subtle. Consider adding a brief inline comment explaining the guard, or restructuring to handle zero-weight items separately for clarity.
💡 Optional: Add clarifying comment
```diff
-    for (size_t w = C; w >= wi and w != static_cast<size_t>(-1); --w)
+    // Second condition guards against unsigned underflow when wi == 0
+    for (size_t w = C; w >= wi and w != static_cast<size_t>(-1); --w)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Knapsack.H` around lines 221 - 224, The unsigned underflow guard in the loop header (for (size_t w = C; w >= wi and w != static_cast<size_t>(-1); --w)) is correct but subtle; either add a short inline comment next to that loop in Knapsack.H explaining that w != static_cast<size_t>(-1) prevents unsigned wrap-around when wi == 0, or refactor by handling zero-weight items before the main loop (e.g., if (wi==0) update dp[...] once) and then use a simpler descending loop (for (size_t w = C; w >= wi; --w)) to make the intent in the function that updates dp clear.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Knapsack.H`:
- Around line 345-361: The overflow check truncates `take` when casting to
narrow `V`, so compute the multiplication and bounds in a wider integer domain
(e.g., size_t) before casting: convert `items[i].value` to size_t, compute
product = take * value_sz and compare against maxV cast to size_t (use
std::numeric_limits<V>::max()), set value_ok based on that safe check, and when
appending to `expanded` compute the product in size_t and then
static_cast<V>(product) after the check; adjust the same approach for the weight
multiplication (wi * take) used when creating Knapsack_Item<W,V> to avoid
truncation before validation. Ensure changes touch the overflow logic around
value_ok, the product computation for the appended Knapsack_Item in
expanded.append, and keep origin.append(i) and multiplier.append(take)
unchanged.
In `@scripts/ci_policy_checks.rb`:
- Around line 77-99: The parser currently only accepts a raw string starting
exactly with 'R"', so update read_raw_string_literal to accept optional C++
raw-string prefixes (u8, u, U, L) before the R: detect and allow the prefix
sequence (e.g., "u8R", "uR", "UR", "LR") and set the actual R position
accordingly while validating bounds; continue to validate the delimiter
characters and terminator as before, and when returning the matched slice use
the original start_idx so the returned literal includes the prefix. Ensure you
only accept those exact prefix tokens (case-sensitive) and preserve existing
behavior for non-prefixed raw literals.
In `@Tests/bipartite_test.cc`:
- Around line 543-552: The test name no longer matches behavior: rename the test
from MaximumMatching.EmptyGraphThrowsRangeError to
MaximumMatching.EmptyGraphReturnsEmpty and update any inline comments so it
reflects that compute_maximum_cardinality_bipartite_matching<Graph>(g, matching)
should not throw and matching remains empty; locate the TEST macro and the test
body (references: TEST(MaximumMatching, EmptyGraphThrowsRangeError), Graph g,
DynDlist<Graph::Arc *> matching, compute_maximum_cardinality_bipartite_matching)
and change the test identifier and any related comment text accordingly.
- Around line 283-291: Rename the test TEST(Bipartite,
EmptyGraphThrowsRangeError) to a name reflecting the new behavior (e.g.,
TEST(Bipartite, EmptyGraphReturnsEmpty)) and update any references to that test;
ensure the body still calls compute_bipartite<Graph>(g, l, r) and asserts
l.is_empty() and r.is_empty(). Also remove the now-duplicate DISABLED_EmptyGraph
test (the one around lines 271-281) to avoid redundant coverage. Keep test
semantics unchanged aside from the name/removal.
---
Nitpick comments:
In `@Knapsack.H`:
- Around line 221-224: The unsigned underflow guard in the loop header (for
(size_t w = C; w >= wi and w != static_cast<size_t>(-1); --w)) is correct but
subtle; either add a short inline comment next to that loop in Knapsack.H
explaining that w != static_cast<size_t>(-1) prevents unsigned wrap-around when
wi == 0, or refactor by handling zero-weight items before the main loop (e.g.,
if (wi==0) update dp[...] once) and then use a simpler descending loop (for
(size_t w = C; w >= wi; --w)) to make the intent in the function that updates dp
clear.
In `@Matrix_Chain.H`:
- Around line 186-202: The documentation for matrix_chain_min_cost omits
delegated exceptions from matrix_chain_order; update the comment for
matrix_chain_min_cost to also document that it throws ah_domain_error when dims
has fewer than 2 entries (the zero-dimension case thrown by matrix_chain_order)
and retain the ah_runtime_error note for overflow; specifically mention both
ah_domain_error and ah_runtime_error in the `@throws` section to mirror
matrix_chain_order's behavior.
In `@scripts/test_ci_policy_checks.rb`:
- Line 1: The script scripts/test_ci_policy_checks.rb contains a shebang
("#!/usr/bin/env ruby") but lacks execute permission; make the file executable
by setting the owner/appropriate execute bit (e.g., chmod +x on the file) so it
can be run directly via ./scripts/test_ci_policy_checks.rb while preserving the
existing shebang and file contents.
- Around line 200-231: The mocks patch global methods which can break isolation
if tests run concurrently; update def_mock_run to redefine run! only on the
specific class/module under test (instead of Object), change def_mock_git_show
to redefine git_show_optional on its owning module/class, and change
def_mock_file_read to stub File.read via File.singleton_class.define_method
temporarily but ensure it's limited to the test subject (or use
Minitest::Mock/Module#prepend) and keep the ensure restoration logic
(referencing def_mock_run, run!, def_mock_git_show, git_show_optional,
def_mock_file_read, and File.read to locate the spots to change).
In `@Tests/bipartite_test.cc`:
- Around line 1166-1167: Remove the spurious CI-policy comments at the end of
the file (the trailing lines "// satisfy CI policy for tpl_bipartite.H and
Subset_Sum.H" after the final closing brace) since they are placeholder
workarounds and reference an unrelated Subset_Sum.H; simply delete those comment
lines so the file ends cleanly with the closing brace and rely on proper test
coverage instead.
In `@tpl_mincost.H`:
- Around line 669-682: The extraction loop currently does a linear search over
tasks inside the arc iteration (in the block using workers,
Net::Node_Arc_Iterator, tasks and result.assignments), causing O(n²) behavior;
create a reverse lookup map (e.g., DynMapTree<Node*, size_t> taskIndex) when
tasks are constructed, use taskIndex[tgt] to get j instead of the inner
for-loop, and only append to result.assignments when flow>0 and the lookup
succeeds; update any construction code to populate taskIndex with node->index
pairs.
- Around line 130-135: The function reduced_cost in tpl_mincost.H currently
takes an unused parameter Net::Node *from; remove this unused parameter from the
reduced_cost declaration and definition (drop the Net::Node *from parameter and
remove the "(void) from;" silencing), and update all call sites that pass the
from argument to call reduced_cost(net, arc, forward) (i.e., remove the extra
argument so the signatures match). Ensure the function prototype and any forward
declarations use the new signature and rebuild to catch remaining callers.
- Around line 325-326: Replace the C-style negation with the guideline-preferred
Python-style boolean operator: change the condition that currently uses
"!changed" to use "not changed" so it matches the existing use of "not
pq.is_empty()" and follows the "Use and, or, not" rule; locate the if statement
that checks the boolean variable changed and update the negation accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b967501d-8bff-46aa-9274-5902c279d79a
📒 Files selected for processing (9)
- Knapsack.H
- Matrix_Chain.H
- Tests/bipartite_test.cc
- ah-chronos-utils.H
- scripts/ci_policy_checks.rb
- scripts/test_ci_policy_checks.rb
- stat_utils.H
- tpl_bipartite.H
- tpl_mincost.H
🚧 Files skipped from review as they are similar to previous changes (2)
- ah-chronos-utils.H
- stat_utils.H
Fixes a possible overflow in the knapsack dynamic programming loop when weights are zero, adding a guard against wrap-around in unsigned types. Optimizes the assignment lookup in the task assignment problem by replacing a linear O(n) search with an O(log n) tree search through a reverse lookup map. Removes the unused 'from' parameter from the reduced_cost function and simplifies the bounds-checking logic in the knapsack item expansion. Fixes the empty bipartite graph tests so that they no longer throw exceptions and correctly return empty sets, removing the disabled test and updating the expectations.
Pull request overview
Copilot reviewed 30 out of 39 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (2)
tpl_mincost.H:353

`successive_shortest_paths()` stores pointers to `node_info` elements in `NODE_COOKIE`, but those pointers become dangling as soon as `node_info` goes out of scope at function return. This can corrupt later algorithms that rely on `NODE_COOKIE`. Please use a `Cookie_Guard` (or otherwise save/restore/clear node cookies) so cookies are not left pointing to freed memory.
Tests/knapsack_test.cc:385

Similar to other test files, the trailing `// satisfy CI policy` comment at EOF doesn't document a behavioral requirement and can become misleading over time. Consider removing it or replacing it with a short note about what regression this test suite is intended to cover.
}
}
// satisfy CI policy
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tpl_mincost.H (2)
735-760: ⚠️ Potential issue | 🔴 Critical

Must fix: validate the transportation inputs before indexing `costs[i][j]`.

`solve_transportation()` assumes `costs` is exactly `supplies.size() x demands.size()` and then blindly indexes it. A short matrix is immediate out-of-bounds UB, and negative supplies/demands create invalid capacities for the residual network. Fail fast before building the graph.

🛡️ Suggested validation
```diff
   if (m == 0 or n == 0) { result.feasible = true; return result; }
+  ah_invalid_argument_if(costs.size() != m)
+    << "costs row count must match supplies size";
+  for (size_t i = 0; i < m; ++i)
+    {
+      ah_invalid_argument_if(supplies[i] < Cost_Type{0})
+        << "supplies must be non-negative";
+      ah_invalid_argument_if(costs[i].size() != n)
+        << "each costs row must match demands size";
+    }
+  for (size_t j = 0; j < n; ++j)
+    ah_invalid_argument_if(demands[j] < Cost_Type{0})
+      << "demands must be non-negative";
+
   // Check supply-demand balance
   Cost_Type total_supply = Cost_Type{0};
```

As per coding guidelines, "Validate external inputs before processing" and "Use `ah_invalid_argument_if(condition)` macro for invalid arguments".

Also applies to: 793-798
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 735 - 760, The function solve_transportation currently assumes costs has dimensions supplies.size() x demands.size() and that supplies/demands are non-negative; validate inputs up-front and fail fast using the ah_invalid_argument_if macro. Add checks at the start of solve_transportation: ensure costs.size() == supplies.size(), for each i ensure costs[i].size() == demands.size(), and ensure every entry of supplies and demands is >= 0; also check that total_supply == total_demand (or handle imbalance per function contract) before indexing costs[i][j]; use ah_invalid_argument_if(condition) with clear messages referencing supplies, demands, or costs to reject malformed inputs.
271-332: ⚠️ Potential issue | 🟠 Major

Must fix: reject reachable negative-cost cycles before entering SSP.

`ssp_init_potentials()` is documented as negative-cycle aware, but it still returns success unconditionally and `successive_shortest_paths()` proceeds straight into Dijkstra. A reachable negative-cost cycle breaks the reduced-cost invariant that makes `ssp_shortest_path()` valid, so this can produce invalid augmenting paths instead of failing fast. Please add the nth Bellman-Ford relaxation check on the residual graph and propagate that failure here via an Aleph error path.

Based on learnings: "Check for negative cycle handling in shortest path algorithms".
Also applies to: 354-355
♻️ Duplicate comments (2)
Knapsack.H (2)
346-370: ⚠️ Potential issue | 🟡 Minor

Add a regression that actually hits the new overflow/filter branch.

The provided `Tests/knapsack_test.cc` coverage still uses `Knapsack_Item<int, int>` with counts in `0..3`, so this `value_ok`/group-skipping path is not exercised. Please add a targeted case with a narrow integral `V` and a count/value combination that crosses the guard. As per coding guidelines, "If headers are modified, corresponding tests must be added or updated".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Knapsack.H` around lines 346 - 370, The new overflow/filter branch around value_ok in the knapsack expansion logic isn't covered by tests; add a unit test in Tests/knapsack_test.cc that uses a narrow integral value type (e.g., Knapsack_Item<int, int8_t> or int16_t) with an items[i].value and count (take) combination that would trigger the overflow guard (so value_sz * take would exceed std::numeric_limits<V>::max()), asserting that the group is skipped or handled as expected; ensure the test constructs the knapsack, invokes the expansion path that hits value_ok's false branch and verifies behavior (e.g., expanded does not include that item), and include/update the test because headers were changed.
347-351: ⚠️ Potential issue | 🟠 Major

Avoid narrowing integral `V` through `size_t` in the bounded overflow path.

Lines 349-350 and Line 366 assume `size_t` is at least as wide as `V`. On 32-bit builds, `V = uint64_t/int64_t` truncates there, so `value_ok` and `val` can be computed from the low bits instead of the real value. Do this check/product in an unsigned form of `V` (or another guaranteed-wider type) rather than `size_t`.

Also applies to: 363-366
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Knapsack.H` around lines 347 - 351, The code narrows V into size_t when computing value_ok and val (uses items[i].value, take, value_ok, val), which breaks on platforms where size_t is narrower than V; fix by performing the overflow-checks and multiplication in an unsigned type at least as wide as V (e.g., std::make_unsigned_t<V> or a guaranteed-wider type like __uint128_t): convert items[i].value and std::numeric_limits<V>::max() to that unsigned type, compute value_ok as (value_u == 0 || take <= max_u / value_u), and compute the product in that unsigned/wider type before casting back to V so no trimming occurs on 32-bit builds.
🧹 Nitpick comments (3)
Tests/bipartite_test.cc (1)
1153-1154: Minor formatting nit: missing space and redundant comment.

The closing brace is missing a space before the comment, and there are two consecutive "satisfy CI policy" comments.

📝 Suggested cleanup

```diff
-}// satisfy CI policy
-// satisfy CI policy for tpl_bipartite.H and Subset_Sum.H
+} // satisfy CI policy
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/bipartite_test.cc` around lines 1153 - 1154, Update the trailing comment after the final closing brace in the test to fix spacing and remove duplication: insert a space between the brace and the comment token and delete the redundant second "satisfy CI policy" line so only a single, properly spaced comment remains after the closing brace in Tests/bipartite_test.cc.

scripts/test_ci_policy_checks.rb (2)
202-207: Nice to have — validate mocked `run!` command sequence, not only returned output.

`def_mock_run` currently ignores `_args`; adding expected command assertions would catch regressions where `changed_files` invokes the wrong git command but still gets a compatible mocked response.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test_ci_policy_checks.rb` around lines 202 - 207, Update def_mock_run to validate the sequence of commands passed into the mocked run! instead of only returning results: add an optional expected_commands parameter (array of expected arg arrays or strings), incrementally compare each invocation's _args against expected_commands[call_count] inside the new run! definition and raise an informative exception when they differ, while still returning results[call_count]; keep the existing call_count logic and ensure the original method(:run!) is still captured in old_method for restoration.
202-231: Should fix — scope mocks to the test instance instead of globally redefining `Object`/`File`.

`Object.send(:define_method, ...)` and `File.singleton_class.send(:define_method, :read, ...)` mutate global process state, which can make tests order-dependent/flaky if tests are parallelized or interleaved. Prefer block-scoped stubs.

Proposed refactor

```diff
 def def_mock_run(results)
   call_count = 0
-  old_method = method(:run!)
-  Object.send(:define_method, :run!) do |*_args|
-    results[call_count].tap { call_count += 1 }
-  end
-  yield
-ensure
-  Object.send(:define_method, :run!, old_method)
+  runner = ->(*_args) { results.fetch(call_count).tap { call_count += 1 } }
+  stub(:run!, runner) { yield }
 end

 def def_mock_git_show(mapping)
-  old_method = method(:git_show_optional)
-  Object.send(:define_method, :git_show_optional) do |spec|
-    mapping[spec]
-  end
-  yield
-ensure
-  Object.send(:define_method, :git_show_optional, old_method)
+  stub(:git_show_optional, ->(spec) { mapping[spec] }) { yield }
 end

 def def_mock_file_read(mapping)
-  old_method = File.method(:read)
-  File.singleton_class.send(:define_method, :read) do |p|
-    mapping.key?(p) ? mapping[p] : old_method.call(p)
-  end
-  yield
-ensure
-  File.singleton_class.send(:define_method, :read, old_method)
+  reader = ->(p) { mapping.fetch(p) { raise "Unexpected File.read(#{p}) in test" } }
+  File.stub(:read, reader) { yield }
 end
```

Based on learnings, "Tests must be deterministic and repeatable."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test_ci_policy_checks.rb` around lines 202 - 231, The helpers def_mock_run, def_mock_git_show and def_mock_file_read currently redefine global methods (Object#run!, Object#git_show_optional and File.read) which mutates global state; change them to create block-scoped, instance-scoped stubs instead: for def_mock_run define the stub on the test instance's singleton_class (e.g. self.singleton_class.define_method(:run!) { ... }) so only the test object sees run!; for def_mock_git_show stub git_show_optional on the specific helper/target object (or the class under test) rather than Object; and for def_mock_file_read use a test-framework-aware stub (e.g. RSpec allow(File).to receive(:read).and_wrap_original { |orig, p| mapping.key?(p) ? mapping[p] : orig.call(p) }) or define the method on a confined singleton (not globally on File) and ensure each helper restores the original method after yield. Ensure you still restore originals in the ensure block.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Knapsack.H`:
- Around line 132-133: Update the Doxygen `@throws` for each of the four public
entry points that call knapsack_detail::to_size_checked() to document the new
size-conversion failure mode: add a `@throws` entry describing the exception type
thrown by knapsack_detail::to_size_checked() (the size-conversion failure) when
capacity or an item weight cannot be converted to size_t, in addition to the
existing ah_domain_error and ah_bad_alloc entries so the public contract is
complete.
- Around line 104-105: Replace the use of ah_out_of_range_error_if with the
repo-standard ah_range_error_if for the numeric overflow check: locate the check
that uses ah_out_of_range_error_if(uvalue >
static_cast<UW>(std::numeric_limits<size_t>::max())) and change it to
ah_range_error_if(uvalue > static_cast<UW>(std::numeric_limits<size_t>::max()))
while preserving the existing diagnostic text built from fn_name and field_name;
ensure the same condition and message are passed to ah_range_error_if so
behavior/logging remains identical.
- Around line 163-164: The DP updates (e.g., the assignment in the if at
dp[i][w] = dp[i - 1][w - wi] + vi inside the knapsack DP loops) perform
unchecked addition and can overflow; modify the DP transition to first compute
the candidate sum using a safe check or wider type and only apply the update if
it does not overflow (or clamp/skip candidates that would exceed the capacity
sentinel V). Concretely, inside knapsack_01's DP loop(s) and the other DP update
sites, replace direct dp[...] + vi with a guarded computation (for example,
check dp[i-1][w - wi] <= V - vi before adding, or cast to a wider integer for
the addition and compare against V) so that any candidate sum exceeding V is
handled deterministically rather than wrapping/UB; ensure the same guard
approach is applied for all occurrences referenced in the review (the
transitions at the dp update lines and places where knapsack_bounded expands
items).
In `@tpl_mincost.H`:
- Around line 475-479: The temporary super-node insertion/cleanup around the
call to successive_shortest_paths() is not exception-safe
(super_source/super_sink and their arcs are only removed on the happy path);
introduce an RAII guard (e.g., a small scoped class like SuperNodeGuard or
similar) that records the created super_source, super_sink and arcs in its
constructor/initializer and removes them in its destructor, then replace the
manual remove calls with an instance of that guard created before calling
successive_shortest_paths() so cleanup runs on every exit path; update the
insertion/cleanup site in tpl_mincost.H (the code that creates
super_source/super_sink and calls successive_shortest_paths()) to use this
guard.
---
Outside diff comments:
In `@tpl_mincost.H`:
- Around line 735-760: The function solve_transportation currently assumes costs
has dimensions supplies.size() x demands.size() and that supplies/demands are
non-negative; validate inputs up-front and fail fast using the
ah_invalid_argument_if macro. Add checks at the start of solve_transportation:
ensure costs.size() == supplies.size(), for each i ensure costs[i].size() ==
demands.size(), and ensure every entry of supplies and demands is >= 0; also
check that total_supply == total_demand (or handle imbalance per function
contract) before indexing costs[i][j]; use ah_invalid_argument_if(condition)
with clear messages referencing supplies, demands, or costs to reject malformed
inputs.
---
Duplicate comments:
In `@Knapsack.H`:
- Around line 346-370: The new overflow/filter branch around value_ok in the
knapsack expansion logic isn't covered by tests; add a unit test in
Tests/knapsack_test.cc that uses a narrow integral value type (e.g.,
Knapsack_Item<int, int8_t> or int16_t) with an items[i].value and count (take)
combination that would trigger the overflow guard (so value_sz * take would
exceed std::numeric_limits<V>::max()), asserting that the group is skipped or
handled as expected; ensure the test constructs the knapsack, invokes the
expansion path that hits value_ok's false branch and verifies behavior (e.g.,
expanded does not include that item), and include/update the test because
headers were changed.
- Around line 347-351: The code narrows V into size_t when computing value_ok
and val (uses items[i].value, take, value_ok, val), which breaks on platforms
where size_t is narrower than V; fix by performing the overflow-checks and
multiplication in an unsigned type at least as wide as V (e.g.,
std::make_unsigned_t<V> or a guaranteed-wider type like __uint128_t): convert
items[i].value and std::numeric_limits<V>::max() to that unsigned type, compute
value_ok as (value_u == 0 || take <= max_u / value_u), and compute the product
in that unsigned/wider type before casting back to V so no trimming occurs on
32-bit builds.
---
Nitpick comments:
In `@scripts/test_ci_policy_checks.rb`:
- Around line 202-207: Update def_mock_run to validate the sequence of commands
passed into the mocked run! instead of only returning results: add an optional
expected_commands parameter (array of expected arg arrays or strings),
incrementally compare each invocation's _args against
expected_commands[call_count] inside the new run! definition and raise an
informative exception when they differ, while still returning
results[call_count]; keep the existing call_count logic and ensure the original
method(:run!) is still captured in old_method for restoration.
- Around line 202-231: The helpers def_mock_run, def_mock_git_show and
def_mock_file_read currently redefine global methods (Object#run!,
Object#git_show_optional and File.read) which mutates global state; change them
to create block-scoped, instance-scoped stubs instead: for def_mock_run define
the stub on the test instance's singleton_class (e.g.
self.singleton_class.define_method(:run!) { ... }) so only the test object sees
run!; for def_mock_git_show stub git_show_optional on the specific helper/target
object (or the class under test) rather than Object; and for def_mock_file_read
use a test-framework-aware stub (e.g. RSpec allow(File).to
receive(:read).and_wrap_original { |orig, p| mapping.key?(p) ? mapping[p] :
orig.call(p) }) or define the method on a confined singleton (not globally on
File) and ensure each helper restores the original method after yield. Ensure
you still restore originals in the ensure block.
In `@Tests/bipartite_test.cc`:
- Around line 1153-1154: Update the trailing comment after the final closing
brace in the test to fix spacing and remove duplication: insert a space between
the brace and the comment token and delete the redundant second "satisfy CI
policy" line so only a single, properly spaced comment remains after the closing
brace in Tests/bipartite_test.cc.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9f10a66c-9ce2-4780-b92b-5c9de5c408a6
📒 Files selected for processing (4)
- Knapsack.H
- Tests/bipartite_test.cc
- scripts/test_ci_policy_checks.rb
- tpl_mincost.H
- Fixes Doxygen annotation formatting in multiple header files:
  * Replaces `@par **Complexity**:` with `\par Complexity:` for consistency.
  * Adjusts LaTeX formula formatting, using `\f$...\f$` instead of `@f$...@f$`.
  * Improves alignment and readability of documentation blocks.
- Adds helper function `dp_add` in Knapsack.H to prevent overflow in addition:
  * Implements a safe add for integral types, clamping at the maximum value.
  * Integrates `dp_add` into the knapsack algorithms (0/1 and unbounded) for robustness.
- Introduces RAII in `mincost_supply_demand` for automatic node handling:
  * Adds a `SuperNodeGuard` that removes the super source/sink nodes on scope exit.
  * Removes duplicated manual cleanup code and prevents leaks on exceptions.
- Updates the Copilot instructions for the exception macros:
  * Clarifies the use of `ah_out_of_range_error_if` vs `ah_range_error_if`.
  * Fixes the `ah_out_of_range_error_if` macro example.
Actionable comments posted: 11
🧹 Nitpick comments (1)
Hungarian.H (1)
262-263: Consider making the single-argument `DynMatrix` constructor `explicit`.

This avoids unintended implicit conversion from `DynMatrix<Cost_Type>` to `Hungarian_Assignment<Cost_Type>`.
Verify each finding against the current code and only fix it if needed. In `@Hungarian.H` around lines 262 - 263, The single-argument constructor Hungarian_Assignment(const DynMatrix<Cost_Type> &cost) can cause unwanted implicit conversions; make it explicit by adding the explicit keyword to the constructor declaration/definition for Hungarian_Assignment to prevent implicit DynMatrix<Cost_Type> -> Hungarian_Assignment<Cost_Type> conversions, and update any call sites/tests if they relied on implicit construction.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/copilot-instructions.md:
- Line 281: The list item for `ah_range_error_if(condition)` contains an extra
leading dash creating `- -\`ah_range_error_if(condition)\``; edit the markdown
list to remove the extra dash so it becomes a single bullet (`-
\`ah_range_error_if(condition)\``) to render correctly and keep the rest of the
item text intact.
- Line 290: The example uses ah_out_of_range_error_if without the required
condition; update the macro invocation to include an explicit bound-check
condition (e.g., index >= size or index < 0 || index >= size) so the call to
ah_out_of_range_error_if(condition) is valid; reference the macro name
ah_out_of_range_error_if (or replace with ah_range_error_if(condition) per the
newer guideline) and ensure the message remains the same.
- Line 280: Update the documentation to use the correct standard exception name:
replace the incorrect `std::out_of_range_error` with `std::out_of_range` in the
description of the `ah_out_of_range_error_if` macro so it matches the actual
implementation in `ah-errors.H` (which throws `std::out_of_range`).
In `@Hungarian.H`:
- Around line 301-306: The initializer-list constructor
Hungarian_Assignment(std::initializer_list<...>) currently accepts matrices with
zero columns (e.g., {{}, {}}) because orig_cols_ is derived from
rows.begin()->size() but not validated; make behavior consistent with
Hungarian_Assignment(const DynMatrix<Cost_Type>&) by rejecting zero-column
inputs: after computing orig_cols_ (using rows and rows.begin()->size()), add a
check that throws/uses ah_invalid_argument_if when orig_cols_ == 0 (or
rows.empty()) so empty-column initializer lists are refused and n_ is only
computed for valid sizes.
- Around line 480-488: The current overflow guard uses
std::is_signed_v<Promoted> which is true for floating-point types; change the
branch condition to check both integral and signedness (e.g.,
std::is_integral_v<Promoted> && std::is_signed_v<Promoted>) so only signed
integer Promoted types execute the ah_overflow_error_if(v ==
std::numeric_limits<Promoted>::min()) check; update the branch around
cost.read(i, j), the ah_overflow_error_if call, and the negated(i, j) = -v
assignment so floating-point Promoted values take the else path and the integer
min-negation guard remains active.
In `@Knapsack.H`:
- Around line 369-381: The overflow check incorrectly casts negative item values
to size_t and truncates large takes when converting to V; fix by first handling
negative and zero item values explicitly (if items[i].value < V{0} set value_ok
appropriately or treat negative values as safe for overflow in this context, and
if items[i].value == V{0} set value_ok = true), then compute the maximum
allowable take without casting take to V: for integral V promote items[i].value
to an unsigned magnitude (e.g. std::make_unsigned_t<V> or a wider unsigned type)
and compare take <= maxUnsignedV / abs(items[i].value), and for
floating/non-integral V promote to a wider real type (or use
std::common_type_t<std::size_t, V>) and compare take <= std::floor(maxV /
items[i].value) so you never static_cast take to V and avoid truncation; update
the branches that set value_ok (the integral branch using items[i].value cast to
size_t and the non-integral branch using static_cast<V>(take)) accordingly.
- Around line 385-391: The lambda computing val wrongly casts items[i].value to
size_t in the integral branch, which corrupts negative values; fix the integral
branch in the anonymous function (the val computation) to avoid casting the item
value to an unsigned type and instead multiply by a value of type V (e.g., use
static_cast<V>(take) * items[i].value or otherwise cast take to V before
multiplication) so negative item values are handled correctly.
- Around line 119-128: The tests never exercise dp_add's overflow-clamping
branch; add unit tests that construct scenarios where dp_add(int a, int b) must
clamp (e.g., pick a and b such that b>0 and a > std::numeric_limits<int>::max()
- b) and assert the result equals std::numeric_limits<int>::max(); either add
direct dp_add<int> unit tests and/or a knapsack test with item values near
std::numeric_limits<int>::max() so accumulated sums hit the clamp, and keep the
existing small-range randomized tests for other coverage.
In `@tpl_mincost.H`:
- Line 271: ssp_init_potentials currently always returns success; implement a
full Bellman-Ford relaxation on the residual graph from the given source inside
ssp_init_potentials to detect any reachable negative-cost cycle, and make the
function return false if a negative cycle is found (true otherwise). Ensure the
caller that invokes ssp_init_potentials (the callsite that currently discards
its boolean) checks the returned bool and aborts/propagates an error instead of
proceeding; do not assume potentials are valid after ssp_init_potentials unless
it returned true. Use the residual edge representation and Net::Node/edge
utilities already in the diff to locate where to perform relaxations and where
the call is made.
- Around line 527-536: The guard is created after super_source is inserted so an
exception during the second net.insert_node() leaks the first node; change
SuperNodeGuard to be constructed before any insertions with src_ and snk_
initialized to nullptr (or provide an adopt(Node*) method) and call
guard.adopt(super_source) immediately after creating super_source and
guard.adopt(super_sink) after creating super_sink (or set the members directly
via a provided setter), so any exception after either insert will cause the
guard's destructor (~SuperNodeGuard) to remove any non-null nodes; reference
SuperNodeGuard, net.insert_node(), super_source and super_sink to locate the
code to update.
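The signed min-negation guard discussed for Hungarian.H above can be sketched in isolation (`safe_negate` is a hypothetical stand-in; the library throws via `ah_overflow_error_if` instead):

```cpp
#include <limits>
#include <stdexcept>
#include <type_traits>

// Guarded negation: for signed integral types, -v overflows when
// v == numeric_limits::min() (two's complement has no positive counterpart),
// while floating-point values can always be negated. The branch condition
// mirrors the review's suggested is_integral && is_signed check.
template <class Promoted>
Promoted safe_negate(Promoted v)
{
  if constexpr (std::is_integral_v<Promoted> and std::is_signed_v<Promoted>)
    {
      if (v == std::numeric_limits<Promoted>::min())
        throw std::overflow_error("negating the minimum value overflows");
    }
  return -v;
}
```

Note that `std::is_signed_v<double>` is true, which is exactly why the check must also require integrality, as the review points out.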
---
Nitpick comments:
In `@Hungarian.H`:
- Around line 262-263: The single-argument constructor
Hungarian_Assignment(const DynMatrix<Cost_Type> &cost) can cause unwanted
implicit conversions; make it explicit by adding the explicit keyword to the
constructor declaration/definition for Hungarian_Assignment to prevent implicit
DynMatrix<Cost_Type> -> Hungarian_Assignment<Cost_Type> conversions, and update
any call sites/tests if they relied on implicit construction.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 858c5c87-ed09-4d5d-9a00-c6ccc5126b06
📒 Files selected for processing (7)
- .github/copilot-instructions.md
- Hungarian.H
- Knapsack.H
- Planarity_Test.H
- tpl_mincost.H
- tpl_multi_polynomial.H
- tpl_polynomial.H
✅ Files skipped from review due to trivial changes (2)
- tpl_multi_polynomial.H
- tpl_polynomial.H
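The stream-style error macros referenced throughout this review (`ah_invalid_argument_if(cond) << msg`) can be built on a helper that throws from its destructor. This is a hypothetical sketch of the pattern only, not the actual ah-errors.H implementation:

```cpp
#include <sstream>
#include <stdexcept>

// Hypothetical sketch of a condition-guarded, stream-aware throwing macro:
// when the condition holds, a temporary accumulates the streamed message
// and throws the exception from its destructor at the end of the statement.
template <class Ex>
struct ThrowStream
{
  std::ostringstream os;
  template <class T>
  ThrowStream &operator<<(const T &v) { os << v; return *this; }
  ~ThrowStream() noexcept(false) { throw Ex(os.str()); }
};

// Only constructs the thrower (and hence only throws) when cond is true.
#define demo_invalid_argument_if(cond) \
  if (cond) ThrowStream<std::invalid_argument>{}
```

Usage looks like `demo_invalid_argument_if(n == 0) << "n must be positive, got " << n;` — a no-op when the condition is false, and an exception carrying the accumulated message otherwise.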
Refactor tpl_mincost.H to clarify reduced_cost calculation. Adjust Knapsack.H for better value overflow checks. Update Hungarian.H with argument validation. Refine copilot instructions for error handling.
There was a problem hiding this comment.
Pull request overview
Copilot reviewed 33 out of 40 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
tpl_multi_polynomial.H:33
- Many of the updated Doxygen comments removed required whitespace around inline math/words (e.g., "polynomials in@f$n@f$variables"), which hurts readability and can render poorly in generated docs. Please restore spaces around
`@f...@f` / `\f...\f` blocks (e.g., "in @f$n@f$ variables") throughout this header.
There was a problem hiding this comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tpl_mincost.H (3)
541-589: ⚠️ Potential issue | 🟠 Major — Must fix: return early when the transshipment instance is already zero-balanced.
After the balance check, `total_supply == 0` means every node demand is zero. This block still creates isolated super nodes and calls `successive_shortest_paths()`, which can fail the topology precondition even though the zero-flow solution is already feasible.
♻️ Proposed fix

```diff
   if (total_supply != total_demand)
     {
       result.feasible = false;
       return result;
     }
+
+  if (total_supply == Flow_Type{0})
+    {
+      result.feasible = true;
+      return result;
+    }
```

Based on learnings: "Verify algorithm correctness for edge cases: empty inputs, single element, duplicates".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 541 - 589, The balance check currently returns only when total_supply != total_demand but still proceeds to build super nodes even for a zero-balanced instance; detect the zero-flow edge case by checking total_supply == 0 (or equivalently all demands zero) immediately after the equality check and return result.feasible = true before creating the SuperNodeGuard or calling successive_shortest_paths; update code paths around total_supply, result.feasible, SuperNodeGuard, net, and successive_shortest_paths so no super nodes/arcs are created when total_supply==0.
659-665: ⚠️ Potential issue | 🟠 Major — Should fix: treat a rectangular assignment matrix as invalid input.
This branch is handling a malformed API call, not an infeasible assignment. Returning `feasible = false` hides the caller bug and diverges from the new macro-based validation in `solve_transportation()`.
♻️ Proposed fix

```diff
-  for (const auto &row : costs)
-    if (row.size() != n)
-      {
-        result.feasible = false;
-        return result;
-      }
+  for (size_t i = 0; i < n; ++i)
+    ah_invalid_argument_if(costs[i].size() != n)
+      << "solve_assignment: costs[" << i << "] has " << costs[i].size()
+      << " columns but expected " << n;
```

As per coding guidelines, "Validate external inputs before processing" and "Use `ah_invalid_argument_if(condition)` macro for invalid arguments".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 659 - 665, The code currently treats a rectangular "costs" matrix as an infeasible result; instead, validate the external input and report it as an invalid argument. In the loop that checks each row of costs (the for (const auto &row : costs) that compares row.size() to n), remove setting result.feasible = false and returning result, and replace it with a call to ah_invalid_argument_if(row.size() != n) so malformed (non-square) matrices are treated as invalid input consistent with solve_transportation()’s macro-based validation.
801-820: ⚠️ Potential issue | 🟠 Major — Must fix: keep one-sided empty transportation instances out of SSP.
The new `m == 0 and n == 0` fast path still lets inputs like `supplies = {}` / `demands = {0, 0}` fall through. After the balance check those are valid zero-shipment cases, but the graph built below can still violate `successive_shortest_paths()`'s single-source/sink precondition.
♻️ Proposed fix

```diff
   if (total_supply != total_demand)
     {
       result.feasible = false;
       return result;
     }
+
+  if (m == 0 or n == 0)
+    {
+      result.feasible = true;
+      result.shipments.assign(m, std::vector<Cost_Type>(n, Cost_Type{0}));
+      return result;
+    }
```

Based on learnings: "Verify algorithm correctness for edge cases: empty inputs, single element, duplicates".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 801 - 820, The early fast-path only handles both-empty (m == 0 and n == 0) but allows one-sided empty cases (e.g., supplies empty but demands present with zeros) to proceed and later break successive_shortest_paths(); change the guard to detect either side empty: if (m == 0 or n == 0) then after computing total_supply/total_demand immediately set result.feasible = (total_supply == total_demand) and return result (false when unbalanced, true when balanced zero-shipment), so no graph is built for a single empty side; update references around m, n, supplies, demands, result and ensure no later code assumes both sides non-empty before calling successive_shortest_paths().
🧹 Nitpick comments (3)
Hungarian.H (1)
434-438: Consider adding move-out methods for efficiency in wrapper functions. The `hungarian_assignment()` and `hungarian_max_assignment()` functions copy the assignment arrays from the `Hungarian_Assignment` object before it's destroyed. Consider adding methods like `extract_row_assignments()` that return `Array<long>&&` to enable move semantics for callers who don't need to preserve the solver state.
💡 Optional enhancement

```cpp
// Add to Hungarian_Assignment class
[[nodiscard]] Array<long>&& extract_row_assignments() noexcept
{ return std::move(row_to_col_); }

[[nodiscard]] Array<long>&& extract_col_assignments() noexcept
{ return std::move(col_to_row_); }
```

Then update the free function:

```diff
-  result.row_to_col = ha.get_row_assignments();
-  result.col_to_row = ha.get_col_assignments();
+  result.row_to_col = ha.extract_row_assignments();
+  result.col_to_row = ha.extract_col_assignments();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Hungarian.H` around lines 434 - 438, Add move-out accessors to Hungarian_Assignment (e.g., extract_row_assignments() and extract_col_assignments() that return Array<long>&& by std::move of row_to_col_ and col_to_row_) and update the wrapper functions hungarian_assignment() and hungarian_max_assignment() to call these extractors instead of the current get_row_assignments()/get_col_assignments() so the arrays are moved out efficiently when the solver object is about to be destroyed; ensure the new methods are noexcept and [[nodiscard]] to preserve semantics.
.github/copilot-instructions.md (1)
280-290: Documentation update is correct; note that the existing codebase may need alignment updates. The distinction between `ah_out_of_range_error_if` (for index bounds checks) and `ah_range_error_if` (for mathematical validation) is accurately documented. However, the codebase currently contains multiple instances where `ah_range_error_if` is used for index bounds checks (e.g., tpl_dyn_slist_nc.H:315, tpl_arrayQueue.H:299, pointer_table.H:454, driven_table.H), which should be updated to use `ah_out_of_range_error_if` to align with this new guidance. This alignment can be addressed in a follow-up commit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/copilot-instructions.md around lines 280 - 290, Replace uses of ah_range_error_if that perform index/bounds checks with ah_out_of_range_error_if so the macro semantics match the documented distinction; specifically locate calls like ah_range_error_if(index >= size) (and similar index validations) and change them to ah_out_of_range_error_if, keeping the existing message stream and logic intact (examples occur in tpl_dyn_slist_nc, tpl_arrayQueue, pointer_table, driven_table). Also add/update any corresponding _unless/unconditional variants if the original logic used inverted or unconditional forms.
tpl_mincost.H (1)
356-377: Nice to have: sync the public contract with the new negative-cycle error. `successive_shortest_paths()` now raises an error when `ssp_init_potentials()` detects a reachable negative-cost cycle, but the public Doxygen for this API still documents only the single-source/sink failure mode. Add the new `@throws` entry and the exception-safety guarantee so the contract matches the implementation.
As per coding guidelines, "All public member functions must have `@brief`, `@param`, `@return`, and `@throws` Doxygen tags" and "Document exception safety guarantees (no-throw, strong, basic)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 356 - 377, Update the Doxygen for the public API of successive_shortest_paths to include a `@throws` entry documenting that ssp_init_potentials may detect a reachable negative-cost cycle and cause the function to throw (the same ah_domain_error produced via ah_domain_error_if/ssp_init_potentials), and add the exception-safety guarantee (state the function provides the strong guarantee: no observable side-effects if it throws). Ensure the comment for successive_shortest_paths contains `@brief`, `@param`, `@return` and the new `@throws` and exception-safety tags so the public contract matches the implementation (refer to successive_shortest_paths and ssp_init_potentials).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tpl_mincost.H`:
- Around line 420-433: The unreachable-node potential assignment is wrong:
instead of setting ssp_info<Net>(p).potential to max_potential (which can break
Johnson reweighting and Dijkstra in ssp_shortest_path), compute the maximum
finite distance seen in the loop (e.g., max_distance = max over
ssp_info<Net>(p).distance for finite distances) and set unreachable potentials
to max_potential + max_distance (i.e., shift by the max distance), leaving the
existing updates for reachable nodes unchanged; update the code that uses
max_potential and ssp_info<Net>(p).distance/ssp_info<Net>(p).potential to
implement this new assignment.
---
Outside diff comments:
In `@tpl_mincost.H`:
- Around line 541-589: The balance check currently returns only when
total_supply != total_demand but still proceeds to build super nodes even for a
zero-balanced instance; detect the zero-flow edge case by checking total_supply
== 0 (or equivalently all demands zero) immediately after the equality check and
return result.feasible = true before creating the SuperNodeGuard or calling
successive_shortest_paths; update code paths around total_supply,
result.feasible, SuperNodeGuard, net, and successive_shortest_paths so no super
nodes/arcs are created when total_supply==0.
- Around line 659-665: The code currently treats a rectangular "costs" matrix as
an infeasible result; instead, validate the external input and report it as an
invalid argument. In the loop that checks each row of costs (the for (const auto
&row : costs) that compares row.size() to n), remove setting result.feasible =
false and returning result, and replace it with a call to
ah_invalid_argument_if(row.size() != n) so malformed (non-square) matrices are
treated as invalid input consistent with solve_transportation()’s macro-based
validation.
- Around line 801-820: The early fast-path only handles both-empty (m == 0 and n
== 0) but allows one-sided empty cases (e.g., supplies empty but demands present
with zeros) to proceed and later break successive_shortest_paths(); change the
guard to detect either side empty: if (m == 0 or n == 0) then after computing
total_supply/total_demand immediately set result.feasible = (total_supply ==
total_demand) and return result (false when unbalanced, true when balanced
zero-shipment), so no graph is built for a single empty side; update references
around m, n, supplies, demands, result and ensure no later code assumes both
sides non-empty before calling successive_shortest_paths().
---
Nitpick comments:
In @.github/copilot-instructions.md:
- Around line 280-290: Replace uses of ah_range_error_if that perform
index/bounds checks with ah_out_of_range_error_if so the macro semantics match
the documented distinction; specifically locate calls like
ah_range_error_if(index >= size) (and similar index validations) and change them
to ah_out_of_range_error_if, keeping the existing message stream and logic
intact (examples occur in tpl_dyn_slist_nc, tpl_arrayQueue, pointer_table,
driven_table). Also add/update any corresponding _unless/unconditional variants
if the original logic used inverted or unconditional forms.
In `@Hungarian.H`:
- Around line 434-438: Add move-out accessors to Hungarian_Assignment (e.g.,
extract_row_assignments() and extract_col_assignments() that return
Array<long>&& by std::move of row_to_col_ and col_to_row_) and update the
wrapper functions hungarian_assignment() and hungarian_max_assignment() to call
these extractors instead of the current
get_row_assignments()/get_col_assignments() so the arrays are moved out
efficiently when the solver object is about to be destroyed; ensure the new
methods are noexcept and [[nodiscard]] to preserve semantics.
In `@tpl_mincost.H`:
- Around line 356-377: Update the Doxygen for the public API of
successive_shortest_paths to include a `@throws` entry documenting that
ssp_init_potentials may detect a reachable negative-cost cycle and cause the
function to throw (the same ah_domain_error produced via
ah_domain_error_if/ssp_init_potentials), and add the exception-safety guarantee
(state the function provides the strong guarantee: no observable side-effects if
it throws). Ensure the comment for successive_shortest_paths contains `@brief`,
`@param`, `@return` and the new `@throws` and exception-safety tags so the public
contract matches the implementation (refer to successive_shortest_paths and
ssp_init_potentials).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ff531026-fb31-46d7-b85c-bd87ac9cae70
📒 Files selected for processing (4)
- .github/copilot-instructions.md
- Hungarian.H
- Knapsack.H
- tpl_mincost.H
✅ Files skipped from review due to trivial changes (1)
- Knapsack.H
Refactor Hungarian.H to extract assignment arrays. Improve `successive_shortest_paths` documentation regarding exception safety. Update comments in HLD.H and Knapsack.H.
There was a problem hiding this comment.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tpl_mincost.H (1)
299-303: ⚠️ Potential issue | 🟠 Major — Must fix: protect distance/potential arithmetic from overflow.
`Flow_Type` additions/subtractions in relaxations and in `max_potential + max_distance` are unchecked. For integral `Flow_Type`, this can overflow and corrupt shortest-path ordering/cycle detection.
💡 Suggested hardening (overflow-safe relax/update)

```diff
+#include <type_traits>
 ...
 template <class Net>
 bool ssp_init_potentials(Net &net, typename Net::Node *source)
 {
   using Flow_Type = typename Net::Flow_Type;
+  auto safe_add = [](Flow_Type a, Flow_Type b, Flow_Type &out) -> bool
+  {
+    if constexpr (std::is_integral_v<Flow_Type>)
+      {
+        if ((b > 0 and a > std::numeric_limits<Flow_Type>::max() - b) or
+            (b < 0 and a < std::numeric_limits<Flow_Type>::lowest() - b))
+          return false;
+      }
+    out = a + b;
+    return true;
+  };
 ...
-  Flow_Type new_d = d + arc->cost;
-  if (new_d < ssp_info<Net>(tgt).potential)
+  Flow_Type new_d{};
+  if (safe_add(d, arc->cost, new_d) and
+      new_d < ssp_info<Net>(tgt).potential)
 ...
-  Flow_Type new_d = d - arc->cost; // negative cost for backward
-  if (new_d < ssp_info<Net>(src).potential)
+  Flow_Type new_d{};
+  if (safe_add(d, -arc->cost, new_d) and
+      new_d < ssp_info<Net>(src).potential)
```

Also applies to: 317-318, 341-349, 438-455
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_mincost.H` around lines 299 - 303, The distance/potential arithmetic in the shortest-path relaxations is vulnerable to integer overflow: locate the relax/update logic around Flow_Type d = ssp_info<Net>(src).potential, the new_d = d + arc->cost calculation, and other places that add/subtract Flow_Type (also where max_potential + max_distance is computed) and replace raw additions/subtractions with overflow-safe checks (e.g., verify operands are < INF and that addition won't exceed INF before assigning; use saturated arithmetic or promote to a wider integer type for the computation and clamp to INF on overflow); ensure these checks are applied consistently in functions/blocks referencing ssp_info<Net>::potential, arc->cost, new_d, max_potential, and max_distance so relaxations and potential updates never wrap and comparisons against INF remain valid.
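The `safe_add` helper proposed above can be exercised on its own; a minimal sketch (name and semantics follow the review's suggestion, not the shipped code):

```cpp
#include <limits>
#include <type_traits>

// Overflow-checked addition: report failure instead of wrapping for
// integral types; non-integral types (e.g., double) always succeed.
// On failure the caller simply skips the relaxation step.
template <class Flow_Type>
bool safe_add(Flow_Type a, Flow_Type b, Flow_Type &out)
{
  if constexpr (std::is_integral_v<Flow_Type>)
    {
      if ((b > 0 and a > std::numeric_limits<Flow_Type>::max() - b) or
          (b < 0 and a < std::numeric_limits<Flow_Type>::lowest() - b))
        return false; // would overflow
    }
  out = a + b;
  return true;
}
```

Rejecting rather than clamping is the right choice inside a relaxation loop, since a clamped distance could still win a `<` comparison and corrupt the shortest-path ordering.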
🧹 Nitpick comments (2)
Hungarian.H (1)
389-403: Use value-return with `&&`-qualification for safer move semantics.
`extract_row_assignments()` and `extract_col_assignments()` should return by value with `&&`-qualified methods instead of returning rvalue references to members. This pattern prevents reference-lifetime traps and enables copy elision (NRVO/RVO), aligning with the guideline "Return by value to enable copy elision."
The `Array<long>` type has full move semantics support (`Array(Array &&s) noexcept` and move assignment), making this refactor safe and efficient. Update call sites at lines 459-460 to use `std::move(ha)`.
Proposed refactor

```diff
-  [[nodiscard]] Array<long> &&extract_row_assignments() noexcept
+  [[nodiscard]] Array<long> extract_row_assignments() && noexcept
   { return std::move(row_to_col_); }

-  [[nodiscard]] Array<long> &&extract_col_assignments() noexcept
+  [[nodiscard]] Array<long> extract_col_assignments() && noexcept
   { return std::move(col_to_row_); }

 // At lines 459-460:
-  result.row_to_col = ha.extract_row_assignments();
-  result.col_to_row = ha.extract_col_assignments();
+  result.row_to_col = std::move(ha).extract_row_assignments();
+  result.col_to_row = std::move(ha).extract_col_assignments();
```

Also applies to the secondary occurrence around lines 449-463.
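The `&&`-qualified accessor pattern can be demonstrated standalone, with `std::vector<long>` standing in for `Array<long>` (class and member names here are illustrative):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Rvalue-qualified move-out accessor: returning by value from a method
// that can only be called on rvalues transfers ownership safely, without
// exposing a dangling rvalue reference to a member of a dead object.
class Assignment
{
  std::vector<long> row_to_col_;
public:
  explicit Assignment(std::size_t n) : row_to_col_(n, -1) {}

  [[nodiscard]] std::vector<long> extract_row_assignments() && noexcept
  { return std::move(row_to_col_); }
};
```

Calling `std::move(solver).extract_row_assignments()` compiles; calling it on an lvalue without `std::move` does not, which makes the ownership transfer explicit at every call site.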
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Hungarian.H` around lines 389 - 403, The methods extract_row_assignments() and extract_col_assignments() currently return rvalue references to members; change their signatures to return Array<long> by value and add an rvalue-ref qualifier to the method (e.g. Array<long> extract_row_assignments() && noexcept) so they can only be called on temporaries, then return std::move(row_to_col_) / std::move(col_to_row_); update all call sites (the ones noted around lines 449-463 and the later usage) to call these on an rvalue (e.g. std::move(ha).extract_row_assignments() or use std::move(ha) before calling) so ownership is safely transferred and NRVO/move semantics are used.
stat_utils.H (1)
907-914: Consider using `Array<T>` and `in_place_sort` for consistency.
This new container overload uses `std::vector<T>` and `std::sort`, while the `percentile` function was refactored to use `Array<T>` and `in_place_sort`. For internal consistency within the header, consider aligning this legacy API implementation with the same approach.
♻️ Suggested refactor for consistency

```diff
-  std::vector<T> sorted(std::begin(data), std::end(data));
-  if (sorted.empty())
+  Array<T> sorted(std::begin(data), std::end(data));
+  if (sorted.is_empty())
     {
       avg = var = med = _min = _max = T();
       return;
     }
-  std::sort(sorted.begin(), sorted.end());
+  in_place_sort(sorted);
```
Verify each finding against the current code and only fix it if needed. In `@stat_utils.H` around lines 907 - 914, The code creates a std::vector<T> named sorted and calls std::sort; change this to use Array<T> and in_place_sort for consistency with percentile: construct Array<T> sorted from std::begin(data)/std::end(data) (or appropriate iterator overload) and replace std::sort(sorted.begin(), sorted.end()) with in_place_sort(sorted) (or the in_place_sort overload for iterators if required); keep the empty-check and the avg/var/med/_min/_max initialization and early return logic unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Hungarian.H`:
- Around line 262-279: The Hungarian_Assignment constructor accepts a cost
matrix but does not validate entries for non-finite values, which can cause
solve() (and its augmenting-path logic) to pick invalid indices; before building
padded and calling solve(padded) validate the input DynMatrix<Cost_Type> in
Hungarian_Assignment (and the similar constructor around lines 299-325) to scan
every cost entry and reject (throw or ah_invalid_argument_if) any std::isnan or
std::isinf values (or !std::isfinite) with a clear error message; ensure this
validation occurs prior to padding/allocation so solve() never sees NaN/Inf
values.
In `@Knapsack.H`:
- Line 334: Update the `@throws` tag to name the actual exception type instead of
the macro: replace "ah_invalid_argument_if" with the concrete exception (e.g.,
std::invalid_argument) and keep the condition text so the doc reads like
"@throws std::invalid_argument if items.size() != counts.size()."; reference the
existing macro name ah_invalid_argument_if in your edit to ensure you change the
macro-based doc entry to the real exception type.
In `@tpl_mincost.H`:
- Around line 688-692: Update the Doxygen comments for the functions
solve_assignment and solve_transportation to include a `@throws` entry documenting
that they can throw an invalid_argument (via ah_invalid_argument_if) when input
validation fails (e.g., non-square costs matrix, mismatched dimensions,
malformed supply/demand vectors); mention the condition(s) that trigger the
exception and the exception type so the API docs reflect the newly introduced
argument validation.
- Around line 330-359: Add regression tests that exercise the newly changed
behavior paths: create unit tests that construct small Net instances and use
Arc_Iterator<Net>/Node_Iterator<Net> to trigger the negative-cycle detection
branch (where the potential comparisons in the arc-relaxation loop return
false), tests that verify unreachable nodes have ssp_info<Net>(node).potential
set to 0, tests for matrix-shape validation and non-negative supply/demand
validation (create invalid matrices/supplies and assert the code rejects them),
and a test for the empty-side feasibility rule; locate the logic around the
arc-relaxation loop and ssp_info<Net> usage to drive inputs that hit each branch
and assert the expected accept/reject outcomes to prevent regressions.
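The Bellman-Ford negative-cycle detection requested for `ssp_init_potentials` can be sketched on a plain edge list (`Edge` and `bellman_ford_ok` are illustrative names, not the library's Net types):

```cpp
#include <limits>
#include <vector>

// Edge of a directed graph with a (possibly negative) cost.
struct Edge { int src, tgt; long cost; };

// Bellman-Ford sketch: after n-1 rounds of relaxation, shortest distances
// from `source` are final unless a reachable negative-cost cycle exists,
// in which case one extra pass still finds an improvable edge.
// Returns false when such a cycle is detected.
bool bellman_ford_ok(int n, const std::vector<Edge> &edges,
                     int source, std::vector<long> &dist)
{
  const long INF = std::numeric_limits<long>::max();
  dist.assign(n, INF);
  dist[source] = 0;
  for (int round = 0; round < n - 1; ++round)
    for (const Edge &e : edges)
      if (dist[e.src] != INF and dist[e.src] + e.cost < dist[e.tgt])
        dist[e.tgt] = dist[e.src] + e.cost;
  for (const Edge &e : edges) // extra pass: any improvement => cycle
    if (dist[e.src] != INF and dist[e.src] + e.cost < dist[e.tgt])
      return false;
  return true;
}
```

A regression test can assert that a graph containing a reachable negative cycle is rejected while an acyclic instance yields the expected distances.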
---
Outside diff comments:
In `@tpl_mincost.H`:
- Around line 299-303: The distance/potential arithmetic in the shortest-path
relaxations is vulnerable to integer overflow: locate the relax/update logic
around Flow_Type d = ssp_info<Net>(src).potential, the new_d = d + arc->cost
calculation, and other places that add/subtract Flow_Type (also where
max_potential + max_distance is computed) and replace raw additions/subtractions
with overflow-safe checks (e.g., verify operands are < INF and that addition
won't exceed INF before assigning; use saturated arithmetic or promote to a
wider integer type for the computation and clamp to INF on overflow); ensure
these checks are applied consistently in functions/blocks referencing
ssp_info<Net>::potential, arc->cost, new_d, max_potential, and max_distance so
relaxations and potential updates never wrap and comparisons against INF remain
valid.
---
Nitpick comments:
In `@Hungarian.H`:
- Around line 389-403: The methods extract_row_assignments() and
extract_col_assignments() currently return rvalue references to members; change
their signatures to return Array<long> by value and add an rvalue-ref qualifier
to the method (e.g. Array<long> extract_row_assignments() && noexcept) so they
can only be called on temporaries, then return std::move(row_to_col_) /
std::move(col_to_row_); update all call sites (the ones noted around lines
449-463 and the later usage) to call these on an rvalue (e.g.
std::move(ha).extract_row_assignments() or use std::move(ha) before calling) so
ownership is safely transferred and NRVO/move semantics are used.
In `@stat_utils.H`:
- Around line 907-914: The code creates a std::vector<T> named sorted and calls
std::sort; change this to use Array<T> and in_place_sort for consistency with
percentile: construct Array<T> sorted from std::begin(data)/std::end(data) (or
appropriate iterator overload) and replace std::sort(sorted.begin(),
sorted.end()) with in_place_sort(sorted) (or the in_place_sort overload for
iterators if required); keep the empty-check and the avg/var/med/_min/_max
initialization and early return logic unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 532565e4-2550-456c-b906-0f553cf51b38
📒 Files selected for processing (6)
- HLD.H
- Hungarian.H
- Knapsack.H
- ah-errors.H
- stat_utils.H
- tpl_mincost.H
✅ Files skipped from review due to trivial changes (1)
- HLD.H
🚧 Files skipped from review as they are similar to previous changes (1)
- ah-errors.H
- Add validation of finite costs for floating-point types in the Hungarian algorithm. - Implement overflow-safe arithmetic in Dijkstra and Bellman-Ford for integral types. - Change move semantics of the Hungarian assignment extractors (they now require an rvalue object). - Fix exception documentation in Knapsack and add validation in assignment/transportation. - Refactor compute_basic_stats to use Array and in_place_sort. - Add regression tests for negative-cycle detection, unreachable nodes, and input validation.
In tpl_mincost.H: - Replace direct arithmetic operations with sat_add/sat_sub in check_potential to avoid overflow with large costs. - Change the node_info allocation from std::vector to Array, guaranteeing stable element addresses. - Replace the workers and tasks vectors with Array for consistency. - Fix a typo in a comment. In stat_utils.H: - Change the Array construction in percentile and basic_stats to use append instead of the range constructor for better compatibility.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Refactor
Tests