Add Implied bound cuts. Find more knapsack cuts. Lift knapsack cuts #970
Conversation
/ok to test daad714
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Thread probing-implied-bound data through branch-and-bound and cut generation; add implied-bound cut generation and duplicate-cut detection; refactor rational/GCD helpers; adjust LP concurrent-halt and MIP mode handling; update B&B gap/termination control; propagate probing structures into solver entry points and heuristics.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cpp/src/branch_and_bound/branch_and_bound.cpp (1)
2223-2227: ⚠️ Potential issue | 🟠 Major

Guard cut-pool diagnostics behind debug logging.

Line 2223 forces this block on in all builds (`#if 1`), so every cut pass prints unconditionally. In this hot path, that can materially increase runtime and log volume, and `printf` on line 2226 bypasses logger controls.

🔧 Suggested fix

```diff
-#if 1
+#ifdef CUOPT_LOG_DEBUG
   cut_pool.print_cutpool_types();
   print_cut_types("In LP ", cut_types, settings_);
-  printf("Cut pool size: %d\n", cut_pool.pool_size());
+  settings_.log.debug("Cut pool size: %d\n", cut_pool.pool_size());
 #endif
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 2223 - 2227, This diagnostic block is unconditionally enabled via "#if 1" and uses printf, so wrap the cut_pool.print_cutpool_types(), print_cut_types("In LP ", cut_types, settings_), and the pool size output behind the project's debug/logging guard (e.g., check a debug flag or Logger::is_debug_enabled()) instead of "#if 1"; also replace the printf call with the project's logger (e.g., processLogger.debug/info) so it respects log levels and formatting; specifically, remove the "#if 1" block, add a conditional like "if (debug_enabled)" around calls to cut_pool.print_cutpool_types(), print_cut_types(...), and the pool-size output, and use the logger API rather than printf when emitting the pool size.

cpp/src/cuts/cuts.cpp (1)
725-744: ⚠️ Potential issue | 🟠 Major

Remove unconditional knapsack logging from the hot path.

Line 725 (`if (1)`) and line 741 (`#if 1`) force verbose logging for every qualifying row, which can significantly slow separation and flood logs on large MIPs.

♻️ Proposed fix

```diff
-  if (1) {
+  if (verbose) {
     settings.log.printf(
       "Knapsack constraint %d row len %d beta %e sum_neg %e sum_pos %e sum_pos / (row_len - 1) %e\n",
       i, row_len, beta, sum_neg, sum_pos, sum_pos / (row_len - 1));
   }
@@
-#if 1
-  i_t num_knapsack_constraints = knapsack_constraints_.size();
-  settings.log.printf("Number of knapsack constraints %d\n", num_knapsack_constraints);
-#endif
+  if (verbose) {
+    i_t num_knapsack_constraints = knapsack_constraints_.size();
+    settings.log.printf("Number of knapsack constraints %d\n", num_knapsack_constraints);
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 725 - 744, The code currently forces verbose logging in the knapsack detection hot path via the unconditional if (1) that calls settings.log.printf and the `#if` 1 block printing num_knapsack_constraints; remove these unconditional logs and instead wrap both the per-row printf and the final count printf in a runtime verbosity or debug flag check (e.g., settings.verbosity() > N or settings.log.isDebugEnabled()) so logging only occurs when enabled, and avoid computing expensive expressions (like sum_pos / (row_len - 1)) unless the log will be emitted; update the blocks around knapsack_constraints_, the per-row settings.log.printf call, and the num_knapsack_constraints print to use the conditional verbosity guard.
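Both findings above ask for the same pattern: gate diagnostics behind a flag so disabled logging does no formatting work. A minimal sketch of such a guard; the `debug_logger` type, its fields, and the `emitted` counter are illustrative assumptions, not cuopt's actual `settings.log` API:

```cpp
#include <cstdarg>
#include <cstdio>

// Illustrative logger with a runtime debug switch. When disabled, the
// call returns before any formatting happens, so the hot path pays only
// for a branch. (Hypothetical type; not the cuopt logger.)
struct debug_logger {
  bool enabled = false;  // off by default, unlike the reviewed `#if 1` blocks
  int emitted  = 0;      // counts messages actually written

  void debug(const char* fmt, ...) {
    if (!enabled) return;  // the guard the review asks for
    ++emitted;
    va_list args;
    va_start(args, fmt);
    std::vfprintf(stderr, fmt, args);
    va_end(args);
  }
};
```

A compile-time variant (wrapping the whole block in a debug `#ifdef`) removes even the branch, at the cost of needing a rebuild to enable the output.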
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b3fc359d-9dfd-45be-9b16-50160a958299
📒 Files selected for processing (3)
- cpp/src/branch_and_bound/branch_and_bound.cpp
- cpp/src/cuts/cuts.cpp
- cpp/src/cuts/cuts.hpp
…nstraints. Fix variables to reduce the size of seperation problem
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/cuts/cuts.cpp (1)
2058-2112: ⚠️ Potential issue | 🟠 Major

Guard the global scaling step in `rational_coefficients()`.

Line 696 now calls this helper on every row. When a row has no integer terms, `gcd()` and `lcm()` both return `0`, so `scalar` becomes NaN and the whole inequality gets scaled by NaN. On the other side, rows with many distinct denominators can overflow the accumulated scale before the later `abs(scalar) > 1000` guard ever runs. Please short-circuit the no-integer case and cap the denominator accumulation before applying `scale()`. As per coding guidelines: "Check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 2058 - 2112, The scaling step in rational_coefficients() must be guarded: if no integer terms were found (indices is empty) return true immediately to avoid gcd/lcm on empty vectors; compute gcd/lcm only when denominators are non-empty. Harden lcm() (and gcd()) to detect overflow during accumulation and clamp the result to a safe maximum (so huge denominators cannot blow up the scalar), and ensure rational_coefficients checks the computed scalar is finite, non-zero and within the allowed bounds (use std::isfinite or equivalent and the existing abs(scalar) > 1000 guard) before calling rational_inequality.scale(scalar). Update function names referenced: rational_coefficients, gcd, lcm, and the vectors numerators/denominators/indices accordingly.
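To make the failure mode concrete, here is a sketch of vector `gcd`/`lcm` helpers with the guards this comment asks for. The helper names, the cap value of 1000, and `safe_scalar` are illustrative assumptions; the actual cuopt helpers may differ:

```cpp
#include <cmath>
#include <cstdint>
#include <numeric>  // std::gcd
#include <vector>

// gcd over a vector; returns 0 for an empty or all-zero input, which is
// exactly the case the review says must be short-circuited.
int64_t gcd_of(const std::vector<int64_t>& v) {
  int64_t g = 0;
  for (int64_t x : v) g = std::gcd(g, x);  // gcd(0, x) == |x|
  return g;
}

// lcm over a vector of positive entries (denominators from a rational
// approximation are positive), clamped: once the running lcm would
// exceed `cap`, report cap + 1 instead of overflowing int64.
int64_t lcm_capped(const std::vector<int64_t>& v, int64_t cap) {
  if (v.empty()) return 0;
  int64_t l = 1;
  for (int64_t x : v) {
    if (x == 0) return 0;
    const int64_t g = std::gcd(l, x);
    if (l / g > cap / x) return cap + 1;  // next step would exceed the cap
    l = (l / g) * x;
  }
  return l;
}

// scalar = lcm(denominators) / gcd(numerators), validated before use.
bool safe_scalar(const std::vector<int64_t>& numerators,
                 const std::vector<int64_t>& denominators, double& scalar) {
  const int64_t g = gcd_of(numerators);
  const int64_t l = lcm_capped(denominators, 1000);
  if (g == 0 || l == 0) return false;  // no integer terms: nothing to scale
  scalar = static_cast<double>(l) / static_cast<double>(g);
  return std::isfinite(scalar) && scalar > 0.0 && scalar <= 1000.0;
}
```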
🧹 Nitpick comments (1)
cpp/src/cuts/cuts.cpp (1)
726-745: Please re-hide the knapsack discovery prints.
`knapsack_generation_t` is constructed unconditionally from `cut_generation_t`, so `if (1)`/`#if 1` make every cut setup emit per-row diagnostics and the final count even when knapsack cuts are disabled. On large models this will add avoidable overhead in a hot path and flood logs. Prefer `settings.log.debug(...)` or a dedicated verbose flag.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 726 - 745, The unconditional diagnostic prints around knapsack detection should be hidden: replace the "if (1) { settings.log.printf(...)" block with a debug-level log call (e.g. settings.log.debug(...) or whatever debug API exists) so the per-row message is emitted only when verbose/debug logging is enabled, and change the final "#if 1 ... settings.log.printf("Number of knapsack constraints %d\n", num_knapsack_constraints);" to a debug log as well (using knapsack_constraints_.size() to compute num_knapsack_constraints). Ensure you modify the printf calls around the knapsack_detection block and the final count print to use the debug logging path (or guard them with the existing verbose flag) so diagnostics are no longer emitted unconditionally.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 845-882: The loop can fix all items leaving values.empty(), but
the code still calls solve_knapsack_problem which will crash on empty input;
before calling solve_knapsack_problem check if values.empty(): if so and
seperation_rhs > 0 then set objective = 0 (the reduced knapsack optimum is zero
because any cover must come from fixed_values==1), skip calling
solve_knapsack_problem, then proceed with the existing
isNaN/restore_complemented handling; ensure you reference
solve_knapsack_problem, values, seperation_rhs, fixed_values and
restore_complemented when making the change.
In `@cpp/src/cuts/cuts.hpp`:
- Around line 169-178: The header declares two templates (rational_approximation
and rational_coefficients) but their definitions live only in cuts.cpp and are
not explicitly instantiated, causing unresolved templates for other TUs; either
move the full template definitions of rational_approximation(f_t) and
rational_coefficients<i_t,f_t> into this header (so users including cuts.hpp can
instantiate them), or remove these public declarations and keep the definitions
private in cuts.cpp and/or add explicit template instantiations in cuts.cpp for
every i_t/f_t combination you need; refer to the symbols rational_approximation,
rational_coefficients, variable_type_t and inequality_t in your change to locate
the declarations to update.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3015e873-2776-4e76-95da-cda0669f8ff9
📒 Files selected for processing (2)
- cpp/src/cuts/cuts.cpp
- cpp/src/cuts/cuts.hpp
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cpp/src/cuts/cuts.cpp (1)
2409-2435: ⚠️ Potential issue | 🟠 Major

Avoid scaling the whole inequality with a non-finite factor.

If an inequality has no integer coefficients to rationalize, or all approximated integer coefficients are `0`, then `gcd(...)`/`lcm(...)` return `0` and line 2430 computes `scalar` as NaN/Inf. The function then falls through and scales the entire inequality with that value.

Possible fix

```diff
   for (i_t k = 0; k < input_inequality.size(); k++) {
     const i_t j = rational_inequality.index(k);
     const f_t x = rational_inequality.coeff(k);
     if (var_types[j] == variable_type_t::INTEGER) {
       int64_t numerator, denominator;
       if (!rational_approximation(x, static_cast<int64_t>(1000), numerator, denominator)) {
         return false;
       }
       numerators.push_back(numerator);
       denominators.push_back(denominator);
       indices.push_back(k);
       rational_inequality.vector.x[k] = static_cast<f_t>(numerator) / static_cast<f_t>(denominator);
     }
   }
+
+  if (numerators.empty()) { return true; }
   int64_t gcd_numerators = gcd(numerators);
   int64_t lcm_denominators = lcm(denominators);
+  if (gcd_numerators == 0 || lcm_denominators == 0) { return true; }
   f_t scalar = static_cast<f_t>(lcm_denominators) / static_cast<f_t>(gcd_numerators);
-  if (scalar < 0) { return false; }
+  if (!std::isfinite(scalar) || scalar <= 0) { return false; }
   if (std::abs(scalar) > 1000) { return false; }
```

As per coding guidelines: "Check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 2409 - 2435, The current code computes scalar = lcm(denominators)/gcd(numerators) without guarding zero/empty cases; if numerators or denominators are empty or gcd_numerators==0 or lcm_denominators==0 the scalar becomes non-finite and you end up scaling rational_inequality with NaN/Inf. Fix by checking after collecting numerators/denominators: if numerators.empty() or denominators.empty() or gcd_numerators == 0 or lcm_denominators == 0 then return false (or skip scaling), and also verify std::isfinite(scalar) before calling rational_inequality.scale(scalar); keep the existing magnitude guard (std::abs(scalar) > 1000) as well.

cpp/src/branch_and_bound/branch_and_bound.cpp (1)
2217-2221: ⚠️ Potential issue | 🟠 Major

Avoid unconditional cut-pool printing in the cut loop.

Line 2217 forces verbose cut-pool output in every build (`#if 1`), which can materially slow large runs and pollute stdout. Keep this behind a debug guard and route output through the logger.

💡 Proposed fix

```diff
-#if 1
+#ifdef PRINT_CUT_POOL_TYPES
   cut_pool.print_cutpool_types();
   print_cut_types("In LP ", cut_types, settings_);
-  printf("Cut pool size: %d\n", cut_pool.pool_size());
+  settings_.log.printf("Cut pool size: %d\n", cut_pool.pool_size());
 #endif
```

As per coding guidelines: "Assess algorithmic complexity for large-scale problems (millions of variables/constraints); ensure O(n log n) or better complexity, not O(n²) or worse".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 2217 - 2221, Remove the unconditional debug prints guarded by "#if 1" around cut_pool.print_cutpool_types(), print_cut_types("In LP ", cut_types, settings_), and printf("Cut pool size: %d\n", cut_pool.pool_size()); instead wrap them in a debug/verbose guard (e.g., if (settings_.verbose || ENABLE_DEBUG_LOGGING) or an `#ifdef` DEBUG_CUT_POOL) and route messages through the project's logging facility (replace printf with logger->info/debug/error as appropriate) so cut-pool output only appears when verbose/debug is enabled and does not run in production builds.
♻️ Duplicate comments (1)
cpp/src/cuts/cuts.cpp (1)
872-882: ⚠️ Potential issue | 🔴 Critical

Handle the all-fixed separation case before calling `solve_knapsack_problem`.

After the fixing pass, `values` can be empty while `seperation_rhs` is still positive. Line 881 still calls `solve_knapsack_problem`, and that routine immediately hits `std::max_element(values.begin(), values.end())` on an empty range. Treat the reduced knapsack optimum as `0` and continue with the cover induced by `fixed_values`.

Possible fix

```diff
   if (verbose) { settings.log.printf("Calling solve_knapsack_problem\n"); }
-  f_t objective = solve_knapsack_problem(values, weights, seperation_rhs, solution);
+  f_t objective = 0.0;
+  if (!values.empty()) {
+    objective = solve_knapsack_problem(values, weights, seperation_rhs, solution);
+  } else {
+    solution.clear();
+  }
   if (std::isnan(objective)) {
     restore_complemented(complemented_variables);
     return -1;
   }
```

As per coding guidelines: "Validate algorithm correctness in optimization logic: simplex pivots, branch-and-bound decisions, routing heuristics, and constraint/objective handling must produce correct results".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 872 - 882, The code currently calls solve_knapsack_problem even when values is empty (after fixing) which causes a crash; before calling solve_knapsack_problem check if values.empty() and if so set objective = 0.0 and ensure solution is either left as zeros (solution.resize(values.size()) already does this) and do NOT call solve_knapsack_problem; then proceed to build the cover using fixed_values and complemented_variables as normal (still call restore_complemented(complemented_variables) on the existing early-return paths but not here). Modify the block around seperation_rhs/solve_knapsack_problem to handle values.empty() specially using the function/variable names values, seperation_rhs, solution, solve_knapsack_problem, fixed_values, complemented_variables, restore_complemented.
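The guard is easy to sketch in isolation. Below, `solve_knapsack_sketch` is a textbook 0/1 knapsack DP standing in for cuopt's `solve_knapsack_problem` (whose signature and internals may differ); the point is the early return that makes the all-fixed case yield an optimum of 0 instead of touching an empty range:

```cpp
#include <algorithm>
#include <vector>

// Maximize sum(values[i]*s[i]) s.t. sum(weights[i]*s[i]) <= capacity, s binary.
// Illustrative stand-in for solve_knapsack_problem; not the cuopt routine.
double solve_knapsack_sketch(const std::vector<double>& values,
                             const std::vector<int>& weights,
                             int capacity, std::vector<int>& solution) {
  solution.assign(values.size(), 0);
  // The guard from the review: with every item fixed, the reduced
  // problem is empty and its optimum is 0.
  if (values.empty() || capacity <= 0) return 0.0;

  const size_t n = values.size();
  // best[i][c]: optimum over items 0..i-1 with remaining capacity c
  std::vector<std::vector<double>> best(n + 1, std::vector<double>(capacity + 1, 0.0));
  for (size_t i = 1; i <= n; ++i) {
    for (int c = 0; c <= capacity; ++c) {
      best[i][c] = best[i - 1][c];
      if (weights[i - 1] <= c) {
        best[i][c] = std::max(best[i][c], best[i - 1][c - weights[i - 1]] + values[i - 1]);
      }
    }
  }
  // Walk back to recover which items were taken.
  int c = capacity;
  for (size_t i = n; i > 0; --i) {
    if (best[i][c] != best[i - 1][c]) {
      solution[i - 1] = 1;
      c -= weights[i - 1];
    }
  }
  return best[n][capacity];
}
```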
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 946-951: The RHS shift when undoing complementation is wrong:
after calling lifted_cut.negate() the current coefficient at
lifted_cut.vector.x[k] equals -alpha_j, so you must shift the RHS by the
original coefficient alpha_j (rhs -= alpha_j). Fix the loop that handles
complemented variables (where you now do lifted_cut.vector.x[k] *= -1.0;
lifted_cut.rhs += 1.0;) to compute the original coefficient (e.g., old_coeff =
-lifted_cut.vector.x[k]) and apply lifted_cut.rhs -= old_coeff (or equivalently
lifted_cut.rhs += lifted_cut.vector.x[k] if using the post-negation value),
rather than adding 1.0; update the code around lifted_cut.vector.x[k] and
lifted_cut.rhs accordingly.
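The algebra behind this fix can be checked numerically. For a binary variable with complement xbar = 1 - x, the term a*xbar on the left of a <=-cut equals -a*x + a, so flipping the coefficient sign must be paired with shifting the rhs by -a (i.e. rhs -= a), not by +1. A small check with illustrative names, not the cuopt code:

```cpp
// Verify on both binary points that  a * (1 - x) <= b  and
// (-a) * x <= b - a  accept exactly the same x values.
bool shift_is_equivalent(double a, double b) {
  for (int x = 0; x <= 1; ++x) {
    const double xbar = 1.0 - x;
    const bool original  = a * xbar <= b + 1e-12;
    const bool rewritten = (-a) * x <= (b - a) + 1e-12;
    if (original != rewritten) return false;
  }
  return true;
}

// The "+1" shift flagged by the review:  (-a) * x <= b + 1.
bool buggy_shift_is_equivalent(double a, double b) {
  for (int x = 0; x <= 1; ++x) {
    const double xbar = 1.0 - x;
    const bool original  = a * xbar <= b + 1e-12;
    const bool rewritten = (-a) * x <= (b + 1.0) + 1e-12;
    if (original != rewritten) return false;
  }
  return true;
}
```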
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 42a18b50-7e3c-4db1-8653-113668d14d65
📒 Files selected for processing (3)
- cpp/src/branch_and_bound/branch_and_bound.cpp
- cpp/src/cuts/cuts.cpp
- cpp/src/cuts/cuts.hpp
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/cuts/cuts.cpp (1)
2430-2436: ⚠️ Potential issue | 🟡 Minor

Potential division by zero if no integer variables exist.

If the inequality contains no integer variables, `numerators` will be empty, `gcd(numerators)` returns 0, and line 2433 will perform division by zero. While knapsack constraints should always have integer variables, adding a guard would make this function more robust.

Proposed fix

```diff
 int64_t gcd_numerators = gcd(numerators);
 int64_t lcm_denominators = lcm(denominators);
+if (gcd_numerators == 0 || lcm_denominators == 0) { return false; }
+
 f_t scalar = static_cast<f_t>(lcm_denominators) / static_cast<f_t>(gcd_numerators);
 if (scalar < 0) { return false; }
 if (std::abs(scalar) > 1000) { return false; }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 2430 - 2436, The code computes scalar using gcd_numerators = gcd(numerators) and then divides by it, which can be zero when numerators is empty; guard this by checking numerators.empty() or gcd_numerators == 0 (from the variables numerators and gcd_numerators) before computing scalar (the static_cast division) and return false (or handle appropriately) to avoid division by zero in the function containing this logic in cuts.cpp.
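For context on where `numerators` and `denominators` come from: a helper like `rational_approximation(x, 1000, num, den)` is typically implemented with the continued-fraction expansion of `x`, stopping at the denominator cap. This sketch shows that standard technique under stated assumptions; it is not cuopt's implementation:

```cpp
#include <cmath>
#include <cstdint>

// Rational approximation num/den of x with den <= max_denominator,
// built from the convergents of x's continued fraction.
bool rational_approximation(double x, int64_t max_denominator,
                            int64_t& numerator, int64_t& denominator) {
  const int64_t sign = x < 0.0 ? -1 : 1;
  x = std::fabs(x);
  int64_t h_prev = 1, h = static_cast<int64_t>(x);  // convergent numerators
  int64_t k_prev = 0, k = 1;                        // convergent denominators
  double frac = x - static_cast<double>(h);
  while (frac > 1e-12) {
    const double inv = 1.0 / frac;
    const int64_t a = static_cast<int64_t>(inv);  // next partial quotient
    frac = inv - static_cast<double>(a);
    const int64_t h_next = a * h + h_prev;
    const int64_t k_next = a * k + k_prev;
    if (k_next > max_denominator) break;  // stop before exceeding the cap
    h_prev = h; h = h_next;
    k_prev = k; k = k_next;
  }
  numerator   = sign * h;
  denominator = k;
  // Fail when even the best capped convergent is a poor approximation.
  return std::fabs(static_cast<double>(h) / static_cast<double>(k) - x) < 1e-9;
}
```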
🧹 Nitpick comments (6)
cpp/src/cuts/cuts.cpp (5)
1018-1019: Typo in method name: `minimial_cover_and_partition` should be `minimal_cover_and_partition`.

This method name has the same spelling error as the header file.

Proposed fix

```diff
 template <typename i_t, typename f_t>
-void knapsack_generation_t<i_t, f_t>::minimial_cover_and_partition(
+void knapsack_generation_t<i_t, f_t>::minimal_cover_and_partition(
   const inequality_t<i_t, f_t>& knapsack_inequality,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 1018 - 1019, The implementation method name knapsack_generation_t<i_t, f_t>::minimial_cover_and_partition is misspelled; rename the definition to knapsack_generation_t<i_t, f_t>::minimal_cover_and_partition to match the header and any declarations. Update any internal references, declarations, or calls (including template instantiations or overrides) that use the old name to the corrected minimal_cover_and_partition so signatures and linkage remain consistent.
1480-1481: Debug logging left unconditionally enabled.

The `if (1 || ...)` pattern always evaluates to true, so the knapsack cut generation time is always printed. This appears to be debug code that should be reverted.

Proposed fix

```diff
 f_t cut_generation_time = toc(cut_start_time);
-if (1 || cut_generation_time > 1.0) {
-  settings.log.printf("Knapsack cut generation time %.2f seconds\n", cut_generation_time);
+if (cut_generation_time > 1.0) {
+  settings.log.debug("Knapsack cut generation time %.2f seconds\n", cut_generation_time);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 1480 - 1481, The logging for knapsack cut generation is left always-on due to the hardcoded "1 ||" in the condition; update the if-statement (around the block that calls settings.log.printf and uses cut_generation_time) to remove the "1 ||" debug override so it only logs when the intended condition (e.g., cut_generation_time > 1.0) or an appropriate verbose/debug flag is true; ensure you modify the if condition that surrounds settings.log.printf rather than changing the printf call itself.
760-763: Debug logging left unconditionally enabled.

The `#if 1` directive causes knapsack constraint information to always be printed. This appears to be debug code that should be made conditional or removed before merging.

Proposed fix

```diff
-#if 1
+#ifdef PRINT_KNAPSACK_INFO
 i_t num_knapsack_constraints = knapsack_constraints_.size();
 settings.log.printf("Number of knapsack constraints %d\n", num_knapsack_constraints);
 #endif
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 760 - 763, The debug log printing of knapsack constraints is unconditionally enabled by the `#if` 1 block; remove or gate it behind a proper runtime or compile-time debug flag. Replace the `#if 1` wrapper around the `i_t num_knapsack_constraints = knapsack_constraints_.size(); settings.log.printf("Number of knapsack constraints %d\n", num_knapsack_constraints);` lines with either removal of the printf entirely or a conditional that checks a debug/logging setting (e.g., use existing verbosity/log level or a DEBUG_COMPILE flag) so `settings.log.printf` is only called when debugging is enabled; ensure references to `knapsack_constraints_`, `num_knapsack_constraints`, and `settings.log.printf` are updated consistently.
1040-1051: Consider clearing `workspace_` after use to prevent potential stale-data issues.

The `workspace_` vector is populated at line 1042 but not cleared after the function completes. While the current logic correctly reads only values that were set in the same call, adding cleanup would be a defensive measure against future modifications that might inadvertently read stale values.

Proposed fix: add cleanup at the end of `minimial_cover_and_partition`

```diff
   for (i_t j : cover_indicies) {
     if (xstar[j] > 1.0 - x_tol) {
       c2_partition.push_back(j);
     } else {
       c1_partition.push_back(j);
     }
   }
+
+  // Clear workspace_ to prevent stale data in future calls
+  for (i_t k = 0; k < knapsack_inequality.size(); k++) {
+    const i_t j = knapsack_inequality.index(k);
+    workspace_[j] = 0.0;
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 1040 - 1051, The workspace_ vector is populated from knapsack_inequality inside minimal_cover_and_partition but never cleared, risking stale values later; after you finish using workspace_ (after building score, cover_indicies, cover_coefficients), reset the entries you wrote—iterate over knapsack_inequality (or over the same indices collected) and set workspace_[j] back to zero (or otherwise clear them) so workspace_ does not retain stale data; reference the workspace_ write site (workspace_[j] = knapsack_inequality.coeff(k)) and perform the cleanup there at function end.
1259-1273: Debug logging left unconditionally enabled in lifting code.

Lines 1259 and 1273 unconditionally log information about lifted variables. This should be conditional on a verbose flag or debug mode to avoid excessive output in production.

Proposed fix

```diff
   if (alpha_k > 0.0) {
-    settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k);
+    if (verbose) { settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k); }
     F.push_back(k);
     alpha.push_back(alpha_k);
     values.push_back(alpha_k);
     weights.push_back(a_k);
     lifted_cut.vector.i.push_back(k);
     lifted_cut.vector.x.push_back(alpha_k);
   }
   // Remove the variable from the remaining variables and coefficients
   remaining_variables.pop_back();
   remaining_coefficients.pop_back();
 }
-settings_.log.printf("Lifted %ld variables\n", F.size());
+if (verbose) { settings_.log.printf("Lifted %ld variables\n", F.size()); }
```

Note: You'll need to add `const bool verbose = false;` at the start of `lift_knapsack_cut`, similar to other methods.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 1259 - 1273, The unconditional debug prints in lift_knapsack_cut (the settings_.log.printf calls that report "Lifted variable ..." and "Lifted %ld variables") should be guarded by a verbose flag; add a local const bool verbose = false at the start of lift_knapsack_cut (matching other methods) and wrap the two settings_.log.printf calls in if (verbose) { ... } so logging only occurs when verbose is enabled; leave the logic that updates F, alpha, values, weights and lifted_cut.vector unchanged.

cpp/src/cuts/cuts.hpp (1)
313-318: Typo in method name: `minimial_cover_and_partition` should be `minimal_cover_and_partition`.

The method name contains a spelling error that should be corrected for consistency and readability.

Proposed fix

```diff
-  void minimial_cover_and_partition(const inequality_t<i_t, f_t>& knapsack_inequality,
+  void minimal_cover_and_partition(const inequality_t<i_t, f_t>& knapsack_inequality,
                                     const inequality_t<i_t, f_t>& negated_base_cut,
                                     const std::vector<f_t>& xstar,
                                     inequality_t<i_t, f_t>& minimal_cover_cut,
                                     std::vector<i_t>& c1_partition,
                                     std::vector<i_t>& c2_partition);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.hpp` around lines 313 - 318, The declaration of minimial_cover_and_partition has a typo in its name; rename the function to minimal_cover_and_partition consistently across the codebase (declaration in cuts.hpp and any definitions/usages) to fix the misspelling, update all references (calls, forward declarations, and the corresponding implementation function name) so signature and linkage remain identical (inequality_t<i_t, f_t>& knapsack_inequality, inequality_t<i_t, f_t>& negated_base_cut, const std::vector<f_t>& xstar, inequality_t<i_t, f_t>& minimal_cover_cut, std::vector<i_t>& c1_partition, std::vector<i_t>& c2_partition).
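For readers new to the terminology in this method: given a binary knapsack row sum_j a_j*x_j <= b, a cover C is an index set with sum over C of a_j > b, and it yields the cover cut sum over C of x_j <= |C| - 1; the cover is minimal when dropping any member destroys the overload. A greedy sketch of finding one (illustrative only; cuopt's `minimial_cover_and_partition` additionally partitions the cover using `xstar`):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Greedily build a cover for a*x <= b (binary x), then shrink it to a
// minimal one. Returns the cover's indices, or {} if no cover exists.
std::vector<int> greedy_minimal_cover(const std::vector<double>& a, double b) {
  std::vector<int> idx(a.size());
  std::iota(idx.begin(), idx.end(), 0);
  // Take the heaviest items first so the overload is reached quickly.
  std::sort(idx.begin(), idx.end(), [&](int i, int j) { return a[i] > a[j]; });

  std::vector<int> cover;
  double load = 0.0;
  for (int j : idx) {
    cover.push_back(j);
    load += a[j];
    if (load > b) break;  // sum over the cover now exceeds the capacity
  }
  if (load <= b) return {};  // the row can never be violated: no cover

  // Minimality pass: drop any member whose removal keeps the overload.
  for (auto it = cover.begin(); it != cover.end();) {
    if (load - a[*it] > b) {
      load -= a[*it];
      it = cover.erase(it);
    } else {
      ++it;
    }
  }
  return cover;  // cover cut: sum_{j in cover} x_j <= cover.size() - 1
}
```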
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/cuts/cuts.hpp`:
- Around line 340-343: The reset coverage misses clearing workspace_ and
complemented_xstar_ between generate_knapsack_cut() calls on the reused
knapsack_generation_t instance; update the reset pattern so after each
generate_knapsack_cut() invocation you explicitly clear or reinitialize
workspace_ and complemented_xstar_ (analogous to how is_marked_ is reset in
lift_knapsack_cut() and how restore_complemented() handles is_complemented_),
ensuring no helper function reads stale values from workspace_ or
complemented_xstar_ between calls.
---
Outside diff comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 2430-2436: The code computes scalar using gcd_numerators =
gcd(numerators) and then divides by it, which can be zero when numerators is
empty; guard this by checking numerators.empty() or gcd_numerators == 0 (from
the variables numerators and gcd_numerators) before computing scalar (the
static_cast division) and return false (or handle appropriately) to avoid
division by zero in the function containing this logic in cuts.cpp.
---
Nitpick comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1018-1019: The implementation method name
knapsack_generation_t<i_t, f_t>::minimial_cover_and_partition is misspelled;
rename the definition to knapsack_generation_t<i_t,
f_t>::minimal_cover_and_partition to match the header and any declarations.
Update any internal references, declarations, or calls (including template
instantiations or overrides) that use the old name to the corrected
minimal_cover_and_partition so signatures and linkage remain consistent.
- Around line 1480-1481: The logging for knapsack cut generation is left
always-on due to the hardcoded "1 ||" in the condition; update the if-statement
(around the block that calls settings.log.printf and uses cut_generation_time)
to remove the "1 ||" debug override so it only logs when the intended condition
(e.g., cut_generation_time > 1.0) or an appropriate verbose/debug flag is true;
ensure you modify the if condition that surrounds settings.log.printf rather
than changing the printf call itself.
- Around line 760-763: The debug log printing of knapsack constraints is
unconditionally enabled by the `#if` 1 block; remove or gate it behind a proper
runtime or compile-time debug flag. Replace the `#if 1` wrapper around the `i_t
num_knapsack_constraints = knapsack_constraints_.size();
settings.log.printf("Number of knapsack constraints %d\n",
num_knapsack_constraints);` lines with either removal of the printf entirely or
a conditional that checks a debug/logging setting (e.g., use existing
verbosity/log level or a DEBUG_COMPILE flag) so `settings.log.printf` is only
called when debugging is enabled; ensure references to `knapsack_constraints_`,
`num_knapsack_constraints`, and `settings.log.printf` are updated consistently.
- Around line 1040-1051: The workspace_ vector is populated from
knapsack_inequality inside minimal_cover_and_partition but never cleared,
risking stale values later; after you finish using workspace_ (after building
score, cover_indicies, cover_coefficients), reset the entries you wrote—iterate
over knapsack_inequality (or over the same indices collected) and set
workspace_[j] back to zero (or otherwise clear them) so workspace_ does not
retain stale data; reference the workspace_ write site (workspace_[j] =
knapsack_inequality.coeff(k)) and perform the cleanup there at function end.
- Around line 1259-1273: The unconditional debug prints in lift_knapsack_cut
(the settings_.log.printf calls that report "Lifted variable ..." and "Lifted
%ld variables") should be guarded by a verbose flag; add a local const bool
verbose = false at the start of lift_knapsack_cut (matching other methods) and
wrap the two settings_.log.printf calls in if (verbose) { ... } so logging only
occurs when verbose is enabled; leave the logic that updates F, alpha, values,
weights and lifted_cut.vector unchanged.
In `@cpp/src/cuts/cuts.hpp`:
- Around line 313-318: The declaration of minimial_cover_and_partition has a
typo in its name; rename the function to minimal_cover_and_partition
consistently across the codebase (declaration in cuts.hpp and any
definitions/usages) to fix the misspelling, update all references (calls,
forward declarations, and the corresponding implementation function name) so
signature and linkage remain identical (inequality_t<i_t, f_t>&
knapsack_inequality, inequality_t<i_t, f_t>& negated_base_cut, const
std::vector<f_t>& xstar, inequality_t<i_t, f_t>& minimal_cover_cut,
std::vector<i_t>& c1_partition, std::vector<i_t>& c2_partition).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c4f10f6a-923f-462e-9038-b017d90ece30
📒 Files selected for processing (2)
cpp/src/cuts/cuts.cpp
cpp/src/cuts/cuts.hpp
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cpp/src/cuts/cuts.cpp (2)
1298-1305: ⚠️ Potential issue | 🟡 Minor: Potential division by zero in greedy knapsack ratio computation.

At line 1301, `ratios[i] = values[i] / weights[i]` performs division without checking if `weights[i]` is zero. While the calling context typically ensures non-zero weights, a defensive check would prevent crashes from unexpected inputs.

Proposed fix

  std::vector<f_t> ratios;
  ratios.resize(n);
  for (i_t i = 0; i < n; i++) {
+   if (weights[i] == 0.0) {
+     ratios[i] = (values[i] >= 0.0) ? std::numeric_limits<f_t>::infinity()
+                                    : -std::numeric_limits<f_t>::infinity();
+   } else {
      ratios[i] = values[i] / weights[i];
+   }
  }

As per coding guidelines, "prevent overflow/underflow, precision loss, division by zero/near-zero".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 1298 - 1305, The computation of ratios (ratios[i] = values[i] / weights[i]) can divide by zero; add a defensive check before computing each ratio in the loop that inspects weights[i] (or near-zero) and handles it deterministically — e.g. if weights[i] == 0 then set ratios[i] to std::numeric_limits<f_t>::infinity() for positive values[i], -infinity() for negative values[i], and 0 for values[i]==0 (or otherwise skip/flag the item); keep the rest of the logic (perm and the std::sort lambda comparing ratios) unchanged so sorting remains well-defined.
2456-2464: ⚠️ Potential issue | 🟠 Major: Integer overflow risk in `lcm` computation causes undefined behavior.

The `lcm` function uses `std::lcm`, which has undefined behavior when the result overflows `int64_t`. The denominators are constrained to ≤1000 by `rational_approximation`, but the LCM of multiple coprime denominators grows exponentially. For example, the LCM of the first 16 primes (~3.26×10¹⁹) exceeds `int64_t` max (~9.22×10¹⁸). With realistic constraints containing 15+ integer variables, each with different denominators, overflow is possible.

The check at line 2438 (`scalar > 1000`) guards the final result but occurs after the potentially overflowing LCM computation, leaving undefined behavior uncaught.

Proposed safer implementation

  int64_t lcm(const std::vector<int64_t>& integers)
  {
    if (integers.empty()) { return 0; }
-   int64_t result =
-     std::reduce(std::next(integers.begin()), integers.end(), integers[0], [](int64_t a, int64_t b) {
-       return std::lcm(a, b);
-     });
+   int64_t result = integers[0];
+   constexpr int64_t MAX_LCM = 1'000'000'000LL;  // Reasonable bound
+   for (size_t i = 1; i < integers.size(); ++i) {
+     int64_t g      = std::gcd(result, integers[i]);
+     int64_t factor = integers[i] / g;
+     if (result > MAX_LCM / factor) {
+       return 0;  // Signal overflow
+     }
+     result = result * factor;
+   }
    return result;
  }

Per coding guidelines: "Check numerical stability: prevent overflow/underflow."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 2456 - 2464, The current lcm function uses std::lcm which can overflow int64_t and cause UB; replace the reduce/std::lcm implementation in lcm(...) with an overflow-safe accumulator: for each denominator d compute g = std::gcd(acc, d), compute acc_div = acc / g and before multiplying check if acc_div > std::numeric_limits<int64_t>::max() / d (or if you want to enforce the existing limit, check if acc_div * d > 1000) and if so return an early sentinel (e.g., a value >1000 or propagate an error) to avoid UB; ensure this new behavior aligns with the later scalar >1000 check so overflow is detected before any undefined multiplication and replace references to std::lcm in the function with this gcd+safe-multiply logic.
🧹 Nitpick comments (4)
cpp/src/cuts/cuts.cpp (4)
1019-1019: Typos in function and variable names.

Two typos in this function:
- Function name `minimial_cover_and_partition` should be `minimal_cover_and_partition` (line 1019)
- Variable name `permuation` should be `permutation` (line 1063)

These don't affect functionality but reduce code readability and searchability.
Also applies to: 1063-1063
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` at line 1019, Rename the misspelled function and variable to improve readability: change the function name knapsack_generation_t::minimial_cover_and_partition to minimal_cover_and_partition (update its declaration, definition, and any call sites) and rename the variable permuation to permutation wherever it appears (e.g., inside the function body around the former line 1063 and any references). Ensure you update any related comments and tests to use the corrected identifiers so all references remain consistent.
1263-1263: Unconditional debug logging in hot path.

Line 1263 uses `settings_.log.printf` unconditionally for every lifted variable. This will generate excessive log output during normal operation, potentially impacting performance.

Proposed fix

  if (alpha_k > 0.0) {
-   settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k);
+   if (verbose) { settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k); }
    F.push_back(k);

And define `const bool verbose = false;` at the start of the function (similar to other functions in this file).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` at line 1263, The unconditional log call settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k) is in a hot path and should be gated by a verbosity flag; declare a const bool verbose = false; at the start of the function (matching other functions in this file) and wrap the printf in an if (verbose) { ... } block so the message for lifted variable k and alpha_k only prints when verbose is enabled.
1276-1276: Unconditional debug logging should be conditional.

Line 1276 unconditionally logs the number of lifted variables. This should be conditional on a verbose flag or log level to avoid noise in production.

- settings_.log.printf("Lifted %ld variables\n", F.size());
+ if (verbose) { settings_.log.printf("Lifted %ld variables\n", F.size()); }
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` at line 1276, The unconditional call to settings_.log.printf("Lifted %ld variables\n", F.size()) should be guarded by the appropriate verbosity/log level check: wrap this printf with the existing verbose flag or log-level API (e.g., settings_.verbose(), settings_.log.isVerbose(), or settings_.log.level >= DEBUG) so it only executes when detailed logging is enabled; keep the message and F.size() as-is but emit it conditionally near the same location inside the function containing settings_.log.printf.
1483-1485: Debug logging is unconditionally enabled.

The condition `if (1 || cut_generation_time > 1.0)` is always true, forcing knapsack cut generation time to be printed on every invocation. This appears to be intentional debug code that should be either removed or controlled via a setting/flag before production release.

Similarly, `#if 1` at line 760 forces the knapsack constraint count to always print.
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 1483 - 1485, The unconditional debug prints should be removed or gated behind a logging/verbosity flag: replace the always-true condition around settings.log.printf that references cut_generation_time with a check against the existing runtime logging/verbosity setting (e.g., only print if settings.verbosity or settings.log.isDebug/isVerbose is enabled) and similarly change the `#if 1` block that forces printing of knapsack constraint count to either be removed or wrapped by the same configurable condition or a dedicated compile-time debug macro; locate the prints by the symbols settings.log.printf and cut_generation_time and the `#if 1` knapsack constraint count block and gate or remove them so production runs do not always emit these messages.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1040-1051: The loop computing score uses workspace_[j] without
guarding against zero and workspace_ isn't cleared between calls; inside
minimial_cover_and_partition ensure workspace_ is initialized/cleared for all
relevant indices before populating it from knapsack_inequality (or track and
zero only previously touched entries), and when computing score for each j from
base_cut check workspace_[j] for zero or near-zero (e.g., skip the index,
log/continue, or handle as infinite score) instead of dividing by zero; update
the code paths that push into score, cover_indicies, and cover_coefficients to
only use non-zero workspace_ entries (or handle them deterministically) and
reset/zero workspace_ after use to avoid stale values on subsequent calls.
---
Outside diff comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1298-1305: The computation of ratios (ratios[i] = values[i] /
weights[i]) can divide by zero; add a defensive check before computing each
ratio in the loop that inspects weights[i] (or near-zero) and handles it
deterministically — e.g. if weights[i] == 0 then set ratios[i] to
std::numeric_limits<f_t>::infinity() for positive values[i], -infinity() for
negative values[i], and 0 for values[i]==0 (or otherwise skip/flag the item);
keep the rest of the logic (perm and the std::sort lambda comparing ratios)
unchanged so sorting remains well-defined.
- Around line 2456-2464: The current lcm function uses std::lcm which can
overflow int64_t and cause UB; replace the reduce/std::lcm implementation in
lcm(...) with an overflow-safe accumulator: for each denominator d compute g =
std::gcd(acc, d), compute acc_div = acc / g and before multiplying check if
acc_div > std::numeric_limits<int64_t>::max() / d (or if you want to enforce the
existing limit, check if acc_div * d > 1000) and if so return an early sentinel
(e.g., a value >1000 or propagate an error) to avoid UB; ensure this new
behavior aligns with the later scalar >1000 check so overflow is detected before
any undefined multiplication and replace references to std::lcm in the function
with this gcd+safe-multiply logic.
---
Nitpick comments:
In `@cpp/src/cuts/cuts.cpp`:
- Line 1019: Rename the misspelled function and variable to improve readability:
change the function name knapsack_generation_t::minimial_cover_and_partition to
minimal_cover_and_partition (update its declaration, definition, and any call
sites) and rename the variable permuation to permutation wherever it appears
(e.g., inside the function body around the former line 1063 and any references).
Ensure you update any related comments and tests to use the corrected
identifiers so all references remain consistent.
- Line 1263: The unconditional log call settings_.log.printf("Lifted variable %d
with alpha %g\n", k, alpha_k) is in a hot path and should be gated by a
verbosity flag; declare a const bool verbose = false; at the start of the
function (matching other functions in this file) and wrap the printf in an if
(verbose) { ... } block so the message for lifted variable k and alpha_k only
prints when verbose is enabled.
- Line 1276: The unconditional call to settings_.log.printf("Lifted %ld
variables\n", F.size()) should be guarded by the appropriate verbosity/log level
check: wrap this printf with the existing verbose flag or log-level API (e.g.,
settings_.verbose(), settings_.log.isVerbose(), or settings_.log.level >= DEBUG)
so it only executes when detailed logging is enabled; keep the message and
F.size() as-is but emit it conditionally near the same location inside the
function containing settings_.log.printf.
- Around line 1483-1485: The unconditional debug prints should be removed or
gated behind a logging/verbosity flag: replace the always-true condition around
settings.log.printf that references cut_generation_time with a check against the
existing runtime logging/verbosity setting (e.g., only print if
settings.verbosity or settings.log.isDebug/isVerbose is enabled) and similarly
change the `#if 1` block that forces printing of knapsack constraint count to
either be removed or wrapped by the same configurable condition or a dedicated
compile-time debug macro; locate the prints by the symbols settings.log.printf
and cut_generation_time and the `#if 1` knapsack constraint count block and gate
or remove them so production runs do not always emit these messages.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c6ad4053-8345-4c10-a50b-667e85f8d3e7
📒 Files selected for processing (1)
cpp/src/cuts/cuts.cpp
…. This allows rows we have examined to be chosen in aggregation. Helps on dcmulti and a1c1s1.
|
@coderabbitai resume |
✅ Actions performed: Reviews resumed.
|
/ok to test |
@chris-maes, there was an error processing your request: See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/1/ |
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/branch_and_bound/branch_and_bound.cpp (1)
1603-1608: ⚠️ Potential issue | 🟡 Minor: Missing `CONCURRENT_LIMIT` handling in `dive_with`.

Unlike `plunge_with` (lines 1474-1476), `dive_with` doesn't handle `dual::status_t::CONCURRENT_LIMIT`. When the gap tolerance is met and `node_concurrent_halt_` is signaled, diving workers should also respond by breaking out of the loop gracefully.

🔧 Proposed fix to handle CONCURRENT_LIMIT

  if (lp_status == dual::status_t::TIME_LIMIT) {
    solver_status_ = mip_status_t::TIME_LIMIT;
    break;
+ } else if (lp_status == dual::status_t::CONCURRENT_LIMIT) {
+   break;
  } else if (lp_status == dual::status_t::ITERATION_LIMIT) {
    break;
  }
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 1603 - 1608, dive_with currently checks for dual::status_t::TIME_LIMIT and ITERATION_LIMIT but omits handling dual::status_t::CONCURRENT_LIMIT; update the dive_with loop to mirror plunge_with by treating CONCURRENT_LIMIT like TIME_LIMIT when node_concurrent_halt_ is set: set solver_status_ = mip_status_t::TIME_LIMIT (or the appropriate concurrent halt status if used elsewhere) and break when lp_status == dual::status_t::CONCURRENT_LIMIT and node_concurrent_halt_ is true, referencing dive_with, plunge_with, dual::status_t::CONCURRENT_LIMIT, node_concurrent_halt_, and solver_status_ to locate and modify the logic.
🧹 Nitpick comments (4)
cpp/src/branch_and_bound/branch_and_bound.cpp (2)
2278-2281: Debug printing unconditionally enabled.

The `#if 1` enables cut pool type printing for all builds. Based on the previous `#ifdef PRINT_CUT_POOL_TYPES` pattern, this appears to be debug code. Consider reverting to conditional compilation before merging to avoid unnecessary console output in production.

♻️ Revert to conditional compilation

-#if 1
+#ifdef PRINT_CUT_POOL_TYPES
  cut_pool.print_cutpool_types();
  print_cut_types("In LP ", cut_types, settings_);
  printf("Cut pool size: %d\n", cut_pool.pool_size());
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 2278 - 2281, The debug prints are unconditionally enabled by the `#if 1` around calls to `cut_pool.print_cutpool_types()`, `print_cut_types("In LP ", cut_types, settings_)` and `printf("Cut pool size: %d\n", cut_pool.pool_size())`; revert this to conditional compilation (e.g. `#ifdef PRINT_CUT_POOL_TYPES` / `#endif`) or remove the block so these debug messages are only compiled when the debug flag is set, keeping the same calls and spacing but guarded by the feature macro instead of `#if 1`.
1450-1450: Stale upper bound used for early cutoff check.

At line 1450, the cutoff check uses the local `upper_bound` variable computed at lines 1433-1434, which is only updated after each node is processed (line 1524). If another thread found a better solution during this node's processing, this check might not catch it. However, the actual LP cutoff in `solve_node_lp` (lines 1339-1341) uses the atomic `upper_bound_`, so this is a minor inefficiency rather than a correctness issue; at worst, a node that could be pruned earlier will still be pruned by the LP solver.

♻️ Optional: Use atomic upper_bound_ for fresher cutoff

- if (node_ptr->lower_bound > upper_bound) {
+ if (node_ptr->lower_bound > upper_bound_.load()) {
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` at line 1450, The early cutoff uses the stale local variable upper_bound (checked in the if at node_ptr->lower_bound > upper_bound) which may not reflect more recent solutions stored in the atomic upper_bound_; update the cutoff to read the freshest value by using the atomic upper_bound_ (or load it into a local fresh_upper = upper_bound_.load()) before the lower_bound check in the same scope where node_ptr->lower_bound is examined (the block around the if at line 1450), so the comparison uses up-to-date global bests similar to how solve_node_lp consults upper_bound_.cpp/src/cuts/cuts.cpp (2)
1607-1607: Minor: inconsistent type comparison.
`sum_value` is `i_t` (integer), but it's compared to `0.0` (floating point). While this works, comparing to integer `0` would be more consistent.

- if (sum_value <= 0.0 || sum_value >= max_size) {
+ if (sum_value <= 0 || sum_value >= max_size) {
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` at line 1607, The comparison uses sum_value (type i_t) against a floating literal 0.0; change the literal to the integer 0 (i.e. use `0`) or otherwise make both operands the same type so the condition `if (sum_value <= 0 || sum_value >= max_size)` compares integers consistently; update the expression referencing sum_value and max_size accordingly.
1412-1412: Use portable format specifier for `size_t`.

`F.size()` returns `size_t`, but `%ld` expects `long`. This mismatch can cause incorrect output or warnings on platforms where `size_t` and `long` differ in size (e.g., 64-bit Windows).

Suggested fix

- settings_.log.printf("Lifted %ld variables\n", F.size());
+ settings_.log.printf("Lifted %zu variables\n", F.size());
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` at line 1412, The printf call uses "%ld" but passes F.size() (a size_t), causing a type mismatch; update the call in settings_.log.printf to use the proper size_t format specifier (e.g., "%zu") or cast F.size() to an explicitly sized integer type and match the format (e.g., static_cast<unsigned long long>(F.size()) with "%llu"); modify the invocation of settings_.log.printf("Lifted %ld variables\n", F.size()) accordingly so the format specifier matches the type returned by F.size().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1803-1804: The printed timing message is guarded by a leftover
debug condition `if (1 || cut_generation_time > 1.0)` which always evaluates
true; remove the `1 ||` so the condition reads `if (cut_generation_time > 1.0)`
(or adjust threshold if you intended a different cutoff) so that the call to
settings.log.printf("Knapsack cut generation time %.2f seconds\n",
cut_generation_time) only runs when the generation time exceeds the intended
limit.
- Around line 626-638: In the duplicate-detection loop in cuts.cpp (the block
using variables a_rj, f_r, f_i, a_ij, sets and new_set) add a defensive check
before computing val = (a_rj / f_r) * (f_i / a_ij): ensure f_r and a_ij (and
optionally f_i) are not zero or near-zero (use a small epsilon, e.g. 1e-12, with
fabs) and skip this iteration if any are too small to avoid division-by-zero;
keep the existing sign guard (f_r > 0 && f_i > 0) but move the non-zero checks
to occur first so the division is always safe.
In `@cpp/tests/mip/cuts_test.cu`:
- Around line 972-1019: The test test_duplicate_cuts_detection currently only
calls cut_pool.check_for_duplicate_cuts() without verifying outcomes; add
assertions that validate the cut pool state after duplicate-removal (e.g.,
expected retained cut count and that the strongest representative is kept).
Modify the test to query cut_pool (use its public accessors or add temporary
getters if missing) after calling check_for_duplicate_cuts() to assert the
number of cuts equals the expected value and that specific inequalities
(constructed from inequality_t entries added via add_cut) remain or were removed
as intended; ensure assertions reference cut_pool, check_for_duplicate_cuts, and
the original cuts (cut1..cut8) so the test will fail on regressions.
---
Outside diff comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 1603-1608: dive_with currently checks for
dual::status_t::TIME_LIMIT and ITERATION_LIMIT but omits handling
dual::status_t::CONCURRENT_LIMIT; update the dive_with loop to mirror
plunge_with by treating CONCURRENT_LIMIT like TIME_LIMIT when
node_concurrent_halt_ is set: set solver_status_ = mip_status_t::TIME_LIMIT (or
the appropriate concurrent halt status if used elsewhere) and break when
lp_status == dual::status_t::CONCURRENT_LIMIT and node_concurrent_halt_ is true,
referencing dive_with, plunge_with, dual::status_t::CONCURRENT_LIMIT,
node_concurrent_halt_, and solver_status_ to locate and modify the logic.
---
Nitpick comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 2278-2281: The debug prints are unconditionally enabled by the
`#if 1` around calls to `cut_pool.print_cutpool_types()`, `print_cut_types("In
LP ", cut_types, settings_)` and `printf("Cut pool size: %d\n",
cut_pool.pool_size())`; revert this to conditional compilation (e.g. `#ifdef
PRINT_CUT_POOL_TYPES` / `#endif`) or remove the block so these debug messages
are only compiled when the debug flag is set, keeping the same calls and spacing
but guarded by the feature macro instead of `#if 1`.
- Line 1450: The early cutoff uses the stale local variable upper_bound (checked
in the if at node_ptr->lower_bound > upper_bound) which may not reflect more
recent solutions stored in the atomic upper_bound_; update the cutoff to read
the freshest value by using the atomic upper_bound_ (or load it into a local
fresh_upper = upper_bound_.load()) before the lower_bound check in the same
scope where node_ptr->lower_bound is examined (the block around the if at line
1450), so the comparison uses up-to-date global bests similar to how
solve_node_lp consults upper_bound_.
In `@cpp/src/cuts/cuts.cpp`:
- Line 1607: The comparison uses sum_value (type i_t) against a floating literal
0.0; change the literal to the integer 0 (i.e. use `0`) or otherwise make both
operands the same type so the condition `if (sum_value <= 0 || sum_value >=
max_size)` compares integers consistently; update the expression referencing
sum_value and max_size accordingly.
- Line 1412: The printf call uses "%ld" but passes F.size() (a size_t), causing
a type mismatch; update the call in settings_.log.printf to use the proper
size_t format specifier (e.g., "%zu") or cast F.size() to an explicitly sized
integer type and match the format (e.g., static_cast<unsigned long
long>(F.size()) with "%llu"); modify the invocation of
settings_.log.printf("Lifted %ld variables\n", F.size()) accordingly so the
format specifier matches the type returned by F.size().
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: cd8ee36a-1ba6-4caa-94a8-a56099c60dac
📒 Files selected for processing (16)
cpp/include/cuopt/linear_programming/constants.h
cpp/include/cuopt/linear_programming/mip/solver_settings.hpp
cpp/src/branch_and_bound/branch_and_bound.cpp
cpp/src/branch_and_bound/branch_and_bound.hpp
cpp/src/cuts/cuts.cpp
cpp/src/cuts/cuts.hpp
cpp/src/dual_simplex/phase2.cpp
cpp/src/dual_simplex/simplex_solver_settings.hpp
cpp/src/dual_simplex/solve.cpp
cpp/src/math_optimization/solver_settings.cu
cpp/src/mip_heuristics/diversity/lns/rins.cu
cpp/src/mip_heuristics/diversity/recombiners/sub_mip.cuh
cpp/src/mip_heuristics/presolve/conflict_graph/clique_table.cu
cpp/src/mip_heuristics/presolve/third_party_presolve.cpp
cpp/src/mip_heuristics/solver.cu
cpp/tests/mip/cuts_test.cu
✅ Files skipped from review due to trivial changes (1)
- cpp/include/cuopt/linear_programming/constants.h
TEST(cuts, test_duplicate_cuts_detection)
{
  dual_simplex::simplex_solver_settings_t<int, double> settings;
  dual_simplex::cut_pool_t<int, double> cut_pool(4, settings);
  dual_simplex::inequality_t<int, double> cut1;
  cut1.push_back(0, 1.0);
  cut1.push_back(1, 2.0);
  cut1.rhs = 1.0;
  cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut1);
  dual_simplex::inequality_t<int, double> cut2;
  cut2.push_back(0, 2.0);
  cut2.push_back(1, 4.0);
  cut2.rhs = 2.0;
  cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut2);
  dual_simplex::inequality_t<int, double> cut3;
  cut3.push_back(0, 0.1);
  cut3.push_back(2, 0.2);
  cut3.rhs = 1.0;
  cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut3);
  dual_simplex::inequality_t<int, double> cut4;
  cut4.push_back(0, 0.2);
  cut4.push_back(2, 0.4);
  cut4.rhs = 1.0;
  cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut4);
  dual_simplex::inequality_t<int, double> cut5;
  cut5.push_back(1, 10.0);
  cut5.push_back(3, 20.0);
  cut5.rhs = 0.1;
  cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut5);
  dual_simplex::inequality_t<int, double> cut6;
  cut6.push_back(1, 20.0);
  cut6.push_back(3, 40.0);
  cut6.rhs = 0.2;
  cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut6);
  dual_simplex::inequality_t<int, double> cut7;
  cut7.push_back(0, 1.0);
  cut7.push_back(1, 1.0);
  cut7.push_back(2, 1.0);
  cut7.push_back(3, 1.0);
  cut7.rhs = 1.0;
  cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut7);
  dual_simplex::inequality_t<int, double> cut8;
  cut8.push_back(1, 3.0);
  cut8.rhs = 7.0;
  cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut8);

  cut_pool.check_for_duplicate_cuts();
}
test_duplicate_cuts_detection lacks assertions and can’t catch regressions.
Lines 972-1019 only exercise the API; they don’t validate that duplicates/weaker proportional cuts were actually removed. Please add explicit postconditions on the resulting cut pool state (e.g., expected retained cut count and/or retained strongest representatives).
As per coding guidelines **/*test*.{cpp,cu,py}: “Write tests validating numerical correctness of optimization results (not just 'runs without error').”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/tests/mip/cuts_test.cu` around lines 972 - 1019, The test
test_duplicate_cuts_detection currently only calls
cut_pool.check_for_duplicate_cuts() without verifying outcomes; add assertions
that validate the cut pool state after duplicate-removal (e.g., expected
retained cut count and that the strongest representative is kept). Modify the
test to query cut_pool (use its public accessors or add temporary getters if
missing) after calling check_for_duplicate_cuts() to assert the number of cuts
equals the expected value and that specific inequalities (constructed from
inequality_t entries added via add_cut) remain or were removed as intended;
ensure assertions reference cut_pool, check_for_duplicate_cuts, and the original
cuts (cut1..cut8) so the test will fail on regressions.
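The assertions the review asks for hinge on detecting proportional cuts and keeping the stronger one. Below is a self-contained sketch of that dominance check (the `Cut` struct and helper names are hypothetical, not the `cut_pool_t` API), exercised on the same data as `cut3`/`cut4` from the test above:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a sparse cut a^T x <= rhs.
struct Cut {
  std::vector<int> idx;     // variable indices (sorted)
  std::vector<double> val;  // matching coefficients
  double rhs;
};

// True if b's coefficients are a positive multiple of a's on the same support.
bool proportional(const Cut& a, const Cut& b, double tol = 1e-9)
{
  if (a.idx != b.idx) return false;
  const double lambda = b.val[0] / a.val[0];
  if (lambda <= 0.0) return false;
  for (std::size_t k = 1; k < a.val.size(); ++k) {
    if (std::abs(b.val[k] - lambda * a.val[k]) > tol * (1.0 + std::abs(b.val[k]))) return false;
  }
  return true;
}

// Of two proportional cuts, a dominates b iff a's rhs is no larger than b's
// rhs rescaled onto a's coefficients (smaller rhs = stronger cut).
bool dominates(const Cut& a, const Cut& b)
{
  const double lambda = b.val[0] / a.val[0];
  return a.rhs <= b.rhs / lambda;
}
```

With helpers like these, the test could assert postconditions directly, e.g. that after `check_for_duplicate_cuts()` only the stronger of `cut3`/`cut4` survives.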
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/branch_and_bound/branch_and_bound.cpp (1)
1598-1612: ⚠️ Potential issue | 🟠 Major — Missing `CONCURRENT_LIMIT` handling in `dive_with`.
Unlike `plunge_with` (line 1474), `dive_with` does not handle `dual::status_t::CONCURRENT_LIMIT`. When this status is returned, the code proceeds to call `update_tree`, where it falls through to the `else` branch (line 1267) and is incorrectly treated as a `NUMERICAL` issue.
Add handling for `CONCURRENT_LIMIT` to break out of the loop gracefully:
Proposed fix

  if (lp_status == dual::status_t::TIME_LIMIT) {
    solver_status_ = mip_status_t::TIME_LIMIT;
    break;
+ } else if (lp_status == dual::status_t::CONCURRENT_LIMIT) {
+   break;
  } else if (lp_status == dual::status_t::ITERATION_LIMIT) {
    break;
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 1598 - 1612, The dive_with loop is missing handling for dual::status_t::CONCURRENT_LIMIT; add a branch like in plunge_with to detect if lp_status == dual::status_t::CONCURRENT_LIMIT, set solver_status_ = mip_status_t::CONCURRENT_LIMIT (or the appropriate mip_status_t enum value used elsewhere) and break out of the loop before calling update_tree so CONCURRENT_LIMIT is not misclassified by update_tree; modify the block handling lp_status (around the calls to solve_node_lp and update_tree in dive_with) to include this new branch, referencing lp_status, solver_status_, dive_stats and update_tree.
🧹 Nitpick comments (1)
cpp/src/cuts/cuts.cpp (1)
1197-1207: Floating-point equality in sort comparator is acceptable for tie-breaking.
The exact equality check `score[a] == score[b]` at line 1202 is used only as a tie-breaker to prefer larger coefficients when scores are equal. For sorting purposes, this is safe and doesn't violate strict weak ordering. An epsilon-based comparison would be more robust but isn't critical here since either ordering is valid for near-equal scores.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 1197 - 1207, The comparator in the std::sort using exact equality on score (in the block creating permuation over cover_indicies) can be made more robust: replace the equality branch `score[a] == score[b]` with an epsilon-based check (e.g., fabs(score[a] - score[b]) < EPS) and pick EPS (like 1e-12 or a domain-appropriate tolerance), then tie-break using cover_coefficients[a] > cover_coefficients[b]; update the comparator closure around score and cover_coefficients to use that epsilon or, alternatively, add a short comment next to the comparator explaining why exact equality is acceptable if you choose to keep it.
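A runnable sketch of the epsilon tie-break the prompt suggests (the function and variable names are illustrative, not the ones in `cuts.cpp`). One caveat worth noting: a tolerance-based tie can in principle chain across a run of near-equal scores, so the epsilon must stay tiny relative to genuine score gaps for the comparator to remain a strict weak ordering in practice:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Sort indices by descending score; scores within eps are treated as ties
// and broken by preferring the larger coefficient.
std::vector<int> sort_by_score(const std::vector<double>& score,
                               const std::vector<double>& coeff,
                               double eps = 1e-12)
{
  std::vector<int> perm(score.size());
  std::iota(perm.begin(), perm.end(), 0);
  std::sort(perm.begin(), perm.end(), [&](int a, int b) {
    if (std::abs(score[a] - score[b]) < eps) return coeff[a] > coeff[b];
    return score[a] > score[b];
  });
  return perm;
}
```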
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1396-1407: The cast of the floating alpha_k to integer via
static_cast<i_t>(alpha_k) risks truncation due to FP error; change the
conversion in the code that pushes into values (and any other places where
alpha_k becomes an integer, e.g., values.push_back(...) and lifted_cut.vector.x
entries) to use rounding with a small epsilon/robust conversion (e.g.,
std::lround/std::llround or add 0.5 with bounds checks) so 0.9999999 becomes 1,
and also validate/clamp the resulting i_t to the expected range to avoid
overflow/underflow; update any related comments/logging (settings_.log.debug and
uses of F, alpha, values) to reflect the rounded conversion.
---
Outside diff comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 1598-1612: The dive_with loop is missing handling for
dual::status_t::CONCURRENT_LIMIT; add a branch like in plunge_with to detect if
lp_status == dual::status_t::CONCURRENT_LIMIT, set solver_status_ =
mip_status_t::CONCURRENT_LIMIT (or the appropriate mip_status_t enum value used
elsewhere) and break out of the loop before calling update_tree so
CONCURRENT_LIMIT is not misclassified by update_tree; modify the block handling
lp_status (around the calls to solve_node_lp and update_tree in dive_with) to
include this new branch, referencing lp_status, solver_status_, dive_stats and
update_tree.
---
Nitpick comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1197-1207: The comparator in the std::sort using exact equality on
score (in the block creating permuation over cover_indicies) can be made more
robust: replace the equality branch `score[a] == score[b]` with an epsilon-based
check (e.g., fabs(score[a] - score[b]) < EPS) and pick EPS (like 1e-12 or a
domain-appropriate tolerance), then tie-break using cover_coefficients[a] >
cover_coefficients[b]; update the comparator closure around score and
cover_coefficients to use that epsilon or, alternatively, add a short comment
next to the comparator explaining why exact equality is acceptable if you choose
to keep it.
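The truncation issue flagged in the `alpha_k` inline comment is easy to reproduce standalone: `static_cast` to an integer type drops everything after the decimal point, so accumulated floating-point error just below an integer loses a whole unit, while `std::llround` snaps to the nearest integer. A minimal illustration (function names are hypothetical):

```cpp
#include <cassert>
#include <cmath>

// Converting a floating coefficient that is "morally" integral:
// static_cast truncates toward zero, so 0.9999999 collapses to 0,
// while llround rounds to the nearest integer and yields 1.
int coeff_trunc(double alpha) { return static_cast<int>(alpha); }
int coeff_round(double alpha) { return static_cast<int>(std::llround(alpha)); }
```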
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: bbb5a81a-6d20-4756-ad5b-e1f0f707677e
📒 Files selected for processing (3)
cpp/src/branch_and_bound/branch_and_bound.cpp
cpp/src/cuts/cuts.cpp
cpp/src/mip_heuristics/presolve/third_party_presolve.cpp
🚧 Files skipped from review as they are similar to previous changes (1)
- cpp/src/mip_heuristics/presolve/third_party_presolve.cpp
|
/ok to test bf10c68 |
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/cuts/cuts.cpp (1)
2773-2792: ⚠️ Potential issue | 🟡 Minor — Division by zero when input is empty produces NaN that propagates undetected.
Both `gcd` and `lcm` return 0 for empty input vectors. In `rational_coefficients`, if the inequality contains no integer variables, `numerators` and `denominators` remain empty, causing line 2764 to compute `scalar = 0.0 / 0.0 = NaN`. The subsequent checks (`scalar < 0` and `std::abs(scalar) > 1000`) don't catch NaN because NaN comparisons return false, allowing `rational_inequality.scale(NaN)` to corrupt the inequality.
Proposed fix: return early for empty input or add explicit check

  int64_t gcd(const std::vector<int64_t>& integers)
  {
-   if (integers.empty()) { return 0; }
+   if (integers.empty()) { return 1; }  // Neutral element for GCD in this context
    int64_t result = integers[0];
    for (size_t i = 1; i < integers.size(); ++i) {
      result = std::gcd(result, integers[i]);
    }
    return result;
  }

  int64_t lcm(const std::vector<int64_t>& integers)
  {
-   if (integers.empty()) { return 0; }
+   if (integers.empty()) { return 1; }  // Neutral element for LCM in this context
    int64_t result = std::reduce(std::next(integers.begin()), integers.end(), integers[0],
                                 [](int64_t a, int64_t b) { return std::lcm(a, b); });
    return result;
  }

Alternatively, add an early return in `rational_coefficients`:

+ if (numerators.empty()) {
+   // No integer variables to rationalize, return as-is
+   return true;
+ }
+
  int64_t gcd_numerators = gcd(numerators);
  int64_t lcm_denominators = lcm(denominators);

As per coding guidelines: "prevent overflow/underflow, precision loss, division by zero/near-zero".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/cuts/cuts.cpp` around lines 2773 - 2792, The code can produce NaN when both numerators and denominators are empty (leading to 0/0); modify rational_coefficients to detect this case and skip scaling: if numerators.empty() or denominators.empty() then return early (or set scalar = 1.0) instead of computing scalar and calling rational_inequality.scale(NaN); alternatively, ensure gcd/lcm do not return 0 for empty inputs (e.g., make lcm/gcd return 1) but the preferred fix is the explicit empty-check in rational_coefficients to avoid division by zero and prevent passing NaN into rational_inequality.scale.
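The guard the comment proposes can be exercised standalone. The sketch below (helper names are illustrative, not the functions in `cuts.cpp`) returns 1 on empty input, so a downstream `scalar = lcm / gcd` ratio never evaluates `0.0 / 0.0`:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <numeric>
#include <vector>

// GCD over a vector; since gcd(0, x) == x, folding from 0 works for
// non-empty input, but an empty vector would yield 0 — return 1 instead
// to keep the downstream lcm/gcd ratio well defined.
int64_t vec_gcd(const std::vector<int64_t>& v)
{
  int64_t r = 0;
  for (int64_t x : v) r = std::gcd(r, x);
  return r == 0 ? 1 : r;
}

// LCM over a vector; folding from 1 makes the empty case return 1 for free.
int64_t vec_lcm(const std::vector<int64_t>& v)
{
  int64_t r = 1;
  for (int64_t x : v) r = std::lcm(r, x);
  return r;
}
```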
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 2773-2792: The code can produce NaN when both numerators and
denominators are empty (leading to 0/0); modify rational_coefficients to detect
this case and skip scaling: if numerators.empty() or denominators.empty() then
return early (or set scalar = 1.0) instead of computing scalar and calling
rational_inequality.scale(NaN); alternatively, ensure gcd/lcm do not return 0
for empty inputs (e.g., make lcm/gcd return 1) but the preferred fix is the
explicit empty-check in rational_coefficients to avoid division by zero and
prevent passing NaN into rational_inequality.scale.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 67c94f55-7b42-489e-a012-4366eb87a7ef
📒 Files selected for processing (2)
cpp/src/branch_and_bound/branch_and_bound.cpp
cpp/src/cuts/cuts.cpp
🚧 Files skipped from review as they are similar to previous changes (1)
- cpp/src/branch_and_bound/branch_and_bound.cpp
|
/ok to tes 03cfa1f |
|
/ok to test 03cfa1f |
|
closes #977 |
|
/ok to test 31ce885 |
|
/merge |
This PR fixes several bugs:
This PR adds support for implied bound cuts
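The PR description doesn't spell out the cut form, but a standard implied-bound cut works as follows: if probing shows that fixing a binary y = 0 forces x <= u0, tighter than x's global upper bound U, then x - (U - u0)·y <= u0 holds at every integer-feasible point while cutting off fractional LP points. A minimal sketch under that assumption (all names hypothetical, not this PR's implementation):

```cpp
#include <cassert>

// Implied-bound cut x - (U - u0) * y <= u0, derived from the probing fact
// "y = 0 implies x <= u0" together with the global bound x <= U.
struct ImpliedBoundCut {
  double y_coeff;  // -(U - u0)
  double rhs;      // u0
};

ImpliedBoundCut make_implied_bound_cut(double U, double u0)
{
  return {-(U - u0), u0};
}

// Checks x + y_coeff * y <= rhs within a small feasibility tolerance.
bool satisfies(const ImpliedBoundCut& c, double x, double y)
{
  return x + c.y_coeff * y <= c.rhs + 1e-9;
}
```

Both integer branches (y = 0 with x <= u0, y = 1 with x <= U) satisfy the cut, while a point like (x, y) = (U, 0.5) violates it.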
We make the following changes to improve finding and generating knapsack cuts
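For context, the basic object behind knapsack cuts: for a binary knapsack a^T x <= b, any cover C with Σ_{i∈C} a_i > b yields the valid cut Σ_{i∈C} x_i <= |C| - 1 (all of C cannot be selected at once), which lifting then strengthens. A deliberately simplified greedy cover finder as a sketch (not this PR's implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Greedily pick items until their total weight exceeds the capacity b;
// the resulting index set C is a cover, giving the cut
//   sum_{i in C} x_i <= |C| - 1.
std::vector<int> greedy_cover(const std::vector<double>& a, double b)
{
  std::vector<int> order(a.size());
  std::iota(order.begin(), order.end(), 0);
  // Prefer heavy items so the cover (and thus the cut support) stays small.
  std::sort(order.begin(), order.end(), [&](int i, int j) { return a[i] > a[j]; });
  std::vector<int> cover;
  double weight = 0.0;
  for (int i : order) {
    cover.push_back(i);
    weight += a[i];
    if (weight > b) return cover;  // cover found: total weight exceeds capacity
  }
  return {};  // no cover exists: all items together fit under b
}
```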
We go from 0% gap closed on the publicly available instances of Wolter's main knapsack test set to a geometric mean of 14.39% gap closed. Wolter reports a geometric mean gap closed of ~11% on the same subset of her test set.
After changes
On a MIPLIB 5 minute run on GH200