Conversation
📝 Walkthrough

The apply command now accepts a new `--dry-run` flag and moves stdin non-interactive handling into apply.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI
    participant Apply
    participant Resolver
    participant Planner
    participant Deployer
    participant Node
    CLI->>Apply: invoke apply(args, modifiers, goal_fn)
    Apply->>Resolver: resolve_targets(on: &[ApplyTarget])
    Resolver-->>Apply: returns Names / Nodes
    alt modifiers.dry_run == true
        Apply->>Planner: dry_activate_nodes(nodes, goal_fn, parallel)
        Planner-->>Apply: prints plan per node
        Apply-->>CLI: return (no execution)
    else modifiers.dry_run == false
        Apply->>Deployer: apply_nodes(nodes, parallel, goal_fn)
        Deployer->>Node: deploy tasks (parallel)
        Node-->>Deployer: status/results
        Deployer-->>Apply: aggregated results
        Apply-->>CLI: return (execution complete)
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks
❌ Failed checks (2 warnings)
✅ Passed checks (3 passed)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/cli/src/apply.rs (1)
55-70: ⚠️ Potential issue | 🟠 Major: Do not suppress stdin target read failures.

Line 67 silently ignores `read_apply_targets_from_stdin()` errors. That can turn real stdin failures into a successful no-op run. Propagate this error instead.

Proposed fix:

```diff
-fn resolve_targets(on: &[ApplyTarget]) -> (HashSet<String>, HashSet<Name>) {
-    on.iter().fold(
-        (HashSet::new(), HashSet::new()),
-        |(mut tags, mut names), target| {
+fn resolve_targets(on: &[ApplyTarget]) -> Result<(HashSet<String>, HashSet<Name>)> {
+    on.iter().try_fold(
+        (HashSet::new(), HashSet::new()),
+        |(mut tags, mut names), target| -> Result<(HashSet<String>, HashSet<Name>)> {
             match target {
                 ApplyTarget::Tag(tag) => {
                     tags.insert(tag.clone());
                 }
                 ApplyTarget::Node(name) => {
                     names.insert(name.clone());
                 }
                 ApplyTarget::Stdin => {
-                    if let Ok((found_tags, found_names)) = read_apply_targets_from_stdin() {
-                        names.extend(found_names);
-                        tags.extend(found_tags);
-                    }
+                    let (found_tags, found_names) = read_apply_targets_from_stdin()?;
+                    names.extend(found_names);
+                    tags.extend(found_tags);
                 }
             }
-            (tags, names)
+            Ok((tags, names))
         },
     )
 }
@@
-    let (tags, names) = resolve_targets(&args.on);
+    let (tags, names) = resolve_targets(&args.on)?;
```

Also applies to: 114
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/apply.rs` around lines 55 - 70, The stdin branch in resolve_targets currently swallows errors from read_apply_targets_from_stdin(); change resolve_targets to return a Result<(HashSet<String>, HashSet<Name>), E> (use the crate's existing error type) and propagate the read_apply_targets_from_stdin() error instead of ignoring it (use the ? operator or map_err). Update any callers (and the similar stdin-handling fold around line 114) to handle the Result, so failures reading stdin bubble up rather than becoming silent no-ops; reference resolve_targets and read_apply_targets_from_stdin when making these changes.
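The error-propagating fold suggested above can be sketched in isolation. Everything below is a simplified stand-in: `ApplyTarget` and the fake `read_apply_targets_from_stdin()` only mimic the crate's real types, and `Name` is reduced to `String`.

```rust
use std::collections::HashSet;
use std::io;

// Stand-in for the crate's ApplyTarget enum.
enum ApplyTarget {
    Tag(String),
    Node(String),
    Stdin,
}

// Simulated stdin reader; the real one is not shown in the diff.
// Here it always fails so the propagation path is exercised.
fn read_apply_targets_from_stdin() -> io::Result<(HashSet<String>, HashSet<String>)> {
    Err(io::Error::new(io::ErrorKind::UnexpectedEof, "stdin closed"))
}

fn resolve_targets(on: &[ApplyTarget]) -> io::Result<(HashSet<String>, HashSet<String>)> {
    // try_fold short-circuits on the first Err instead of silently skipping it.
    on.iter().try_fold(
        (HashSet::new(), HashSet::new()),
        |(mut tags, mut names), target| {
            match target {
                ApplyTarget::Tag(tag) => {
                    tags.insert(tag.clone());
                }
                ApplyTarget::Node(name) => {
                    names.insert(name.clone());
                }
                ApplyTarget::Stdin => {
                    // `?` surfaces the read failure to the caller.
                    let (found_tags, found_names) = read_apply_targets_from_stdin()?;
                    tags.extend(found_tags);
                    names.extend(found_names);
                }
            }
            Ok((tags, names))
        },
    )
}

fn main() {
    // Plain targets still resolve fine.
    assert!(resolve_targets(&[ApplyTarget::Node("web-1".into())]).is_ok());
    // A failing stdin read now surfaces instead of becoming a no-op.
    assert!(resolve_targets(&[ApplyTarget::Stdin]).is_err());
    println!("ok");
}
```

With the old `if let Ok(...)` pattern, the second call would have returned an empty-but-successful result; with `try_fold` the caller sees the failure.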
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c31888c9-7606-46c6-814c-4d2b8ea25839
📒 Files selected for processing (2)
- crates/cli/src/apply.rs
- crates/cli/src/cli.rs
Force-pushed 9f4b09f to ffc43be (Compare)
Force-pushed ffc43be to ec9ce8f (Compare)
🧹 Nitpick comments (4)
crates/cli/src/apply.rs (4)
169: Consider using `.expect()` with a descriptive message for clearer panic context.

While the `unwrap()` is safe here because `names` is derived from filtering `hive.nodes`, a descriptive message would aid debugging if assumptions change in the future.

♻️ Suggested improvement:

```diff
- let node = hive.nodes.get(name).unwrap();
+ let node = hive.nodes.get(name).expect("node must exist in hive");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/apply.rs` at line 169, Replace the bare unwrap() on hive.nodes.get(name) with expect() providing a clear descriptive message (e.g., include the node name and context) so that in the function handling node iteration the failure context is explicit; update the call at the hive.nodes.get(name).unwrap() site (the variable named node) to use expect(...) with a message referencing name and that it was expected to exist because names was derived from filtering hive.nodes.
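A minimal self-contained sketch of the `unwrap()` vs `.expect()` distinction; the map and values below are illustrative, not the crate's real `hive.nodes`:

```rust
use std::collections::HashMap;

// `.expect()` documents the invariant (the name came from filtering this map)
// instead of a bare `unwrap()` that panics without context.
fn lookup(nodes: &HashMap<String, u32>, name: &str) -> u32 {
    *nodes
        .get(name)
        .expect("node must exist: name was derived from filtering hive.nodes")
}

fn main() {
    let mut nodes = HashMap::new();
    nodes.insert("web-1".to_string(), 42);
    assert_eq!(lookup(&nodes, "web-1"), 42);
    println!("ok");
}
```

If the invariant is ever broken, the panic message now says what assumption failed rather than `called Option::unwrap() on a None value`.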
103: The `Clone` bound on `F` appears unnecessary.

Since the `dry_run` branch returns early, only one of `dry_activate_nodes` or `apply_nodes` is ever called with `make_goal`. The closure doesn't need to be cloned.

♻️ Suggested simplification:

```diff
 where
-    F: Fn(&Name, &Node) -> Goal + Clone,
+    F: Fn(&Name, &Node) -> Goal,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/apply.rs` at line 103, The generic bound on F is overly strict: remove the unnecessary Clone requirement from the type parameter so it reads F: Fn(&Name, &Node) -> Goal; then ensure usages of make_goal (the closure passed to dry_activate_nodes or apply_nodes) are not attempting to clone it—pass it by reference or move it into the called function as these calls are mutually exclusive due to the dry_run early return; update the function signature and call sites (references to make_goal, dry_activate_nodes, apply_nodes) accordingly.
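Why the early return makes `Clone` unnecessary can be shown with a toy version of the dispatch. The functions and types below are simplified stand-ins for `dry_activate_nodes` / `apply_nodes`, not the crate's real signatures:

```rust
// Because the branches are mutually exclusive, the closure is moved into
// exactly one call, so `F` needs no `Clone` bound.
fn dry_activate<F: Fn(&str) -> u32>(names: &[&str], make_goal: F) -> u32 {
    names.iter().map(|n| make_goal(n)).sum()
}

fn apply_all<F: Fn(&str) -> u32>(names: &[&str], make_goal: F) -> u32 {
    names.iter().map(|n| make_goal(n) * 2).sum()
}

fn run<F>(names: &[&str], make_goal: F, dry_run: bool) -> u32
where
    F: Fn(&str) -> u32, // no `+ Clone` required
{
    if dry_run {
        // moved here, then we return early
        return dry_activate(names, make_goal);
    }
    // or moved here; never both
    apply_all(names, make_goal)
}

fn main() {
    let goal = |name: &str| name.len() as u32;
    assert_eq!(run(&["a", "bb"], goal, true), 3);
    assert_eq!(run(&["a", "bb"], goal, false), 6);
    println!("ok");
}
```

The borrow checker accepts a move in each branch because the `dry_run` branch diverges (returns) before the second use could happen.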
211: Minor performance note: `names.contains()` on a slice is O(n).

For each node in `hive.nodes`, this performs a linear scan through `names`. For large deployments, consider passing `names` as a `HashSet<Name>` reference for O(1) lookups.

♻️ Suggested optimization:

Change the function signature to accept `HashSet<Name>` and update the call site:

```diff
 async fn apply_nodes<F>(
     hive: &mut Hive,
-    names: &[Name],
+    names: &HashSet<Name>,
     make_goal: F,
     ...
```

At call site in `apply`:

```diff
+    let partitioned_names_set: HashSet<Name> = partitioned_names.iter().cloned().collect();
+
     apply_nodes(
         hive,
-        &partitioned_names,
+        &partitioned_names_set,
         make_goal,
```
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/apply.rs` at line 211, The current filter uses names.contains(name) which is O(n) per node; change the function to accept a HashSet<Name> (or &HashSet<Name>) instead of a slice so lookups are O(1), update the call site to build/pass that HashSet, and update the closure in the iterator (the filter over hive.nodes) to use names_set.contains(name) (referencing the new parameter name like names_set or names) so the per-node check is constant time; adjust any callers to construct the HashSet from the original slice before calling the function.
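The slice-vs-set lookup trade-off can be demonstrated standalone. `filter_selected` and the node names below are hypothetical; `Name` is reduced to `String`:

```rust
use std::collections::HashSet;

// Precomputing the set once makes each membership test O(1) on average,
// instead of a linear scan of `names` for every node.
fn filter_selected<'a>(all_nodes: &[&'a str], names: &[String]) -> Vec<&'a str> {
    let selected: HashSet<&str> = names.iter().map(String::as_str).collect();
    all_nodes
        .iter()
        .copied()
        .filter(|n| selected.contains(n))
        .collect()
}

fn main() {
    let names = vec!["web-1".to_string(), "db-1".to_string()];
    let matched = filter_selected(&["web-1", "web-2", "db-1", "cache-1"], &names);
    assert_eq!(matched, ["web-1", "db-1"]);
    println!("{matched:?}");
}
```

For a handful of nodes the difference is negligible; it only matters once both the hive and the selection grow.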
226-228: Change the error log to a warning, or return an error when no nodes are selected.

The code logs an error message when no nodes match the selection, but then returns `Ok(())` anyway, which is inconsistent. Users see an error message but receive a successful result. Either return an error if this is truly a failure condition, or use `warn!()` or `info!()` to clarify that this is not a fatal error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/apply.rs` around lines 226 - 228, The current code logs an error when set.peek().is_none() but continues returning Ok(())—make this consistent by turning the no-nodes case into a real error: replace the error! call and subsequent fall-through with an early return Err(...) using the crate's Result/Error convention (e.g. anyhow::anyhow! or the crate's Error type) so the calling code sees failure when no nodes are selected; locate the check that uses set.peek().is_none() and the error! invocation in apply.rs and change it to return an appropriate Err with a clear message like "no nodes selected for deployment".
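A sketch of the "return an error" option. `std::io::Error` stands in for the crate's own error type (not shown in the diff), and `deploy` is a hypothetical reduction of the real function:

```rust
use std::io::{Error, ErrorKind};

// An empty selection becomes a hard error instead of an error-level
// log followed by Ok(()), so callers and exit codes agree with the message.
fn deploy(selected: &[&str]) -> Result<usize, Error> {
    let mut set = selected.iter().peekable();
    if set.peek().is_none() {
        return Err(Error::new(
            ErrorKind::InvalidInput,
            "no nodes selected for deployment",
        ));
    }
    Ok(set.count())
}

fn main() {
    assert_eq!(deploy(&["web-1", "db-1"]).unwrap(), 2);
    assert!(deploy(&[]).is_err()); // caller now sees the failure
    println!("ok");
}
```

The alternative (keeping `Ok(())` but downgrading the log to `warn!()`) is equally consistent; what matters is that the log level and the return value tell the same story.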
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1c16cb38-54ba-48c2-bc0a-a0560e30f20b
📒 Files selected for processing (2)
- crates/cli/src/apply.rs
- crates/cli/src/cli.rs
🧹 Nitpick comments (1)
crates/cli/src/apply.rs (1)
211: Avoid repeated linear lookups when filtering nodes.

At line 211, `names.contains(name)` inside iteration is O(nodes × selected_names). Precompute a `HashSet<Name>` once.

Proposed fix:

```diff
 async fn apply_nodes<F>(
 @@
 ) -> Result<()>
 where
     F: Fn(&Name, &Node) -> Goal,
 {
+    let selected: HashSet<Name> = names.iter().cloned().collect();
+
     let mut set = hive
         .nodes
         .iter_mut()
-        .filter(|(name, _)| names.contains(name))
+        .filter(|(name, _)| selected.contains(name))
         .map(|(name, node)| {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/apply.rs` at line 211, The filter currently calls names.contains(name) for each node causing repeated linear scans; fix this by precomputing a HashSet of the selected names (e.g., let names_set: HashSet<_> = names.iter().cloned().collect()) before the iterator and then change the closure to use names_set.contains(name) instead of names.contains(name); locate the iterator that uses .filter(|(name, _)| names.contains(name)) in apply.rs and replace the lookup to use the new HashSet variable.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8c943204-e267-4c0a-b7bb-fac19d96880a
📒 Files selected for processing (2)
- crates/cli/src/apply.rs
- crates/cli/src/cli.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/cli/src/cli.rs
Actionable comments posted: 1
🧹 Nitpick comments (1)
crates/cli/src/apply.rs (1)
171-179: Consider centralizing plan construction for both paths.

These two blocks duplicate the same `make_goal` + `plan_for_node` sequence. A small shared helper would make it harder for `--dry-run` output and the real apply path to drift apart later.

Also applies to: 213-221
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/apply.rs` around lines 171 - 179, Duplicate make_goal + plan_for_node calls are used in both the dry-run and real apply paths; extract a small helper function (e.g., build_plan_for_node or plan_for_name_node) that accepts (name, node, location, modifiers, should_quit) or the minimal set of parameters and returns the Goal and Plan, then replace both call sites to use this helper (invoke make_goal inside the helper and call plan_for_node there). Ensure the helper reuses the same cloning semantics (name.clone(), location.clone(), should_quit.clone()) so behavior is identical for both branches and update both places that currently call make_goal and plan_for_node to call the new helper instead.
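The shared-helper idea can be sketched with heavily simplified types; `Plan`, `plan_for_node`, and the closure below are stand-ins, not the crate's real API:

```rust
// One helper builds the per-node plan for both the dry-run and the real
// apply path, so the two cannot drift apart.
#[derive(Debug, PartialEq)]
struct Plan {
    node: String,
    goal: u32,
}

fn plan_for_node(name: &str, make_goal: impl Fn(&str) -> u32) -> Plan {
    Plan {
        node: name.to_string(),
        goal: make_goal(name),
    }
}

fn main() {
    let make_goal = |n: &str| n.len() as u32;
    let dry = plan_for_node("web-1", make_goal); // dry-run path: print only
    let real = plan_for_node("web-1", make_goal); // apply path: then execute
    assert_eq!(dry, real); // one construction path, identical plans
    println!("ok");
}
```

With the helper, a future change to plan construction (say, a new modifier) only has one place to land.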
```rust
if args.dry_run {
    dry_activate_nodes(
        hive,
        &partitioned_names,
        make_goal,
        &location,
        &should_quit,
        modifiers,
    );

    return Ok(());
}
```
Don't exit dry-run with STATUS still populated.
`STATUS.add_many(...)` has already run before this branch, but the dry-run path returns before anything can drain or clear those entries. That leaves stale global status behind and can corrupt later progress output from the same process.
Suggested fix:

```diff
-    STATUS
-        .lock()
-        .add_many(&partitioned_names.iter().collect::<Vec<_>>());
     if args.dry_run {
         dry_activate_nodes(
             hive,
             &partitioned_names,
             make_goal,
             &location,
             &should_quit,
             modifiers,
         );

         return Ok(());
     }
+
+    STATUS
+        .lock()
+        .add_many(&partitioned_names.iter().collect::<Vec<_>>());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/cli/src/apply.rs` around lines 133 - 143, The dry-run branch returns
early leaving STATUS populated; before returning from the args.dry_run path in
apply (where dry_activate_nodes(...) is called), drain or clear the global
STATUS entries that were added (the ones added by STATUS.add_many earlier) so
they don't persist and corrupt later output. Locate the dry-run branch around
the args.dry_run check and add a call to the same STATUS cleanup/drain routine
used after real runs (or explicitly call STATUS.clear()/drain method)
immediately before the return to ensure global status is reset.
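The reordering the fix proposes (side effects only after the early-return guard) can be sketched with a plain `Vec` standing in for the crate's global `STATUS` registry; `apply` below is a hypothetical reduction:

```rust
// Early-return guards should run before global side effects. `status`
// stands in for the global STATUS registry that real runs later drain.
fn apply(status: &mut Vec<String>, names: &[&str], dry_run: bool) {
    if dry_run {
        // Dry run: plan only, never register progress entries.
        return;
    }
    // Only real runs register status entries.
    status.extend(names.iter().map(|n| n.to_string()));
}

fn main() {
    let mut status = Vec::new();
    apply(&mut status, &["web-1", "db-1"], true);
    assert!(status.is_empty()); // dry run leaves no stale entries
    apply(&mut status, &["web-1", "db-1"], false);
    assert_eq!(status.len(), 2);
    println!("ok");
}
```

Moving the registration below the guard is simpler than adding a cleanup call inside the dry-run branch, since there is then nothing to clean up.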
closes #401
Summary by CodeRabbit
- New Features
- Refactor
- Bug Fixes