
fix(autogram): Type flat_grad_outputs to Tensor | None#440

Closed
PierreQuinton wants to merge 6 commits into main from fix-jacobian-accumulator-backward-typing

Conversation

@PierreQuinton
Contributor

We need a test for the case where JacobianAccumulator.backward gets a None. It would be related to having an output that is either not a Tensor or has no graph.
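For context, here is a minimal sketch (not torchjd code; all names are hypothetical) of how a custom autograd.Function's backward can receive a None in plain PyTorch: with materialized grads disabled, an output that does not participate in the loss gets None as its grad output.

```python
import torch

received = []  # records whether the unused output's grad arrived as None

class TwoOutputs(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Without this, autograd materializes zero tensors instead of None.
        ctx.set_materialize_grads(False)
        return x * 2, x * 3

    @staticmethod
    def backward(ctx, g1, g2):
        received.append(g2 is None)
        grad = 0.0  # at least one of g1, g2 is a Tensor when backward runs
        if g1 is not None:
            grad = grad + 2 * g1
        if g2 is not None:
            grad = grad + 3 * g2
        return grad

x = torch.ones(2, requires_grad=True)
a, _ = TwoOutputs.apply(x)
a.sum().backward()  # only the first output participates in the loss
print(received)     # [True]: g2 was None
print(x.grad)       # tensor([2., 2.])
```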

@PierreQuinton PierreQuinton added cc: fix Conventional commit type for bug fixes of the actual library (changes to src). package: autogram labels Oct 1, 2025
@codecov

codecov bot commented Oct 1, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.

Files with missing lines Coverage Δ
src/torchjd/autogram/_module_hook_manager.py 100.00% <100.00%> (ø)
src/torchjd/autogram/_vjp.py 100.00% <100.00%> (ø)

... and 1 file with indirect coverage changes


@PierreQuinton
Contributor Author

We need to think a bit more about this. It seems that both VJPs have the same problem when typed with None, which is technically the correct typing. It is not clear how to solve this cleanly; @ValerianRey, maybe you have a better idea (note that we must use Tensor | None unless we know None is impossible).

@ValerianRey
Contributor

We need a test for the case where JacobianAccumulator.backward gets a None. It would be related to having an output that is either not a Tensor or has no graph.

We have MultiOutputWithFrozenBranch with batch_dim=None where one of the outputs does not require grad. The corresponding flat_grad_outputs is a Tensor though, not None.

@ValerianRey
Contributor

Well, if the flat_grad_outputs are allowed to contain None, the unsqueeze:

flat_grad_outputs_j_ = [x.unsqueeze(0) for x in flat_grad_outputs_j]

is incorrect. I think we should either filter out None values before passing the flat_grad_outputs to the VJPs, or filter them in the VJP before doing flat_grad_outputs_j_ = [x.unsqueeze(0) for x in flat_grad_outputs_j].

We never need to call torch.autograd.grad with a None value as grad_output, I think, so the typing mistake that they have in torch should not be a problem.

@PierreQuinton
Contributor Author

  1. Nice, but I think we need to be careful, because we are probably the ones making the output of the module have a graph. If one output of the module has no graph, we still pass it through our autograd.Function, and then autograd will give it a graph. I think this is fine as long as we end up ignoring its graph, so either this happens automatically in our VJPs or we should filter.
  2. A None will then probably only appear if we have a non-Tensor output. I don't think we have a test for that.
  3. Tensor | None is the correct typing for backward. We forward the parameter directly to the VJP, so for now the correct typing for the VJPs is also Tensor | None. We could either filter before the VJP, in which case we want to ignore the non-Tensor outputs of the VJP, or handle the filtering inside the VJPs.

Base automatically changed from make-vjp-take-flat-grad-outputs to main October 1, 2025 10:45
@ValerianRey
Contributor

We can receive a None grad_output.

@ValerianRey ValerianRey closed this Oct 1, 2025
@PierreQuinton PierreQuinton deleted the fix-jacobian-accumulator-backward-typing branch October 2, 2025 08:41

Labels

cc: fix Conventional commit type for bug fixes of the actual library (changes to src). package: autogram


2 participants