fix(autogram): Type flat_grad_outputs to Tensor | None #440

PierreQuinton wants to merge 6 commits into main.

Conversation
Commits:
- …er `output_spec` from both `autograd.Function` in `ModuleHookManager`.
- …FunctionalVJP. `output_spec` now only appears in the hook.
- …one in another PR.
- …ard` to `Tensor | None`
Codecov Report ✅ All modified and coverable lines are covered by tests.
... and 1 file with indirect coverage changes.
We need to think a bit more about this. It seems that both …
We have …
Well, if … then `flat_grad_outputs_j_ = [x.unsqueeze(0) for x in flat_grad_outputs_j]` is incorrect. I think we should either filter out … We don't need to ever call …
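A minimal sketch of the "filter out" option discussed above (the helper name `unsqueeze_non_none` is hypothetical, not actual torchjd code): if some grad outputs are `None`, calling `.unsqueeze(0)` on them fails, so the `None` entries must be skipped before adding the leading dimension.

```python
import torch

def unsqueeze_non_none(flat_grad_outputs):
    """Hypothetical helper: drop None grad outputs, then add a leading
    batch dimension to each remaining tensor, as in the comprehension
    `[x.unsqueeze(0) for x in flat_grad_outputs_j]` from the review."""
    return [x.unsqueeze(0) for x in flat_grad_outputs if x is not None]

grads = [torch.ones(2), None, torch.zeros(3)]
unsqueezed = unsqueeze_non_none(grads)
print([tuple(g.shape) for g in unsqueezed])  # [(1, 2), (1, 3)]
```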
We can receive a None grad_output.
Need a test for the case where `JacobianAccumulator.backward` gets a None. It would be related to having an output that is either not a Tensor or has no graph.
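The situation described above can be reproduced outside torchjd with a toy `autograd.Function` (the `TwoOutputs` class below is an illustration, not project code): by default PyTorch materializes missing grad outputs as zero tensors, but after `ctx.set_materialize_grads(False)`, an output that does not participate in the loss delivers `None` to `backward` — exactly the case the requested test would cover.

```python
import torch

class TwoOutputs(torch.autograd.Function):
    """Hypothetical two-output Function used to trigger a None grad output."""

    @staticmethod
    def forward(ctx, x):
        # Without this call, PyTorch would materialize the missing
        # grad output as a zero tensor instead of passing None.
        ctx.set_materialize_grads(False)
        return x * 2.0, x * 3.0

    @staticmethod
    def backward(ctx, grad_a, grad_b):
        # grad_b is None here when the second output is unused in the loss.
        if grad_b is None:
            return grad_a * 2.0
        return grad_a * 2.0 + grad_b * 3.0

x = torch.ones(3, requires_grad=True)
a, b = TwoOutputs.apply(x)
a.sum().backward()  # only `a` is used, so backward receives grad_b=None
print(x.grad)  # tensor([2., 2., 2.])
```

This mirrors the review comment: a `None` arrives in `backward` precisely when an output has no graph connection to the quantity being differentiated.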