diff --git a/src/torchjd/autogram/__init__.py b/src/torchjd/autogram/__init__.py
index a58f09e3..f51d4aba 100644
--- a/src/torchjd/autogram/__init__.py
+++ b/src/torchjd/autogram/__init__.py
@@ -1,13 +1,15 @@
 """
-The autogram package enables the activation of Gramian-based Jacobian descent on your models (As
-described in Section 6 of `Jacobian Descent For Multi-Objective Optimization
-<https://arxiv.org/pdf/2406.16232>`_). It provides a convenient way to modify a model's backward
-pass, allowing you to seamlessly integrate multi-objective optimization in your PyTorch code.
+The autogram package provides an engine to efficiently compute the Gramian of the Jacobian of a
+tensor of outputs (generally losses) with respect to some modules' parameters. This Gramian contains
+all the inner products between pairs of gradients, and is thus a sufficient statistic for most
+weighting methods. The algorithm is formally defined in Section 6 of `Jacobian Descent For
+Multi-Objective Optimization <https://arxiv.org/pdf/2406.16232>`_.
 
-This method typically provides a memory improvement over the :doc:`autojac <../autojac/index>`
-package which typically leads to time improvement.
+Because it computes the Gramian iteratively over the layers, without ever having to store the full
+Jacobian in memory, this method is much more memory-efficient than
+:doc:`autojac <../autojac/index>`, which also often makes it much faster.
 
-The exhaustive list of supported Weightings compatible with `autogram` is:
+The list of Weightings compatible with ``autogram`` is:
 
 * :class:`~torchjd.aggregation.UPGradWeighting`
 * :class:`~torchjd.aggregation.AlignedMTLWeighting`
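
As a side note for reviewers (outside the patch itself): the Gramian described in the new docstring can be illustrated with a small NumPy sketch. This is illustrative only; the array names and the uniform weighting below are stand-ins, not torchjd's API.

```python
import numpy as np

# Illustrative sketch only -- not torchjd code. Suppose J is the Jacobian of
# m losses with respect to n parameters: row i is the gradient of loss i.
rng = np.random.default_rng(0)
m, n = 3, 5
J = rng.normal(size=(m, n))

# The Gramian G = J @ J.T holds every pairwise inner product of gradients,
# which is all that a Gramian-based weighting method needs as input.
G = J @ J.T
assert np.allclose(G[0, 1], J[0] @ J[1])

# Once a weighting has produced weights w from G (here a plain mean, as a
# hypothetical stand-in for e.g. UPGradWeighting), the aggregated gradient
# is w @ J, i.e. the ordinary gradient of the weighted sum of the losses,
# so a standard backward pass can recover it without materializing J.
w = np.full(m, 1.0 / m)
aggregated = w @ J
assert aggregated.shape == (n,)
```

The point of the refactored engine is exactly that G can be accumulated layer by layer, so J itself never needs to fit in memory.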