docs: clarify PPO entropy metrics in PPO trainer docs #5289
Open
biefan wants to merge 1 commit into huggingface:main
Conversation
Summary
Clarify the difference between `objective/entropy` and `policy/entropy_avg` in the PPO trainer docs.

What changed

- Updated the `objective/entropy` description to match the rollout-time computation (`(-logprobs).sum(1).mean()`).
- Updated the `policy/entropy_avg` description to match the optimization-time entropy computed from the logits.

Why

Issue #2023 points out that the two entropy metrics had very similar wording, which made them difficult to interpret when debugging PPO runs.
Fixes #2023
Note
Low Risk
A low-risk, documentation-only change that updates metric wording and adds a brief clarification note; no runtime or API behavior is modified.
Overview
Clarifies the PPO trainer metric docs by rewriting the descriptions of `objective/entropy` (a rollout-time proxy computed from `-logprobs`) and `policy/entropy_avg` (the optimization-time categorical entropy computed from the `logits`). Adds an explicit note explaining that these metrics are measured at different phases (rollouts vs. PPO optimization) and are therefore expected to differ.
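The distinction can be illustrated with a small sketch. This is not TRL's implementation, just a hypothetical example with random logits showing why the two numbers are computed differently and will not agree: `objective/entropy` is a Monte Carlo proxy summed over the sequence from the sampled tokens' log-probs, while `policy/entropy_avg` is the exact per-token categorical entropy averaged over all positions.

```python
import torch

torch.manual_seed(0)
batch, seq_len, vocab = 2, 4, 8  # hypothetical shapes for illustration
logits = torch.randn(batch, seq_len, vocab)

# Rollout phase: sample tokens and record their log-probs, as PPO does.
actions = torch.distributions.Categorical(logits=logits).sample()  # [batch, seq_len]
logprobs = (
    torch.log_softmax(logits, dim=-1)
    .gather(-1, actions.unsqueeze(-1))
    .squeeze(-1)
)  # [batch, seq_len]

# objective/entropy: rollout-time proxy, summed over the sequence,
# then averaged over the batch -- the expression quoted in the docs.
objective_entropy = (-logprobs).sum(1).mean()

# policy/entropy_avg: exact categorical entropy from the logits at
# optimization time, averaged over every token position.
probs = torch.softmax(logits, dim=-1)
entropy_avg = (-probs * torch.log_softmax(logits, dim=-1)).sum(-1).mean()

print(objective_entropy.item(), entropy_avg.item())
```

Even with identical logits the two values differ: one sums a sampled estimate over `seq_len` tokens, the other averages the full distribution's entropy per token, so they live on different scales in addition to being measured at different training phases.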
Written by Cursor Bugbot for commit 8367f1e.