feat(evaluation): add VLMMetrics #545
Status: Open. davidberenstein1957 wants to merge 60 commits into main from feat/metrics-vlm-support (base: main).
Changes from all commits (60 commits)
All 60 commits by davidberenstein1957:

b2cd94d  feat(evaluation): add VLM-based metrics with litellm and transformers…
7b08693  fix(evaluation): ARNIQA not in torchmetrics - implement manually
d116038  fix(evaluation): use List-based scores pattern matching Pruna standards
c695c6e  fix(evaluation): use sync completion instead of async acompletion
703a3bb  chore(evaluation): remove ARNIQA from VLM PR - has dedicated PR #547
5edc94d  feat(evaluation): add structured generation to VLM metrics
8f0089f  fix(evaluation): fix linting issues in VLM metrics
7dcd735  fix(evaluation): fix remaining linting issues
e4f29d8  fix(evaluation): fix D205 docstring issues in VLM classes
0bd6d3e  fix(evaluation): fix import sorting in __init__.py
fe8a514  fix(evaluation): skip docstring check for metrics_vlm
f9663a1  fix(evaluation): enhance docstrings for VLM metrics and base classes
636ab33  feat(evaluation): introduce new VLM metrics and integration tests
2153929  Delete docs/VLM_METRICS_PROMPT_COMPARISON.md
d314753  feat(metrics): paper docstring fixes, VQA use_probability default, vl…
4530eda  feat(metrics): enhance metric classes with update and compute docstrings
7ecd362  fix(vlm_base): update response_format type hints for clarity
0c1918b  refactor(vlm_base): simplify response_format check for pydantic usage
c050f5d  fix(vlm_base): add "json" option to response_format type hints
3ed3db9  feat(dependencies): add pruna[evaluation] to dev dependencies
0ca173d  refactor(metrics): improve docstring consistency and formatting acros…
6354d59  refactor(metrics): update response formats and improve utility functions
2bf81e9  refactor(metrics): update collation functions and enhance benchmark t…
2e666e9  refactor(data): update seed parameter handling and add warnings for t…
7e9bb3f  feat(data): enhance OneIG dataset support and add new benchmarks
4f92350  feat(metrics): introduce OneIGTextScoreMetric and enhance TextScoreMe…
7ddffbb  feat(metrics): add OneIGAlignmentMetric for dependency-aware scoring
aaccf53  feat(metrics): add OneIG reasoning metric and enhance dataset support
a7dfadf  fix(evaluation): wire GenEval to qa_accuracy with all-or-nothing; ref…
3196605  refactor(evaluation): drop use_outlines; wire transformers via struct…
68ca980  evaluation: rename vlm_utils, deps, and VLM metric polish
fc64c41  evaluation: require VLM model_name, Task vlm_model_name, rename metri…
d41d64e  style(evaluation): ruff import order and format for metrics
9288baa  style(vendor): ruff fixes for oneig_llm2vec
62a1a25  fix(metrics): handle list text_content; simplify VLM and benchmark tests
3755819  Enhance LLM2Vec class with improved docstrings and error handling
bcc385b  Enhance Llama model classes with improved docstrings and version checks
0e5ea18  Refactor type hints and improve error handling in LLM2Vec and Benchma…
52a87ab  Refactor Llama model imports and enhance docstrings for clarity
b38e291  Refactor dataset setup functions and enhance VLM benchmark integration
fbe3180  fix: apply all_or_nothing aggregation in QAAccuracyMetric.update for …
0091bb1  Remove deprecated VLM benchmark integration module
7d6729c  fix: use > 0.5 threshold in all_or_nothing and clean up test imports
008f6ce  fix: use get_score_from_response in VieScoreMetric instead of private…
7b13787  test: verify VQAScore P(Yes) normalization and SmolVLM yes/no token ids
c1f9d99  test: add grandchild chain test for OneIG dependency masking
a77869d  test: verify ImgEdit prompt routing — instruction flows from x into V…
cd4875a  docs: clarify GEditBench 2-criterion scoring gap in VieScoreMetric an…
d595d2d  test: add parametrized auxiliary structure validation per benchmark
aba500d  test: assert metric results are in [0, 1] range in e2e tests
bb7bd67  fix: normalize TextScoreMetric to [0,1] char accuracy (higher_is_bett…
db77933  fix: sum all yes/no prefix token probs in LitellmVLM logprob scoring …
4a2a054  docs: clarify AlignmentScoreMetric as binary VQAScore variant vs VQAM…
8e731a2  fix: correct token decode, text_content extraction, and JSON binary s…
3bc2d18  test: verify bytes are summarized in _safe_json (not expanded to str …
39f331c  feat: add num_samples and multibatch support to vlm_benchmark_helpers
d9e0c35  Fix ruff linting errors and consolidate VLM benchmark test files
3414ad8  fix: make OneIG category smoke test robust against small sample counts
0c426d2  feat: enhance ImgEdit dataset handling and VLM metrics
dacf17b  feat: enhance VLM metrics documentation and improve prompt structure
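Commits fbe3180 and 7d6729c describe wiring GenEval through qa_accuracy with an all-or-nothing aggregation that uses a strict > 0.5 threshold. A minimal sketch of that aggregation rule, assuming per-question scores in [0, 1]; the function name and signature here are illustrative, not Pruna's actual API:

```python
from typing import List


def all_or_nothing(scores: List[float], threshold: float = 0.5) -> float:
    """Collapse per-question scores into a single 0/1 sample score.

    A sample passes only if every question score is strictly above the
    threshold; a single failed question zeroes the whole sample.
    """
    if not scores:
        return 0.0
    return 1.0 if all(s > threshold for s in scores) else 0.0


# One sub-threshold answer fails the entire sample.
print(all_or_nothing([0.9, 0.8, 0.3]))  # 0.0
print(all_or_nothing([0.9, 0.8, 0.6]))  # 1.0
```

The strict comparison means a score of exactly 0.5 counts as a failure, matching the "> 0.5 threshold" wording in commit 7d6729c.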
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Oops, something went wrong.
Oops, something went wrong.
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
Uh oh!
There was an error while loading. Please reload this page.
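Commit 7b13787 tests VQAScore P(Yes) normalization. The standard VQAScore-style scheme renormalizes the model's "yes" and "no" answer-token probabilities against each other so the score always lands in [0, 1]. A sketch of that normalization under those assumptions (names are illustrative and not taken from the PR's code):

```python
import math


def vqa_yes_probability(yes_logprob: float, no_logprob: float) -> float:
    """Normalize 'yes'/'no' token log-probabilities into P(Yes).

    Softmax over just the two answer tokens, so the score is in [0, 1]
    regardless of how much probability mass the rest of the vocabulary gets.
    """
    p_yes = math.exp(yes_logprob)
    p_no = math.exp(no_logprob)
    return p_yes / (p_yes + p_no)


score = vqa_yes_probability(yes_logprob=-0.1, no_logprob=-2.5)
print(round(score, 3))  # 0.917
```

Commit db77933 refines this further by summing the probabilities of all yes/no prefix tokens before normalizing, since tokenizers can split "yes"/"no" into several casing or prefix variants.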