Changes from all commits (60 commits)
b2cd94d
feat(evaluation): add VLM-based metrics with litellm and transformers…
davidberenstein1957 Feb 21, 2026
7b08693
fix(evaluation): ARNIQA not in torchmetrics - implement manually
davidberenstein1957 Feb 21, 2026
d116038
fix(evaluation): use List-based scores pattern matching Pruna standards
davidberenstein1957 Feb 21, 2026
c695c6e
fix(evaluation): use sync completion instead of async acompletion
davidberenstein1957 Feb 21, 2026
703a3bb
chore(evaluation): remove ARNIQA from VLM PR - has dedicated PR #547
davidberenstein1957 Feb 21, 2026
5edc94d
feat(evaluation): add structured generation to VLM metrics
davidberenstein1957 Feb 21, 2026
8f0089f
fix(evaluation): fix linting issues in VLM metrics
davidberenstein1957 Feb 21, 2026
7dcd735
fix(evaluation): fix remaining linting issues
davidberenstein1957 Feb 21, 2026
e4f29d8
fix(evaluation): fix D205 docstring issues in VLM classes
davidberenstein1957 Feb 21, 2026
0bd6d3e
fix(evaluation): fix import sorting in __init__.py
davidberenstein1957 Feb 21, 2026
fe8a514
fix(evaluation): skip docstring check for metrics_vlm
davidberenstein1957 Feb 21, 2026
f9663a1
fix(evaluation): enhance docstrings for VLM metrics and base classes
davidberenstein1957 Feb 21, 2026
636ab33
feat(evaluation): introduce new VLM metrics and integration tests
davidberenstein1957 Feb 27, 2026
2153929
Delete docs/VLM_METRICS_PROMPT_COMPARISON.md
davidberenstein1957 Feb 27, 2026
d314753
feat(metrics): paper docstring fixes, VQA use_probability default, vl…
davidberenstein1957 Mar 5, 2026
4530eda
feat(metrics): enhance metric classes with update and compute docstrings
davidberenstein1957 Mar 5, 2026
7ecd362
fix(vlm_base): update response_format type hints for clarity
davidberenstein1957 Mar 5, 2026
0c1918b
refactor(vlm_base): simplify response_format check for pydantic usage
davidberenstein1957 Mar 5, 2026
c050f5d
fix(vlm_base): add "json" option to response_format type hints
davidberenstein1957 Mar 5, 2026
3ed3db9
feat(dependencies): add pruna[evaluation] to dev dependencies
davidberenstein1957 Mar 5, 2026
0ca173d
refactor(metrics): improve docstring consistency and formatting acros…
davidberenstein1957 Mar 5, 2026
6354d59
refactor(metrics): update response formats and improve utility functions
davidberenstein1957 Mar 12, 2026
2bf81e9
refactor(metrics): update collation functions and enhance benchmark t…
davidberenstein1957 Mar 17, 2026
2e666e9
refactor(data): update seed parameter handling and add warnings for t…
davidberenstein1957 Mar 19, 2026
7e9bb3f
feat(data): enhance OneIG dataset support and add new benchmarks
davidberenstein1957 Mar 19, 2026
4f92350
feat(metrics): introduce OneIGTextScoreMetric and enhance TextScoreMe…
davidberenstein1957 Mar 19, 2026
7ddffbb
feat(metrics): add OneIGAlignmentMetric for dependency-aware scoring
davidberenstein1957 Mar 19, 2026
aaccf53
feat(metrics): add OneIG reasoning metric and enhance dataset support
davidberenstein1957 Mar 24, 2026
a7dfadf
fix(evaluation): wire GenEval to qa_accuracy with all-or-nothing; ref…
davidberenstein1957 Apr 9, 2026
3196605
refactor(evaluation): drop use_outlines; wire transformers via struct…
davidberenstein1957 Apr 9, 2026
68ca980
evaluation: rename vlm_utils, deps, and VLM metric polish
davidberenstein1957 Apr 9, 2026
fc64c41
evaluation: require VLM model_name, Task vlm_model_name, rename metri…
davidberenstein1957 Apr 9, 2026
d41d64e
style(evaluation): ruff import order and format for metrics
davidberenstein1957 Apr 9, 2026
9288baa
style(vendor): ruff fixes for oneig_llm2vec
davidberenstein1957 Apr 9, 2026
62a1a25
fix(metrics): handle list text_content; simplify VLM and benchmark tests
davidberenstein1957 Apr 9, 2026
3755819
Enhance LLM2Vec class with improved docstrings and error handling
davidberenstein1957 Apr 9, 2026
bcc385b
Enhance Llama model classes with improved docstrings and version checks
davidberenstein1957 Apr 9, 2026
0e5ea18
Refactor type hints and improve error handling in LLM2Vec and Benchma…
davidberenstein1957 Apr 9, 2026
52a87ab
Refactor Llama model imports and enhance docstrings for clarity
davidberenstein1957 Apr 9, 2026
b38e291
Refactor dataset setup functions and enhance VLM benchmark integration
davidberenstein1957 Apr 9, 2026
fbe3180
fix: apply all_or_nothing aggregation in QAAccuracyMetric.update for …
davidberenstein1957 Apr 10, 2026
0091bb1
Remove deprecated VLM benchmark integration module
davidberenstein1957 Apr 10, 2026
7d6729c
fix: use > 0.5 threshold in all_or_nothing and clean up test imports
davidberenstein1957 Apr 10, 2026
008f6ce
fix: use get_score_from_response in VieScoreMetric instead of private…
davidberenstein1957 Apr 10, 2026
7b13787
test: verify VQAScore P(Yes) normalization and SmolVLM yes/no token ids
davidberenstein1957 Apr 10, 2026
c1f9d99
test: add grandchild chain test for OneIG dependency masking
davidberenstein1957 Apr 10, 2026
a77869d
test: verify ImgEdit prompt routing — instruction flows from x into V…
davidberenstein1957 Apr 10, 2026
cd4875a
docs: clarify GEditBench 2-criterion scoring gap in VieScoreMetric an…
davidberenstein1957 Apr 10, 2026
d595d2d
test: add parametrized auxiliary structure validation per benchmark
davidberenstein1957 Apr 10, 2026
aba500d
test: assert metric results are in [0, 1] range in e2e tests
davidberenstein1957 Apr 10, 2026
bb7bd67
fix: normalize TextScoreMetric to [0,1] char accuracy (higher_is_bett…
davidberenstein1957 Apr 10, 2026
db77933
fix: sum all yes/no prefix token probs in LitellmVLM logprob scoring …
davidberenstein1957 Apr 10, 2026
4a2a054
docs: clarify AlignmentScoreMetric as binary VQAScore variant vs VQAM…
davidberenstein1957 Apr 10, 2026
8e731a2
fix: correct token decode, text_content extraction, and JSON binary s…
davidberenstein1957 Apr 12, 2026
3bc2d18
test: verify bytes are summarized in _safe_json (not expanded to str …
davidberenstein1957 Apr 12, 2026
39f331c
feat: add num_samples and multibatch support to vlm_benchmark_helpers
davidberenstein1957 Apr 12, 2026
d9e0c35
Fix ruff linting errors and consolidate VLM benchmark test files
davidberenstein1957 Apr 13, 2026
3414ad8
fix: make OneIG category smoke test robust against small sample counts
davidberenstein1957 Apr 14, 2026
0c426d2
feat: enhance ImgEdit dataset handling and VLM metrics
davidberenstein1957 Apr 15, 2026
dacf17b
feat: enhance VLM metrics documentation and improve prompt structure
davidberenstein1957 Apr 15, 2026
2 changes: 1 addition & 1 deletion docs/user_manual/configure.rst
@@ -253,7 +253,7 @@ Underneath you can find the list of all the available datasets.
- ``text: str``
* - Image Generation
- `LAION256 <https://huggingface.co/datasets/nannullna/laion_subset>`_, `OpenImage <https://huggingface.co/datasets/data-is-better-together/open-image-preferences-v1>`_, `COCO <https://huggingface.co/datasets/phiyodr/coco2017>`_, `DrawBench <https://huggingface.co/datasets/sayakpaul/drawbench>`_, `PartiPrompts <https://huggingface.co/datasets/nateraw/parti-prompts>`_, `GenAIBench <https://huggingface.co/datasets/BaiqiL/GenAI-Bench>`_
-   - ``image_generation_collate``, ``prompt_collate``
+   - ``image_generation_collate``, ``prompt_with_auxiliaries_collate``
- ``text: str``, ``image: Optional[PIL.Image.Image]``
* - Image Classification
- `ImageNet <https://huggingface.co/datasets/zh-plus/tiny-imagenet>`_, `MNIST <https://huggingface.co/datasets/ylecun/mnist>`_, `CIFAR10 <https://huggingface.co/datasets/uoft-cs/cifar10>`_
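The docs diff above tracks a behavioral change: image-generation benchmarks now route through `prompt_with_auxiliaries_collate` instead of `prompt_collate`, so per-sample metadata survives batching and can feed the VLM metrics. A hypothetical sketch of the distinction, assuming simple dict samples (these signatures are illustrative, not the actual Pruna implementations):

```python
# Illustrative contrast between the two collate styles named in the diff;
# signatures and field names are assumptions, not the Pruna implementations.
from typing import Any


def prompt_collate(batch: list[dict[str, Any]]) -> list[str]:
    # Keep only the text prompt from each sample.
    return [sample["text"] for sample in batch]


def prompt_with_auxiliaries_collate(
    batch: list[dict[str, Any]],
) -> tuple[list[str], list[dict[str, Any]]]:
    # Keep the prompt AND carry every remaining field along as auxiliary
    # data (e.g. QA annotations, categories) for downstream metrics.
    prompts = [sample["text"] for sample in batch]
    auxiliaries = [
        {k: v for k, v in sample.items() if k != "text"} for sample in batch
    ]
    return prompts, auxiliaries
```

This also explains the "parametrized auxiliary structure validation per benchmark" test commit: each benchmark's auxiliaries must have a predictable shape.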
10 changes: 9 additions & 1 deletion pyproject.toml
@@ -151,7 +151,7 @@ dependencies = [
"peft>=0.18.0",
"trl<=0.21.0",
"termcolor==2.3.0",
-    "realesrgan"
+    "realesrgan",
]

[project.optional-dependencies]
@@ -166,6 +166,10 @@ vllm = [
"vllm>=0.16.0",
"ray",
]
+evaluation = [
+    "outlines>1.2.0,<2.0.0",
+    "litellm>=1.0.0",
+]
stable-fast = [
"xformers>=0.0.30",
"stable-fast-pruna>=1.0.8,<1.0.9",
@@ -222,6 +226,7 @@ dev = [
"types-PyYAML",
"logbar",
"pytest-xdist>=3.8.0",
+    "pruna[evaluation]",
]
cpu = []
lmharness = [
@@ -234,6 +239,9 @@ intel = [
"torch>=2.7.0,<2.9.0",
"torchvision>=0.22.0,<0.24.0",
]
+mine-replicate = [
+    "replicate>=0.26.0",
+]

[build-system]
requires = ["hatchling"]
24 changes: 22 additions & 2 deletions src/pruna/data/__init__.py
@@ -34,7 +34,13 @@
setup_hps_dataset,
setup_imgedit_dataset,
setup_long_text_bench_dataset,
+    setup_oneig_anime_stylization_dataset,
     setup_oneig_dataset,
+    setup_oneig_general_object_dataset,
+    setup_oneig_knowledge_reasoning_dataset,
+    setup_oneig_multilingualism_dataset,
+    setup_oneig_portrait_dataset,
+    setup_oneig_text_rendering_dataset,
setup_parti_prompts_dataset,
)
from pruna.data.datasets.question_answering import setup_polyglot_dataset
@@ -103,19 +109,33 @@
"image_classification_collate",
{"img_size": 224},
),
-    "DrawBench": (setup_drawbench_dataset, "prompt_collate", {}),
+    "DrawBench": (setup_drawbench_dataset, "prompt_with_auxiliaries_collate", {}),
"PartiPrompts": (
setup_parti_prompts_dataset,
"prompt_with_auxiliaries_collate",
{},
),
-    "GenAIBench": (setup_genai_bench_dataset, "prompt_collate", {}),
+    "GenAIBench": (setup_genai_bench_dataset, "prompt_with_auxiliaries_collate", {}),
"GenEval": (setup_geneval_dataset, "prompt_with_auxiliaries_collate", {}),
"HPS": (setup_hps_dataset, "prompt_with_auxiliaries_collate", {}),
"ImgEdit": (setup_imgedit_dataset, "prompt_with_auxiliaries_collate", {}),
"LongTextBench": (setup_long_text_bench_dataset, "prompt_with_auxiliaries_collate", {}),
"GEditBench": (setup_gedit_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIG": (setup_oneig_dataset, "prompt_with_auxiliaries_collate", {}),
+    "OneIGAnimeStylization": (
+        setup_oneig_anime_stylization_dataset,
+        "prompt_with_auxiliaries_collate",
+        {},
+    ),
+    "OneIGGeneralObject": (setup_oneig_general_object_dataset, "prompt_with_auxiliaries_collate", {}),
+    "OneIGKnowledgeReasoning": (
+        setup_oneig_knowledge_reasoning_dataset,
+        "prompt_with_auxiliaries_collate",
+        {},
+    ),
+    "OneIGMultilingualism": (setup_oneig_multilingualism_dataset, "prompt_with_auxiliaries_collate", {}),
+    "OneIGPortrait": (setup_oneig_portrait_dataset, "prompt_with_auxiliaries_collate", {}),
+    "OneIGTextRendering": (setup_oneig_text_rendering_dataset, "prompt_with_auxiliaries_collate", {}),
"DPG": (setup_dpg_dataset, "prompt_with_auxiliaries_collate", {}),
"TinyIMDB": (setup_tiny_imdb_dataset, "text_generation_collate", {}),
"VBench": (setup_vbench_dataset, "prompt_with_auxiliaries_collate", {}),
Expand Down
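Each registry entry in the `__init__.py` diff is a `(setup_fn, collate_name, kwargs)` tuple keyed by dataset name. A hypothetical sketch of how such an entry might be consumed, with a toy registry; the loader name and shapes here are illustrative assumptions, not the actual Pruna loader:

```python
# Hypothetical consumer of a (setup_fn, collate_name, kwargs) registry
# entry like the ones in this diff; names are illustrative assumptions.
from typing import Any, Callable

RegistryEntry = tuple[Callable[..., Any], str, dict[str, Any]]


def load_dataset_entry(
    registry: dict[str, RegistryEntry], name: str
) -> tuple[Any, str]:
    """Resolve a registry entry into (dataset, collate function name)."""
    setup_fn, collate_name, kwargs = registry[name]
    # The kwargs dict carries per-dataset setup options (e.g. img_size).
    return setup_fn(**kwargs), collate_name


# Toy registry mirroring the structure of the real one.
toy_registry: dict[str, RegistryEntry] = {
    "OneIGPortrait": (
        lambda: ["portrait prompts"],
        "prompt_with_auxiliaries_collate",
        {},
    ),
}
dataset, collate = load_dataset_entry(toy_registry, "OneIGPortrait")
```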