
fix(mlx-vlm): pin upstream to v0.4.4 to unblock CUDA builds #9568

Merged
mudler merged 1 commit into master from fix/mlx-vlm-pin-v0.4.4 on Apr 25, 2026

Conversation

mudler (Owner) commented Apr 25, 2026

Blaizzy/mlx-vlm git HEAD bumped its constraint to mlx>=0.31.2, but mlx-cuda-12 and mlx-cuda-13 are only published up to 0.31.1 on PyPI. Since mlx[cudaXX]==0.31.2 forces a sibling wheel that doesn't exist, pip backtracks through every older mlx[cudaXX], none of which satisfy mlx>=0.31.2, producing ResolutionImpossible.

Pin all variants to the v0.4.4 tag (mlx>=0.30.0), which resolves cleanly against mlx[cuda13]==0.31.1. cpu/mps weren't broken yet but are pinned for consistency.

Assisted-by: Claude:claude-opus-4-7
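The backtracking failure described above can be sketched with a simplified version comparison. This is an illustration only: the available-wheel list encodes the PR's claim that the CUDA extras top out at 0.31.1, and real pip resolution uses full PEP 440 semantics rather than this numeric shortcut.

```python
def satisfies(version: str, minimum: str) -> bool:
    """Simplified check for a `>=` constraint: compare dotted versions numerically."""
    parse = lambda v: tuple(int(x) for x in v.split("."))
    return parse(version) >= parse(minimum)

# Latest mlx-cuda-13 wheels published on PyPI, per the PR description.
available_cuda = ["0.31.0", "0.31.1"]

# git HEAD constraint (mlx>=0.31.2): no published CUDA wheel satisfies it,
# so pip backtracks through every candidate and fails with ResolutionImpossible.
print(any(satisfies(v, "0.31.2") for v in available_cuda))  # False

# v0.4.4 constraint (mlx>=0.30.0): 0.31.1 satisfies it, so the install resolves.
print(any(satisfies(v, "0.30.0") for v in available_cuda))  # True
```

The fix itself amounts to requiring the tagged release instead of git HEAD, e.g. a direct reference of the form `mlx-vlm @ git+https://github.com/Blaizzy/mlx-vlm.git@v0.4.4` (the exact requirement syntax used in the repo may differ).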

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
@mudler mudler merged commit d733c9c into master Apr 25, 2026
50 checks passed
@mudler mudler deleted the fix/mlx-vlm-pin-v0.4.4 branch April 25, 2026 20:06
