45 changes: 21 additions & 24 deletions gallery/index.yaml
@@ -743,7 +743,6 @@
- https://huggingface.co/mradermacher/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-heretic-i1-GGUF
tags:
- default
- default
overrides:
parameters:
model: llama-cpp/models/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-heretic.i1-Q4_K_M.gguf
@@ -1546,7 +1545,7 @@
- "offload_to_cpu:false"
- "offload_dit_to_cpu:false"
- "init_lm:true"
- "lm_model_path:acestep-5Hz-lm-0.6B" # or acestep-5Hz-lm-4B

Check warning on line 1548 in gallery/index.yaml (GitHub Actions / Yamllint): 1548:45 [comments] too few spaces before comment: expected 2
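The yamllint `comments` rule behind this warning expects (with default settings) at least two spaces between content and an inline `#`. A minimal sketch of the non-compliant and compliant forms, using a hypothetical key for illustration:

```yaml
# Flagged by yamllint: only one space before the inline comment
lm_model_path: acestep-5Hz-lm-0.6B # or acestep-5Hz-lm-4B

# Compliant: two spaces before the "#"
lm_model_path: acestep-5Hz-lm-0.6B  # or acestep-5Hz-lm-4B
```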
- "lm_backend:pt"
- "temperature:0.85"
- "top_p:0.9"
@@ -1915,7 +1914,7 @@
Qwen3-TTS is a high-quality text-to-speech model supporting custom voice, voice design, and voice cloning.
tags:
- text-to-speech
- TTS
- tts
license: apache-2.0
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
name: "qwen3-tts-1.7b-custom-voice"
@@ -1925,7 +1924,7 @@
known_usecases:
- tts
tts:
voice: Aiden # Available speakers: Vivian, Serena, Uncle_Fu, Dylan, Eric, Ryan, Aiden, Ono_Anna, Sohee

Check warning on line 1927 in gallery/index.yaml (GitHub Actions / Yamllint): 1927:20 [comments] too few spaces before comment: expected 2
parameters:
model: Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice
- !!merge <<: *qwen-tts
@@ -1937,7 +1936,7 @@
known_usecases:
- tts
tts:
voice: Aiden # Available speakers: Vivian, Serena, Uncle_Fu, Dylan, Eric, Ryan, Aiden, Ono_Anna, Sohee

Check warning on line 1939 in gallery/index.yaml (GitHub Actions / Yamllint): 1939:20 [comments] too few spaces before comment: expected 2
parameters:
model: Qwen/Qwen3-TTS-12Hz-0.6B-CustomVoice
- &fish-speech
@@ -1947,7 +1946,7 @@
Fish Speech S2-Pro is a high-quality text-to-speech model supporting voice cloning via reference audio. Uses a two-stage pipeline: text to semantic tokens (LLaMA-based) then semantic to audio (DAC decoder).
tags:
- text-to-speech
- TTS
- tts
- voice-cloning
license: apache-2.0
icon: https://huggingface.co/fishaudio/s2-pro/resolve/main/overview.png
@@ -1966,7 +1965,7 @@
Qwen3-ASR is an automatic speech recognition model supporting multiple languages and batch inference.
tags:
- speech-recognition
- ASR
- asr
license: apache-2.0
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
name: "qwen3-asr-1.7b"
@@ -2575,7 +2574,7 @@
license: mit
tags:
- text-to-speech
- TTS
- tts
name: "vibevoice"
urls:
- https://github.com/microsoft/VibeVoice
@@ -2609,7 +2608,7 @@
license: mit
tags:
- text-to-speech
- TTS
- tts
name: "pocket-tts"
urls:
- https://github.com/kyutai-labs/pocket-tts
@@ -3057,8 +3056,8 @@
license: apache-2.0
tags:
- gguf
- GPU
- CPU
- gpu
- cpu
- text-to-text
- jamba
- mamba
@@ -3082,8 +3081,8 @@
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/639bcaa2445b133a4e942436/CEW-OjXkRkDNmTxSu8Egh.png
tags:
- gguf
- GPU
- CPU
- gpu
- cpu
- text-to-text
urls:
- https://huggingface.co/ibm-granite/granite-4.0-h-small
@@ -3145,8 +3144,8 @@
license: apache-2.0
tags:
- gguf
- GPU
- CPU
- gpu
- cpu
- text-to-text
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/64f187a2cc1c03340ac30498/TYYUxK8xD1AxExFMWqbZD.png
urls:
@@ -3169,8 +3168,8 @@
license: mit
tags:
- gguf
- GPU
- CPU
- gpu
- cpu
- text-to-text
icon: https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/9Bnn2AnIjfQFWBGkhDNmI.png
name: "aurore-reveil_koto-small-7b-it"
@@ -3197,8 +3196,8 @@
tags:
- multimodal
- gguf
- GPU
- Cpu
- gpu
- cpu
- image-to-text
- text-to-text
description: |
@@ -3819,7 +3818,6 @@
- gguf
- gpu
- cpu
- gguf
- openai
icon: https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg
urls:
@@ -4005,7 +4003,6 @@
tags:
- gguf
- gpu
- gpu
- text-generation
description: |
AFM-4.5B is a 4.5 billion parameter instruction-tuned model developed by Arcee.ai, designed for enterprise-grade performance across diverse deployment environments from cloud to edge. The base model was trained on a dataset of 8 trillion tokens, comprising 6.5 trillion tokens of general pretraining data followed by 1.5 trillion tokens of midtraining data with enhanced focus on mathematical reasoning and code generation. Following pretraining, the model underwent supervised fine-tuning on high-quality instruction datasets. The instruction-tuned model was further refined through reinforcement learning on verifiable rewards as well as for human preference. We use a modified version of TorchTitan for pretraining, Axolotl for supervised fine-tuning, and a modified version of Verifiers for reinforcement learning.
@@ -6725,7 +6722,7 @@
- gemma3
- gemma-3
overrides:
#mmproj: gemma-3-27b-it-mmproj-f16.gguf

Check warning on line 6725 in gallery/index.yaml (GitHub Actions / Yamllint): 6725:6 [comments] missing starting space in comment
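This warning comes from yamllint's starting-space check: a comment must begin with `# ` (hash, then a space), which commented-out keys like `#mmproj:` do not. A hedged sketch of both forms:

```yaml
overrides:
  #mmproj: gemma-3-27b-it-mmproj-f16.gguf   # flagged: no space after "#"
  # mmproj: gemma-3-27b-it-mmproj-f16.gguf  # compliant: comment starts with "# "
```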
parameters:
model: gemma-3-27b-it-Q4_K_M.gguf
files:
@@ -6743,7 +6740,7 @@
description: |
google/gemma-3-12b-it is an open-source, state-of-the-art, lightweight, multimodal model built from the same research and technology used to create the Gemini models. It is capable of handling text and image input and generating text output. It has a large context window of 128K tokens and supports over 140 languages. The 12B variant has been fine-tuned using the instruction-tuning approach. Gemma 3 models are suitable for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes them deployable in environments with limited resources such as laptops, desktops, or your own cloud infrastructure.
overrides:
#mmproj: gemma-3-12b-it-mmproj-f16.gguf

Check warning on line 6743 in gallery/index.yaml (GitHub Actions / Yamllint): 6743:6 [comments] missing starting space in comment
parameters:
model: gemma-3-12b-it-Q4_K_M.gguf
files:
@@ -6761,7 +6758,7 @@
description: |
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. Gemma-3-4b-it is a 4 billion parameter model.
overrides:
#mmproj: gemma-3-4b-it-mmproj-f16.gguf

Check warning on line 6761 in gallery/index.yaml (GitHub Actions / Yamllint): 6761:6 [comments] missing starting space in comment
parameters:
model: gemma-3-4b-it-Q4_K_M.gguf
files:
@@ -9112,7 +9109,7 @@
description: |
Granite-Embedding-107M-Multilingual is a 107M parameter dense biencoder embedding model from the Granite Embeddings suite that can be used to generate high quality text embeddings. This model produces embedding vectors of size 384 and is trained using a combination of open source relevance-pair datasets with permissive, enterprise-friendly license, and IBM collected and generated datasets. This model is developed using contrastive finetuning, knowledge distillation and model merging for improved performance.
tags:
- embeddings
- embedding
overrides:
backend: llama-cpp
embeddings: true
@@ -9130,7 +9127,7 @@
description: |
Granite-Embedding-125m-English is a 125M parameter dense biencoder embedding model from the Granite Embeddings suite that can be used to generate high quality text embeddings. This model produces embedding vectors of size 768. Compared to most other open-source models, this model was only trained using open-source relevance-pair datasets with permissive, enterprise-friendly license, plus IBM collected and generated datasets. While maintaining competitive scores on academic benchmarks such as BEIR, this model also performs well on many enterprise use cases. This model is developed using retrieval oriented pretraining, contrastive finetuning and knowledge distillation.
tags:
- embeddings
- embedding
overrides:
embeddings: true
parameters:
@@ -9147,7 +9144,7 @@
description: |
EmbeddingGemma 300M is a lightweight, high-quality embedding model from Google, based on the Gemma architecture. It produces 1024-dimensional embeddings optimized for retrieval and semantic similarity tasks. This GGUF version uses QAT (Quantization-Aware Training) Q8_0 quantization for efficient inference.
tags:
- embeddings
- embedding
overrides:
backend: llama-cpp
embeddings: true
@@ -15923,7 +15920,7 @@
tags:
- gpu
- cpu
- embeddings
- embedding
- python
name: "all-MiniLM-L6-v2"
url: "github:mudler/LocalAI/gallery/sentencetransformers.yaml@master"
@@ -16776,7 +16773,7 @@
description: |
llama3.2 embeddings model, usable as a drop-in replacement for bert-embeddings.
tags:
- embeddings
- embedding
overrides:
embeddings: true
parameters:
@@ -18499,7 +18496,7 @@
description: |
Resizable Production Embeddings with Matryoshka Representation Learning
tags:
- embeddings
- embedding
overrides:
embeddings: true
parameters: