Flash-MoE inference for large GGUF Mixture-of-Experts models on Apple Silicon, using llama.cpp.
This repo stays close to upstream
ggml-org/llama.cpp, but adds sidecar-backed routed expert loading, streamed slot-bank execution, trace capture, oracle replay, and bank-modeling tools for Flash-MoE workflows.

- Flash-MoE-specific guide: tools/flashmoe-sidecar/README.md
- Flash-MoE bank-modeling workflow: docs/moe-bank-modeling-workflow.md
- Native slot-bank model-porting guide: docs/native-slot-bank-porting.md
Per-model extract + run recipes live in tools/flashmoe-sidecar/README.md:
| Model | Arch | Status | Extract | Run |
|---|---|---|---|---|
| Qwen3.5-35B-A3B | qwen3moe | Stable (anchor) | Extract | Run |
| Qwen3.5-397B-A17B | qwen35moe (linear-attn) | Stable (slot-bank required) | Extract | Run |
| Gemma4-26B-A4B | gemma4 | Stable | Extract | Run |
| Kimi K2 / K2.5 | deepseek2 (MLA) | Experimental | Extract | Run |
| GLM-5.1 (256×22B) | glm-dsa (MLA + DSA indexer) | Experimental | Extract | Run |
Build instructions are below; once built, jump to the extract + run recipe for your model.
- Qwen3.5 GGUF MoE is the current anchor path for bring-up and regression work. Stable.
- Gemma4-26B-A4B GGUF MoE: stable. Native `n_expert_used = 8`. Sensitive to slot-bank size on smaller-memory machines — start with `--moe-slot-bank 8` or `16`.
- Kimi-K2 and Kimi-K2.5 GGUF support is experimental: sidecar extraction, slot-bank runtime, trace capture, and bank modeling work, but quality and performance are still being tuned. Kimi currently requires `-ub 1` for correct output. The `-ngl 99` dense GPU path produces degraded output on some runs due to a Metal compute issue under investigation.
- GLM-5.1 (glm-dsa, 256×22B MoE) is experimental: 79 layers, MLA attention with `q_lora_rank=2048`, DeepSeek-V3-style sigmoid routing (K=8 of 256), a shared expert, and a per-layer DSA sparse-attention indexer. Slot-bank streaming works; the DSA indexer adds per-layer dense matmul overhead that K2.5 doesn't have, and the IQ1_M / IQ2_XXS `mul_mat_id` Metal kernels are not yet on the hot path. Realistic decode on M5 Max 128 GB is currently ~3.9 tok/s with `--moe-topk 4 --moe-prefetch-temporal --moe-slot-bank 64`. Shares the DeepSeek2 GPU-bank fallback path.
- The recommended build uses `-DLLAMA_FLASH_MOE_GPU_BANK=ON` (the default). In slot-bank mode, routed experts are not loaded into GPU memory — they stream from SSD. Use `-ngl 99` to offload dense/shared weights to GPU; the fitter clamps dense/shared placement against the routed slot-bank budget.
- For Kimi/DeepSeek2/GLM-5.1, GPU-bank placement of routed experts is enabled by default in GPU-bank builds. Set `LLAMA_FLASH_MOE_DISABLE_UNSAFE_DEEPSEEK2_GPU_BANK=1` to force the host-backed slot-bank path if you hit hangs or memory pressure.
Flash-MoE is not mainly about SSD reads. It is about changing the boundary between dense execution, expert storage, expert selection, and expert consumption.
The dense path should stay inside the backend that is already good at it. The sparse path should be reshaped around a stable bank or slot-bank plus ids-as-data.
By "banking" we mean this: instead of treating every routed expert as a fresh tensor object that has to be materialized on demand, the runtime keeps a small resident execution surface per layer, made of stable slots. A routed expert_id is first resolved to a resident slot_id; if the expert is already there, execution is just an indexed hit, and if not, the runtime loads the expert bytes and commits them into a victim slot before use. The important part is that the execution shape stays stable while expert identity changes, which is much cheaper than rebuilding a tiny K-expert bank every token.
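The resolve-or-commit loop described above can be sketched in a few lines. This is a minimal, hypothetical Python model of one layer's slot bank with LRU victim selection, not the repo's C++ implementation; names like `LayerSlotBank` and `resolve` are illustrative only.

```python
from collections import OrderedDict

class LayerSlotBank:
    """Toy per-layer slot bank: stable slots, changing expert identity.

    A hit is an indexed lookup; a miss loads the expert bytes and
    commits them into an LRU victim slot before use. The execution
    shape (the set of slots) never changes, only the id -> slot map."""

    def __init__(self, n_slots, load_expert_bytes):
        self.load = load_expert_bytes          # callable: expert_id -> bytes
        self.lru = OrderedDict()               # expert_id -> slot_id, LRU order
        self.free = list(range(n_slots))       # unused slot ids
        self.slots = [None] * n_slots          # slot_id -> committed bytes
        self.hits = self.misses = 0

    def resolve(self, expert_id):
        if expert_id in self.lru:              # hit: just an indexed access
            self.lru.move_to_end(expert_id)
            self.hits += 1
            return self.lru[expert_id]
        self.misses += 1
        if self.free:
            slot_id = self.free.pop()
        else:                                  # evict least-recently-used expert
            _, slot_id = self.lru.popitem(last=False)
        self.slots[slot_id] = self.load(expert_id)   # commit before use
        self.lru[expert_id] = slot_id
        return slot_id

bank = LayerSlotBank(n_slots=4, load_expert_bytes=lambda e: b"expert-%d" % e)
for expert_id in [7, 3, 7, 9, 1, 3, 5, 7]:     # routed ids across tokens
    bank.resolve(expert_id)
print(bank.hits, bank.misses)                   # → 2 6
```

The point of the sketch is the invariant, not the policy: a repeated expert id resolves to the same slot with no materialization, which is why changing ids on a stable bank is cheap.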
A stable bank with changing ids is fast. Per-token K-expert materialization is slow.
The expensive part is not dynamic routing by itself. The expensive part is forcing the runtime to keep rebuilding selected experts in the hot path. Once that boundary is moved, a streamed slot-bank becomes the first realistic external-expert shape. Miss path and hit path become different problems. Oracle ceilings become mandatory, because they tell you whether the next bottleneck is consume, commit, or prefetch.
This is the workflow to reuse across backends:
- Measure the normal resident baseline.
- Build a resident packed-bank path.
- Prove that changing ids is cheap on that good path.
- Add one intentionally bad K-materialization diagnostic.
- Build a real streamed slot-bank.
- Measure locality before designing cache policy.
- Build oracle all-hit replay and one-step oracle prefetch replay.
- Decide whether the next work belongs in the hit path or the miss path.
- Only then move more of the boundary into native consume if backend-level slot-bank work stalls.
Stable bank or slot plus ids is the first shape to try. Per-token K-expert materialization is useful as a diagnostic, but usually the wrong product shape. Hit path and miss path are different bottlenecks. Early slot-bank numbers are rungs, not ceilings. And native code only matters if the execution boundary really moves.
- `--moe-sidecar` to attach a routed-expert sidecar manifest or bank directory
- `--moe-mode stock|resident|resident-bank|slot-bank|oracle-all-hit|oracle-prefetch`
- `--moe-slot-bank` for streamed resident slot capacity per routed MoE layer
- `--moe-topk` for an experimental reduction-only routed expert override
- `--moe-prefetch-temporal` for runtime one-step temporal prefetch on top of `slot-bank`
- `--moe-shared-only` for a shared-expert-only diagnostic path that bypasses routed experts at graph build time
- `--moe-trace-harness` for long non-interactive raw trace runs from `llama-cli`
- `--moe-trace` and `--moe-verify-sidecar` for replay and validation workflows
- sidecar extract / inspect / verify tooling under `tools/flashmoe-sidecar/`
git clone https://github.com/Anemll/anemll-flash-llama.cpp.git
cd anemll-flash-llama.cpp
# Metal (Apple Silicon) — recommended
cmake -S . -B build \
-DGGML_METAL=ON \
-DCMAKE_BUILD_TYPE=Release \
-DLLAMA_FLASH_MOE_GPU_BANK=ON
cmake --build build --config Release -j"$(sysctl -n hw.ncpu)" \
--target llama-cli llama-bench llama-perplexity
# CUDA (NVIDIA)
cmake -S . -B build \
-DGGML_CUDA=ON \
-DCMAKE_BUILD_TYPE=Release \
-DLLAMA_FLASH_MOE_GPU_BANK=ON
cmake --build build --config Release -j"$(nproc)" \
--target llama-cli llama-bench
# CPU-only fallback
cmake -S . -B build \
-DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j"$(nproc)" \
--target llama-cli llama-benchTo disable GPU-bank placement at compile time (also controllable at runtime via LLAMA_FLASH_MOE_DISABLE_GPU_BANK=1):
cmake -S . -B build \
-DGGML_METAL=ON \
-DCMAKE_BUILD_TYPE=Release \
  -DLLAMA_FLASH_MOE_GPU_BANK=OFF

The slot-bank holds N recently-used experts per layer in memory. Larger banks mean higher hit rates but more memory.
Qwen3.5-35B-A3B (256 experts/layer, ~920 KB per expert slot):
| Bank | Memory | Good for |
|---|---|---|
| 8 | 0.3 GiB | 8 GB machines — leaves room for dense weights |
| 16 | 0.6 GiB | 8-16 GB — good balance for small machines |
| 32 | 1.1 GiB | 16-32 GB |
| 64 | 2.2 GiB | 32-64 GB |
| 128 | 4.5 GiB | 64+ GB — diminishing returns above this |
Rule of thumb: bank should be 5-15% of available RAM. On machines where the GGUF fits entirely in memory, just use --moe-mode stock (no sidecar needed).
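The sizing table above follows directly from the per-slot arithmetic. This small sketch reproduces it using the Qwen3.5-35B-A3B figures quoted in this README (~920 KB per expert slot, 40 routed MoE layers); the exact per-slot size depends on quantization, so treat the output as an estimate.

```python
# Bank memory = slots/layer × bytes/slot × routed layers.
SLOT_BYTES = 920 * 1024      # ~920 KB per expert slot (IQ2_M, 35B)
N_LAYERS = 40                # routed MoE layers in Qwen3.5-35B-A3B

def bank_gib(slots_per_layer):
    return slots_per_layer * SLOT_BYTES * N_LAYERS / 2**30

for bank in (8, 16, 32, 64, 128):
    print(f"bank={bank:3d}  ~{bank_gib(bank):.1f} GiB")
# → bank=  8  ~0.3 GiB ... bank=128  ~4.5 GiB, matching the table
```

For a different model, substitute its slot size and routed-layer count; the 5-15% rule of thumb then gives you the bank value to start from.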
Quick reference by machine:
| Machine | RAM | Recommended | Notes |
|---|---|---|---|
| M1/M2 8 GB | 8 GB | `--moe-slot-bank 16 -ngl 0` | Dense on CPU, experts streamed |
| M1/M2 16 GB | 16 GB | `--moe-slot-bank 32 -ngl 99` | Dense on GPU if it fits |
| M3/M4 Pro 36 GB | 36 GB | `--moe-slot-bank 64 -ngl 99` | 35B fits resident — use stock |
| M4/M5 Max 128 GB | 128 GB | `--moe-slot-bank 128 -ngl 99` | 35B: use stock. 397B: slot-bank |
A Mixture-of-Experts model has two kinds of weights:
- Dense weights — attention, norms, embedding, LM head, shared experts. These are small, used on every token, and stay resident in memory. In a GGUF file these are the non-`_exps` tensors.
- Routed expert weights — gate, up, and down projections for each of the N experts per layer. These are large (often 90%+ of total model size) but only K of N are activated per token. In a GGUF these are `ffn_gate_exps`, `ffn_up_exps`, `ffn_down_exps`.
In stock mode, llama.cpp loads everything into one memory space — dense and experts together. This works when the model fits in RAM.
In slot-bank mode, the runtime splits execution:
- The GGUF file provides dense weights (loaded normally via mmap)
- A sidecar directory provides routed expert weights as separate per-layer binary files, streamed from SSD on demand via `pread()`
GGUF file (dense) Sidecar directory (experts)
┌─────────────────────┐ ┌─────────────────────────┐
│ attention Q/K/V/O │ │ layer_000.bin (3 tensors: gate + up + down)
│ norms, embedding │ │ layer_001.bin
│ LM head │ │ ...
│ shared experts │ │ layer_059.bin
│ routing gates │ │ manifest.json
└─────────────────────┘ └─────────────────────────┘
~5-30 GB ~9 GB (35B) / ~217 GB (K2.5)
always in memory streamed from SSD per token
The sidecar is created by extracting routed expert tensors from the GGUF into separate files. The manifest.json maps tensor names to byte offsets within each layer file, preserving the original quantization format. At runtime, only the K active experts per layer are read from disk — not all N.
This split is what makes it possible to run models whose expert weights exceed available RAM. The dense weights stay resident while the slot-bank runtime manages a small cache of recently-used experts, evicting and loading as routing decisions change.
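The miss path of that split can be sketched as a single positioned read. The manifest schema below (name to file/offset/stride) is illustrative only; see tools/flashmoe-sidecar/ for the actual manifest.json layout.

```python
import json, os

def read_expert(sidecar_dir, manifest, tensor_name, expert_id):
    """Fetch one routed expert's quantized bytes from a sidecar layer
    file with a single os.pread(). Manifest fields are hypothetical:
    {tensor_name: {"file": ..., "offset": ..., "expert_stride": ...}}."""
    entry = manifest[tensor_name]
    fd = os.open(os.path.join(sidecar_dir, entry["file"]), os.O_RDONLY)
    try:
        # Each expert occupies a fixed stride inside the per-layer file,
        # so only the K active experts are ever read, never all N.
        offset = entry["offset"] + expert_id * entry["expert_stride"]
        return os.pread(fd, entry["expert_stride"], offset)
    finally:
        os.close(fd)
```

Because the sidecar preserves the original quantization, the bytes returned here can be committed into a slot and consumed by the same kernels that would have read them from a resident tensor.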
# 1. Download model
huggingface-cli download unsloth/Qwen3.5-35B-A3B-GGUF \
--include "Qwen3.5-35B-A3B-UD-IQ2_M.gguf" \
--local-dir ~/Models
# 2. Inspect routed expert tensors
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py inspect \
--model ~/Models/Qwen3.5-35B-A3B-UD-IQ2_M.gguf
# → 40 layers, 120 tensors, 9.0 GB (ffn_gate_exps + ffn_up_exps + ffn_down_exps)
# 3. Extract sidecar (all routed experts)
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py extract \
--model ~/Models/Qwen3.5-35B-A3B-UD-IQ2_M.gguf \
--out-dir ~/Models/flash/qwen35 \
--force
# → wrote 120 tensors across 40 layers (9.0 GB)
# → manifest: ~/Models/flash/qwen35/manifest.json
# 4. Verify byte-level integrity
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py verify \
--model ~/Models/Qwen3.5-35B-A3B-UD-IQ2_M.gguf \
--sidecar ~/Models/flash/qwen35
# → verified 120 Flash-MoE sidecar entries against 1 GGUF file(s)
# 5. Run (adjust --moe-slot-bank and -ngl to your machine, see table above)
# 128 GB: --moe-slot-bank 128 -ngl 99
# 8 GB: --moe-slot-bank 16 -ngl 0
build/bin/llama-cli \
-m ~/Models/Qwen3.5-35B-A3B-UD-IQ2_M.gguf \
--moe-mode slot-bank --moe-sidecar ~/Models/flash/qwen35 \
--moe-slot-bank 16 --moe-topk 4 --moe-prefetch-temporal \
--moe-trace-harness --no-warmup \
-ub 1 -b 64 -ngl 99 -c 256 --seed 0 --temp 0 \
  -p "What is Apple Neural Engine" -n 120

- `-ub 1`: required for correct MoE prefill on GPU — multi-token ubatch produces incorrect output.
- `-ngl 99`: offload dense weights to GPU (use when dense weights fit in VRAM).
- `-ngl 0`: keep everything on CPU (8 GB machines, or when GPU memory is tight).
- `--moe-slot-bank N`: see the sizing table above — 16 for 8 GB, 128 for 128 GB.
# 1. Download model (5 shards, ~250 GB total)
huggingface-cli download moonshotai/Kimi-K2.5-GGUF \
--include "Kimi-K2.5-UD-TQ1_0-*.gguf" \
--local-dir ~/Models/Kimi
# 2. Inspect — pass the first shard, tool auto-discovers all 5
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py inspect \
--model ~/Models/Kimi/Kimi-K2.5-UD-TQ1_0-00001-of-00005.gguf
# → 60 layers, 180 tensors, 217 GB (deepseek2 arch, 256 experts/layer)
# 3. Extract sidecar (all routed experts, ~20 min)
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py extract \
--model ~/Models/Kimi/Kimi-K2.5-UD-TQ1_0-00001-of-00005.gguf \
--out-dir ~/Models/flash/Kimi-K2.5-sidecar \
--force
# → wrote 180 tensors across 60 layers (217 GB)
# 4. Verify
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py verify \
--model ~/Models/Kimi/Kimi-K2.5-UD-TQ1_0-00001-of-00005.gguf \
--sidecar ~/Models/flash/Kimi-K2.5-sidecar
# → verified 180 Flash-MoE sidecar entries against 5 GGUF file(s)
# 5. Run (adjust --moe-slot-bank and -ngl to your machine, see table above)
# 128 GB: --moe-slot-bank 128 -ngl 99
# 8 GB: --moe-slot-bank 16 -ngl 0
build/bin/llama-cli \
-m ~/Models/Kimi/Kimi-K2.5-UD-TQ1_0-00001-of-00005.gguf \
--moe-mode slot-bank --moe-sidecar ~/Models/flash/Kimi-K2.5-sidecar \
--moe-slot-bank 64 --moe-topk 4 --moe-prefetch-temporal \
--moe-trace-harness --no-warmup \
-ub 1 -b 64 -ngl 0 -c 256 --seed 0 --temp 0 \
  -p "Compare Apple Neural Engine vs Google's TPU" -n 100

In slot-bank mode, routed expert weights are not loaded into GPU memory at startup.
Dense and shared weights are offloaded to GPU via -ngl 99 as usual.
Routed experts are streamed from SSD on demand by the slot-bank runtime —
only the K active experts per token are read, and recently-used experts
are kept in a host-memory cache (sized by --moe-slot-bank).
For debugging slot-bank issues, enable backend tracing, a routing trace file, and verbose logging:
env \
LLAMA_FLASH_MOE_BACKEND_TRACE=1 \
build/bin/llama-cli \
--color off --simple-io \
-m ~/Models/Kimi/Kimi-K2.5-UD-TQ1_0-00001-of-00005.gguf \
--moe-mode slot-bank --moe-sidecar ~/Models/flash/Kimi-K2.5-sidecar \
--moe-slot-bank 64 --moe-topk 4 --moe-prefetch-temporal \
--moe-trace /tmp/flashmoe.trace.jsonl \
--log-file /tmp/flashmoe.log \
--log-prefix --log-timestamps -v \
--no-warmup \
-ub 1 -b 64 -ngl 99 -c 4096 --seed 0 --temp 0 \
  -p "What is Apple Neural Engine?" -n 128

Key flags:

- `LLAMA_FLASH_MOE_BACKEND_TRACE=1` — logs backend-level slot install/eviction events
- `LLAMA_FLASH_MOE_DEEP_LOG=1` — even more verbose internal logging
- `--moe-trace /tmp/flashmoe.trace.jsonl` — writes per-token routing decisions (layer, expert ids, slot hits/misses)
- `--log-file /tmp/flashmoe.log` — redirects all log output to a file
- `--log-prefix --log-timestamps -v` — adds timestamps and verbose detail to log lines
Output files:
- `/tmp/flashmoe.trace.jsonl` — one JSON line per token with routing decisions per layer
- `/tmp/flashmoe.log` — full runtime log with timestamps
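A trace file like this can be reduced to per-layer hit rates with a few lines of post-processing. The field names used below ("layers", "layer", "hits", "misses") are assumptions for illustration; check the schema your build actually emits before relying on them.

```python
import json
from collections import defaultdict

def per_layer_hit_rates(path):
    """Aggregate a per-token routing-trace JSONL file into per-layer
    slot-bank hit rates. Assumed (hypothetical) line schema:
    {"layers": [{"layer": 0, "hits": 3, "misses": 1}, ...]}"""
    hits, total = defaultdict(int), defaultdict(int)
    with open(path) as f:
        for line in f:
            token = json.loads(line)
            for rec in token["layers"]:
                hits[rec["layer"]] += rec["hits"]
                total[rec["layer"]] += rec["hits"] + rec["misses"]
    return {l: hits[l] / total[l] for l in sorted(total) if total[l]}
```

Per-layer (rather than global) hit rates matter because routing locality often differs between early and late layers, which changes where a bigger bank actually pays off.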
After extraction:
~/Models/flash/qwen35/
manifest.json # tensor map: names, shapes, quant types, offsets
layer_000.bin # all routed expert tensors for layer 0
layer_001.bin # ...
...
layer_039.bin # layer 39
~/Models/flash/Kimi-K2.5-sidecar/
manifest.json
layer_001.bin # Kimi starts routed MoE at layer 1
...
layer_060.bin
# Extract only layers 0-4 (quick test)
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py extract \
--model ~/Models/Qwen3.5-35B-A3B-UD-IQ2_M.gguf \
--out-dir /tmp/qwen35-partial \
--layers 0-4 --force
# Extract only gate/up experts (skip down_proj)
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py extract \
--model ~/Models/Qwen3.5-35B-A3B-UD-IQ2_M.gguf \
--out-dir /tmp/qwen35-gateup \
--families ffn_gate_exps,ffn_up_exps --force
# Include shared experts alongside routed
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py extract \
--model ~/Models/Kimi/Kimi-K2.5-UD-TQ1_0-00001-of-00005.gguf \
--out-dir ~/Models/flash/Kimi-K2.5-with-shared \
  --include-shared --force

Current Flash-MoE slot-bank examples use -ub 1. Historical benchmark tables below are kept as recorded experiments; if a benchmark heading or table reflects a different ubatch value, treat it as measurement context rather than the current recommended setting.
Qwen3.5-35B-A3B-UD-IQ2_M (256 experts/layer, k→4 via --moe-topk, 10.60 GiB, 2.63 BPW)
Recommended config:
llama-cli -m Qwen3.5-35B-A3B-UD-IQ2_M.gguf \
--moe-mode slot-bank --moe-sidecar ~/Models/flash/qwen35 \
--moe-slot-bank 16 --moe-topk 4 --moe-prefetch-temporal \
--moe-trace-harness --no-warmup \
-ub 1 -b 64 -ngl 99 -c 256 --seed 0 --temp 0 \
  -p "What is Apple Neural Engine" -n 120

| Bank | Decode tok/s | Prompt tok/s | Hit rate | Misses/call | I/O GiB | pread ops |
|---|---|---|---|---|---|---|
| 16 | 46.9 | 79.2 | 58.8% | 1.68 | 7.21 | 24,642 |
| 32 | 51.9 | 75.5 | 72.8% | 1.11 | 4.77 | 16,299 |
| 64 | 53.2 | 61.5 | 81.8% | 0.74 | 3.18 | 10,869 |
| 128 | 53.0 | 75.5 | 84.5% | 0.64 | 2.72 | 9,297 |
| 256 | 55.2 | 76.0 | 84.5% | 0.63 | 2.72 | 9,291 |
bank=64-128 is the sweet spot: 53 t/s decode at 82-85% hit rate. bank=256 squeezes out 2 more t/s but doubles resident memory. Temporal prefetch: near-100% hit after warm-up.
| topk | Decode tok/s | Prompt tok/s | Hit % | I/O GiB | pread ops |
|---|---|---|---|---|---|
| 2 | 52.0 | 59.5 | 77.9% | 1.94 | 6,630 |
| 4 | 52.9 | 71.5 | 84.5% | 2.72 | 9,297 |
| 8 (default) | 42.8 | 31.2 | 86.3% | 4.79 | 16,389 |
--moe-topk 4 is the sweet spot — 24% faster than default k=8, with better prompt throughput. k=2 saves I/O but slightly lower hit rate.
| ub | Decode tok/s | Prompt tok/s | Notes |
|---|---|---|---|
| 1 | 55.0 | 45.2 | Best decode, slow prompt |
| 2 | 56.7 | 61.2 | Best decode |
| 4 | 54.8 | 74.3 | Best overall balance |
| 8 | 54.1 | 75.3 | |
| 16 | 52.4 | 75.6 | |
| 32 | 50.8 | 73.7 | |
-ub 1 is required for correct MoE prefill on GPU. Multi-token ubatch (-ub 2 and above) may produce incorrect output with MoE models due to a prefill batching issue.
topk resolve: 0.17 ms (routing + softmax + top-k selection)
slot resolve: 0.96 ms (LRU lookup + victim selection)
pread install: 265.55 ms (9,297 pread ops, 2.72 GiB total)
source I/O: 149.01 ms (pread from sidecar layer files)
GPU upload: 113.73 ms (copy to Metal buffer)
slot-write: 0.11 ms
trace: 0.09 ms
The GPU upload cost (114 ms) is new vs the old CPU-only path — but it's worth it because GPU compute on the dense path is 3-4x faster.
| Mode | Decode tok/s | Prefill tok/s | Notes |
|---|---|---|---|
| stock (all resident, `-ngl 99`) | 109.0 | 3,070 | Ceiling — all experts in GPU |
| slot-bank 128 (sidecar, `-ngl 99`) | 53.0 | 75.5 | GPU-bank + pread streaming |
Slot-bank decode reaches 49% of stock — the gap is pread I/O + GPU upload for the ~16% of experts that miss the bank each token.
| Batch | tok/s |
|---|---|
| 1 | 97 |
| 128 | 1,955 |
| 256 | 2,567 |
| 512 | 2,997 |
| 1024 | 3,021 |
| 2048 | 3,000 |
| Tokens | tok/s |
|---|---|
| 32 | 108.3 |
| 64 | 108.9 |
| 128 | 109.1 |
| 256 | 109.3 |
| 512 | 105.0 |
| Quant | PPL | BPW | Size |
|---|---|---|---|
| IQ2_M | ~9.97 | 2.63 | 10.60 GiB |
Context 4352, stride 512, 4 chunks. IQ2_M is a streaming/speed test quant — quality targets require Q4_K_M or higher.
Trace-harness mode produces coherent, detailed text (tested: RISC vs CISC architecture explanation, Apple Neural Engine description). llama-simple produces degenerate output due to sampler chain differences.
| Tool | Test | Result |
|---|---|---|
| `flashmoe_sidecar.py inspect` | 40 layers, 120 expert tensors | PASS |
| `flashmoe_sidecar.py extract` | 40 layers → 120 tensors (9.0 GB) | PASS |
| `flashmoe_sidecar.py verify` | metadata + byte-level check | PASS (9/9) |
Kimi-K2.5-UD-TQ1_0 (256 experts/layer, 60 layers, 217 GB sidecar — real SSD streaming)
llama-cli -m Kimi-K2.5-UD-TQ1_0-00001-of-00005.gguf \
--moe-mode slot-bank --moe-sidecar ~/Models/flash/Kimi-K2.5-sidecar \
--moe-slot-bank 64 --moe-topk 4 --moe-prefetch-temporal \
--moe-trace-harness --no-warmup \
-ub 1 -b 64 -ngl 0 -c 256 --seed 0 --temp 0 \
-p "What is Apple Neural Engine" -n 100| Bank | Decode tok/s | Hit rate | Misses/call | I/O GiB | pread ops |
|---|---|---|---|---|---|
| 16 | 3.2 | 55.6% | 1.82 | 107.0 | 33,435 |
| 32 | 3.2 | 64.1% | 1.47 | 86.2 | 27,012 |
| 64 | 3.3 | 73.7% | 1.08 | 62.8 | 19,755 |
| 128 | 3.2 | 76.4% | 0.97 | 56.2 | 17,781 |
| 256 | 3.2 | 76.4% | 0.97 | 56.2 | 17,781 |
bank=64 is the best balance for Kimi K2.5. Hit rate saturates at bank=128 (76.4%).
Temporal prefetch: 100% hit (10 cold misses / 6,120 calls). Prefetch I/O: 0.11 GiB — near-zero overhead.
| topk | Decode tok/s | Hit % | I/O GiB | pread ops |
|---|---|---|---|---|
| 2 | 4.7 | 67.6% | 38.3 | 12,192 |
| 4 | 3.3 | 73.7% | 62.8 | 19,755 |
| 8 (default) | 2.2 | 67.2% | 157.9 | 49,266 |
--moe-topk 4 halves I/O vs default k=8 while maintaining generation quality. --moe-topk 2 is 43% faster but may degrade output for complex tasks.
Coherent, factually accurate output at k=4:
Apple Neural Engine (ANE) is a specialized hardware component that is designed to accelerate machine learning tasks on Apple devices. It is integrated into Apple's A-series and M-series processors, which power iPhones, iPads, and Macs. The ANE is specifically designed to handle machine learning tasks such as image recognition, natural language processing, and augmented reality.
Model: Kimi-K2.5, 60 layers, 256 experts/layer, sidecar 217 GB
Experts: 3.28 MB each (TQ1_0)
Per-token: ~k×60 = 240 expert loads on miss, ~3.3 MB × 240 = ~790 MB/tok worst case
topk resolve: 1.89 ms
slot resolve: 2.40 ms
pread install: 13,852 ms total (19,755 ops, 62.8 GiB)
per pread: 0.70 ms avg (3.28 MB per read, ~4.7 GB/s effective)
prefetch: 22.1 ms total (100% hit after warm-up)
SSD throughput: ~4.7 GB/s effective (Apple Fabric, 3.28 MB sequential pread). Bottleneck is pure I/O — the 217 GB sidecar exceeds the 128 GB page cache, so most experts are cold reads from NVMe.
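The sizing and throughput figures above can be sanity-checked with quick arithmetic. All inputs are taken from this README; "GB" is decimal here while GiB is binary, which is why the aggregate division lands slightly above the quoted per-pread figure (3.28 MB / 0.70 ms ≈ 4.7 GB/s).

```python
# Back-of-envelope check of the Kimi K2.5 numbers above.
EXPERT_MB = 3.28          # TQ1_0 expert size
K, LAYERS = 4, 60         # --moe-topk 4, 60 routed layers

worst_case_loads = K * LAYERS                 # all-miss token
worst_case_mb = worst_case_loads * EXPERT_MB
print(f"{worst_case_loads} loads, ~{worst_case_mb:.0f} MB/token worst case")
# → 240 loads, ~787 MB/token worst case

io_gib, total_ms = 62.8, 13_852               # pread install totals (bank=64)
gbps = io_gib * 2**30 / 1e9 / (total_ms / 1e3)
print(f"effective throughput ~{gbps:.1f} GB/s")
# → effective throughput ~4.9 GB/s
```

At ~790 MB per all-miss token, pure SSD bandwidth caps decode at a handful of tokens per second, which is why hit rate and prefetch, not compute, dominate the Kimi numbers.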
Kimi currently uses -ngl 0 as the safe fallback. The -ngl 99 dense GPU path is faster (4.5 t/s) but produces degraded output on some runs due to a Metal compute issue with the DeepSeek2 architecture. Use -ngl 99 for speed testing, -ngl 0 for quality.
The 397B model uses the same slot-bank workflow as the 35B — just with larger sidecars and more expert layers (60 vs 40).
# Dense GGUF (IQ2_M quantization)
huggingface-cli download unsloth/Qwen3.5-397B-A17B-GGUF \
--include "Qwen3.5-397B-A17B-UD-IQ2_M-*.gguf" \
--local-dir ~/Models/Qwen3.5-397B
# Or use a different quantization — check unsloth for available options

python3 tools/flashmoe-sidecar/flashmoe_sidecar.py extract \
--model ~/Models/Qwen3.5-397B/Qwen3.5-397B-A17B-UD-IQ2_M-00001-of-00005.gguf \
--out-dir ~/Models/flash/qwen397b \
--force
python3 tools/flashmoe-sidecar/flashmoe_sidecar.py verify \
--model ~/Models/Qwen3.5-397B/Qwen3.5-397B-A17B-UD-IQ2_M-00001-of-00005.gguf \
  --sidecar ~/Models/flash/qwen397b

build/bin/llama-cli \
-m ~/Models/Qwen3.5-397B/Qwen3.5-397B-A17B-UD-IQ2_M-00001-of-00005.gguf \
--moe-mode slot-bank --moe-sidecar ~/Models/flash/qwen397b \
--moe-slot-bank 128 --moe-topk 4 --moe-prefetch-temporal \
--moe-trace-harness --no-warmup \
-ub 1 -b 64 -ngl 99 -c 256 --seed 0 --temp 0 \
  -p "What is Apple Neural Engine?" -n 200

Key differences from 35B:

- `--moe-slot-bank 128` — larger bank for 512 experts/layer (vs 256 for 35B)
- `-ngl 99` — dense weights (~30 GB) fit in GPU; routed experts (~200+ GB) stream from SSD
- `-ub 1` — required for correct MoE prefill on GPU
- Sidecar is multi-shard — pass the first shard, the tool auto-discovers the rest
This is still llama.cpp.
The goal is a fork you can rebase from upstream while keeping the Flash-MoE patch surface small and explicit.
LLM inference in C/C++
- guide : using the new WebUI of llama.cpp
- guide : running gpt-oss with llama.cpp
- [FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗
- Support for the `gpt-oss` model with native MXFP4 format has been added | PR | Collaboration with NVIDIA | Comment
- Multimodal support arrived in `llama-server`: #12898 | documentation
- VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
- Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
- Hugging Face Inference Endpoints now support GGUF out of the box! ggml-org/llama.cpp#9669
- Hugging Face GGUF editor: discussion | tool
Getting started with llama.cpp is straightforward. Here are several ways to install it on your machine:
- Install `llama.cpp` using brew, nix or winget
- Run with Docker - see our Docker documentation
- Download pre-built binaries from the releases page
- Build from source by cloning this repository - check out our build guide
Once installed, you'll need a model to work with. Head to the Obtaining and quantizing models section to learn more.
Example command:
# Use a local model file
llama-cli -m my_model.gguf
# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide
range of hardware - locally and in the cloud.
- Plain C/C++ implementation without any dependencies
- Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
- AVX, AVX2, AVX512 and AMX support for x86 architectures
- RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V architectures
- 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
- Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
- Vulkan and SYCL backend support
- CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
The llama.cpp project is the main playground for developing new features for the ggml library.
Models
Typically finetunes of the base models below are supported as well.
Instructions for adding support for new models: HOWTO-add-model.md
- LLaMA 🦙
- LLaMA 2 🦙🦙
- LLaMA 3 🦙🦙🦙
- Mistral 7B
- Mixtral MoE
- DBRX
- Jamba
- Falcon
- Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2
- Vigogne (French)
- BERT
- Koala
- Baichuan 1 & 2 + derivations
- Aquila 1 & 2
- Starcoder models
- Refact
- MPT
- Bloom
- Yi models
- StableLM models
- Deepseek models
- Qwen models
- PLaMo-13B
- Phi models
- PhiMoE
- GPT-2
- Orion 14B
- InternLM2
- CodeShell
- Gemma
- Mamba
- Grok-1
- Xverse
- Command-R models
- SEA-LION
- GritLM-7B + GritLM-8x7B
- OLMo
- OLMo 2
- OLMoE
- Granite models
- GPT-NeoX + Pythia
- Snowflake-Arctic MoE
- Smaug
- Poro 34B
- Bitnet b1.58 models
- Flan T5
- Open Elm models
- ChatGLM3-6b + ChatGLM4-9b + GLMEdge-1.5b + GLMEdge-4b
- GLM-4-0414
- SmolLM
- EXAONE-3.0-7.8B-Instruct
- FalconMamba Models
- Jais
- Bielik-11B-v2.3
- RWKV-7
- RWKV-6
- QRWKV-6
- GigaChat-20B-A3B
- Trillion-7B-preview
- Ling models
- LFM2 models
- Hunyuan models
- BailingMoeV2 (Ring/Ling 2.0) models
Bindings
- Python: ddh0/easy-llama
- Python: abetlen/llama-cpp-python
- Go: go-skynet/go-llama.cpp
- Node.js: withcatai/node-llama-cpp
- JS/TS (llama.cpp server client): lgrammel/modelfusion
- JS/TS (Programmable Prompt Engine CLI): offline-ai/cli
- JavaScript/Wasm (works in browser): tangledgroup/llama-cpp-wasm
- Typescript/Wasm (nicer API, available on npm): ngxson/wllama
- Ruby: yoshoku/llama_cpp.rb
- Rust (more features): edgenai/llama_cpp-rs
- Rust (nicer API): mdrokz/rust-llama.cpp
- Rust (more direct bindings): utilityai/llama-cpp-rs
- Rust (automated build from crates.io): ShelbyJenkins/llm_client
- C#/.NET: SciSharp/LLamaSharp
- C#/VB.NET (more features - community license): LM-Kit.NET
- Scala 3: donderom/llm4s
- Clojure: phronmophobic/llama.clj
- React Native: mybigday/llama.rn
- Java: kherud/java-llama.cpp
- Java: QuasarByte/llama-cpp-jna
- Zig: deins/llama.cpp.zig
- Flutter/Dart: netdur/llama_cpp_dart
- Flutter: xuegao-tzx/Fllama
- PHP (API bindings and features built on top of llama.cpp): distantmagic/resonance (more info)
- Guile Scheme: guile_llama_cpp
- Swift srgtuszy/llama-cpp-swift
- Swift ShenghaiWang/SwiftLlama
- Delphi Embarcadero/llama-cpp-delphi
- Go (no CGo needed): hybridgroup/yzma
- Android: llama.android
UIs
(to have a project listed here, it should clearly state that it depends on llama.cpp)
- AI Sublime Text plugin (MIT)
- BonzAI App (proprietary)
- cztomsik/ava (MIT)
- Dot (GPL)
- eva (MIT)
- iohub/collama (Apache-2.0)
- janhq/jan (AGPL)
- johnbean393/Sidekick (MIT)
- KanTV (Apache-2.0)
- KodiBot (GPL)
- llama.vim (MIT)
- LARS (AGPL)
- Llama Assistant (GPL)
- LlamaLib (Apache-2.0)
- LLMFarm (MIT)
- LLMUnity (MIT)
- LMStudio (proprietary)
- LocalAI (MIT)
- LostRuins/koboldcpp (AGPL)
- MindMac (proprietary)
- MindWorkAI/AI-Studio (FSL-1.1-MIT)
- Mobile-Artificial-Intelligence/maid (MIT)
- Mozilla-Ocho/llamafile (Apache-2.0)
- nat/openplayground (MIT)
- nomic-ai/gpt4all (MIT)
- ollama/ollama (MIT)
- oobabooga/text-generation-webui (AGPL)
- PocketPal AI (MIT)
- psugihara/FreeChat (MIT)
- ptsochantaris/emeltal (MIT)
- pythops/tenere (AGPL)
- ramalama (MIT)
- semperai/amica (MIT)
- withcatai/catai (MIT)
- Autopen (GPL)
Tools
- akx/ggify – download PyTorch models from HuggingFace Hub and convert them to GGML
- akx/ollama-dl – download models from the Ollama library to be used directly with llama.cpp
- crashr/gppm – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
- gpustack/gguf-parser - review/check the GGUF file and estimate the memory usage
- Styled Lines (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)
- unslothai/unsloth – 🦥 exports/saves fine-tuned and trained models to GGUF (Apache-2.0)
Infrastructure
- Paddler - Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
- GPUStack - Manage GPU clusters for running LLMs
- llama_cpp_canister - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
- llama-swap - transparent proxy that adds automatic model switching with llama-server
- Kalavai - Crowdsource end to end LLM deployment at any scale
- llmaz - ☸️ Easy, advanced inference platform for large language models on Kubernetes.
- LLMKube - Kubernetes operator for llama.cpp with multi-GPU and Apple Silicon Metal support
Games
- Lucy's Labyrinth - A simple maze game where agents controlled by an AI model will try to trick you.
| Backend | Target devices |
|---|---|
| Metal | Apple Silicon |
| BLAS | All |
| BLIS | All |
| SYCL | Intel and Nvidia GPU |
| OpenVINO [In Progress] | Intel CPUs, GPUs, and NPUs |
| MUSA | Moore Threads GPU |
| CUDA | Nvidia GPU |
| HIP | AMD GPU |
| ZenDNN | AMD CPU |
| Vulkan | GPU |
| CANN | Ascend NPU |
| OpenCL | Adreno GPU |
| IBM zDNN | IBM Z & LinuxONE |
| WebGPU [In Progress] | All |
| RPC | All |
| Hexagon [In Progress] | Snapdragon |
| VirtGPU | VirtGPU APIR |
The Hugging Face platform hosts a number of LLMs compatible with llama.cpp:
You can either manually download the GGUF file or directly use any llama.cpp-compatible models from Hugging Face or other model hosting sites, such as ModelScope, by using this CLI argument: -hf <user>/<model>[:quant]. For example:
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

By default, the CLI downloads from Hugging Face; you can switch to other options with the environment variable MODEL_ENDPOINT. For example, to download model checkpoints from ModelScope or other model-sharing communities, set MODEL_ENDPOINT=https://www.modelscope.cn/.
After downloading a model, use the CLI tools to run it locally - see below.
llama.cpp requires the model to be stored in the GGUF file format. Models in other data formats can be converted to GGUF using the convert_*.py Python scripts in this repo.
The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with llama.cpp:
- Use the GGUF-my-repo space to convert to GGUF format and quantize model weights to smaller sizes
- Use the GGUF-my-LoRA space to convert LoRA adapters to GGUF format (more info: ggml-org/llama.cpp#10123)
- Use the GGUF-editor space to edit GGUF meta data in the browser (more info: ggml-org/llama.cpp#9268)
- Use the Inference Endpoints to directly host `llama.cpp` in the cloud (more info: ggml-org/llama.cpp#9669)
To learn more about model quantization, read this documentation
- Run in conversation mode

  Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding `-cnv` and specifying a suitable chat template with `--chat-template NAME`:

  llama-cli -m model.gguf
  # > hi, who are you?
  # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
  #
  # > what is 1+1?
  # Easy peasy! The answer to 1+1 is... 2!
- Run in conversation mode with a custom chat template

  ```bash
  # use the "chatml" template (use -h to see the list of supported templates)
  llama-cli -m model.gguf -cnv --chat-template chatml

  # use a custom template
  llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
  ```
- Constrain the output with a custom grammar

  ```bash
  llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'

  # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}
  ```
The grammars/ folder contains a handful of sample grammars. To write your own, check out the GBNF Guide.
For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/
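As a quick illustration of authoring a grammar, here is a minimal GBNF file (a sketch; the file name and rule are illustrative, using the `root ::=` start rule and `|` alternation from the GBNF Guide):

```shell
# Write a tiny grammar that forces the model to answer exactly "yes" or "no".
cat > yesno.gbnf <<'EOF'
root ::= "yes" | "no"
EOF

# Usage (requires a built llama-cli and a model; commented out here):
# llama-cli -m model.gguf --grammar-file yesno.gbnf -p 'Is 7 a prime number? Answer:'
```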
A lightweight, OpenAI API compatible, HTTP server for serving LLMs.
- Start a local HTTP server with default configuration on port 8080

  ```bash
  llama-server -m model.gguf --port 8080

  # Basic web UI can be accessed via browser: http://localhost:8080
  # Chat completion endpoint: http://localhost:8080/v1/chat/completions
  ```
- Support multiple users and parallel decoding

  ```bash
  # up to 4 concurrent requests, each with 4096 max context
  llama-server -m model.gguf -c 16384 -np 4
  ```

- Enable speculative decoding

  ```bash
  # the draft.gguf model should be a small variant of the target model.gguf
  llama-server -m model.gguf -md draft.gguf
  ```

- Serve an embedding model

  ```bash
  # use the /embedding endpoint
  llama-server -m model.gguf --embedding --pooling cls -ub 8192
  ```

- Serve a reranking model

  ```bash
  # use the /reranking endpoint
  llama-server -m model.gguf --reranking
  ```

- Constrain all outputs with a grammar

  ```bash
  # custom grammar
  llama-server -m model.gguf --grammar-file grammar.gbnf

  # JSON
  llama-server -m model.gguf --grammar-file grammars/json.gbnf
  ```
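Once a server is running, any OpenAI-style client can talk to it. A minimal sketch using curl (assumes a server on port 8080 as above; the payload fields follow the OpenAI chat-completions schema, and the request itself is commented out because it needs a live server):

```shell
# Build an OpenAI-style chat request for the local server.
cat > req.json <<'EOF'
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user",   "content": "Hello!" }
  ],
  "max_tokens": 64
}
EOF

# Requires a running llama-server:
# curl http://localhost:8080/v1/chat/completions \
#      -H "Content-Type: application/json" -d @req.json
```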
A tool for measuring the perplexity (and other quality metrics) of a model over a given text.
- Measure the perplexity over a text file

  ```bash
  llama-perplexity -m model.gguf -f file.txt

  # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
  # Final estimate: PPL = 5.4007 +/- 0.67339
  ```

- Measure KL divergence

  ```bash
  # TODO
  ```
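In the meantime, a hedged sketch of the usual two-pass workflow (assumption: the `--kl-divergence-base` / `--kl-divergence` flags match your build of `llama-perplexity`; verify with `--help`). First record the reference model's logits, then score a quantized model against them:

```shell
# Two-pass KL-divergence sketch. Both runs need built binaries and models,
# so they are commented out here; file names are illustrative.
BASE_LOGITS=logits.kld

# 1) save per-token logits of the reference (e.g. f16) model:
# llama-perplexity -m model-f16.gguf -f file.txt --kl-divergence-base "$BASE_LOGITS"

# 2) compare a quantized model against the saved logits:
# llama-perplexity -m model-q4.gguf -f file.txt --kl-divergence-base "$BASE_LOGITS" --kl-divergence

echo "base logits file: ${BASE_LOGITS}"
```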
- Run default benchmark

  ```bash
  llama-bench -m model.gguf

  # Output:
  # | model               |       size |     params | backend    | threads |          test |                  t/s |
  # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
  # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
  # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
  #
  # build: 3e0ba0e60 (4229)
  ```
- Basic text completion

  ```bash
  llama-simple -m model.gguf

  # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of
  ```
- Contributors can open PRs
- Collaborators will be invited based on contributions
- Maintainers can push to branches in the `llama.cpp` repo and merge PRs into the `master` branch
- Any help with managing issues, PRs and projects is very appreciated!
- See good first issues for tasks suitable for first contributions
- Read the CONTRIBUTING.md for more information
- Make sure to read this: Inference at the edge
- A bit of backstory for those who are interested: Changelog podcast
If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
- LLaMA:
- GPT-3
- GPT-3.5 / InstructGPT / ChatGPT:
The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example:
```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)
```

The above example uses an intermediate build `b5046` of the library. This can be modified to use a different version by changing the URL and checksum.
Command-line completion is available for some environments.
```bash
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
```

Optionally this can be added to your `.bashrc` or `.bash_profile` to load it automatically. For example:

```bash
$ echo "source ~/.llama-completion.bash" >> ~/.bashrc
```

- yhirose/cpp-httplib - Single-header HTTP server, used by `llama-server` - MIT license
- stb-image - Single-header image format decoder, used by multimodal subsystem - Public domain
- nlohmann/json - Single-header JSON library, used by various tools/examples - MIT License
- miniaudio.h - Single-header audio format decoder, used by multimodal subsystem - Public domain
- subprocess.h - Single-header process launching solution for C and C++ - Public domain
