Prerequisites
Feature Description
Google recently announced TurboQuant, a new quantization method that compresses the KV cache using polar coordinates, shrinking memory requirements:
https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/
Results with MLX seem promising as well: https://x.com/i/status/2036611007523512397
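The blog post describes the method only at a high level, so here is a hedged, illustrative sketch of the general idea as I understand it, not the actual TurboQuant algorithm: store consecutive float pairs from a KV tensor as low-bit (radius, angle) values instead of two full floats, trading a small reconstruction error for a 4x size reduction. All names and bit-widths below are hypothetical.

```cpp
// Illustrative sketch only -- NOT the real TurboQuant algorithm.
// Stores each (x, y) float pair as an 8-bit radius + 8-bit angle (2 bytes
// instead of 8), then reconstructs the pair from polar coordinates.
#include <cmath>
#include <cstdint>
#include <vector>

static const float PI_F = 3.14159265358979f;

struct PolarQ8 {
    uint8_t r;     // radius, scaled to [0, r_max]
    uint8_t theta; // angle, scaled to [-pi, pi]
};

// Quantize consecutive (x, y) pairs; r_max is a per-block scale
// (hypothetical -- a real scheme would pick it per tensor block).
std::vector<PolarQ8> polar_quantize(const std::vector<float>& v, float r_max) {
    std::vector<PolarQ8> out;
    for (size_t i = 0; i + 1 < v.size(); i += 2) {
        float r = std::hypot(v[i], v[i + 1]);
        float t = std::atan2(v[i + 1], v[i]); // in [-pi, pi]
        uint8_t rq = (uint8_t)std::lround(std::fmin(r / r_max, 1.0f) * 255.0f);
        uint8_t tq = (uint8_t)std::lround((t + PI_F) / (2.0f * PI_F) * 255.0f);
        out.push_back({rq, tq});
    }
    return out;
}

std::vector<float> polar_dequantize(const std::vector<PolarQ8>& q, float r_max) {
    std::vector<float> out;
    for (const PolarQ8& p : q) {
        float r = p.r / 255.0f * r_max;
        float t = p.theta / 255.0f * (2.0f * PI_F) - PI_F;
        out.push_back(r * std::cos(t));
        out.push_back(r * std::sin(t));
    }
    return out;
}
```

With 8 bits for each component the angle resolution is about 0.025 rad, so reconstruction error stays small relative to the radius; the real method presumably makes a much more careful bit allocation.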
Motivation
This would allow running bigger models on smaller hardware.
Possible Implementation
I'm not submitting a PR yet because I'm still experimenting with it (with Claude's help), but in case it's useful, my work-in-progress branch is at https://github.com/mudler/llama.cpp/tree/feat/turbo-quant. It currently builds and starts correctly; I'm still evaluating the results.