TurboQuant (ICLR 2026) vector quantization for memory/RAG embedding compression | 5-8x compression, 98%+ recall | numpy only, no GPU
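The description claims 5-8x compression of embeddings with high recall. TurboQuant's actual algorithm is not reproduced here; as a minimal, hypothetical sketch of the general idea, per-vector int8 scalar quantization alone turns float32 embeddings into int8 codes (a 4x reduction, before any further coding) while preserving cosine similarity almost exactly:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-vector int8 quantization of float32 embeddings.

    Illustrative only -- not the TurboQuant algorithm. Each vector is
    scaled so its largest component maps to 127, then rounded to int8;
    one float32 scale per vector is kept for dequantization.
    """
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)          # avoid divide-by-zero
    q = np.round(x / scale).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
emb = rng.standard_normal((4, 128)).astype(np.float32)
q, s = quantize_int8(emb)
rec = dequantize_int8(q, s)
# Cosine similarity between original and reconstructed vectors stays high.
cos = (emb * rec).sum(-1) / (np.linalg.norm(emb, axis=-1) * np.linalg.norm(rec, axis=-1))
```

Reaching 5-8x while keeping 98%+ recall requires a stronger coder than this (e.g. sub-byte codes), which is what the paper addresses.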
Updated Mar 27, 2026 - Python
A zero-dependency, BP-free, forward-only neural network using Dual-Rail Positive ($\mathbb{R}^+$) logic and Hadamard Gating. Where the world is the training set and updates are the Logos.
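The repository's "Hadamard Gating" is not reproduced here, but the fast Walsh-Hadamard transform that such gating schemes typically build on fits in a few lines of numpy. A hedged sketch (an assumption about the building block, not code from the repo):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized) along the last axis.

    Illustrative sketch of the transform behind "Hadamard Gating";
    not code from the repository itself. Length must be a power of two.
    """
    x = np.asarray(x, dtype=np.float64).copy()
    n = x.shape[-1]
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # Butterfly: combine each adjacent pair of h-sized blocks.
        for i in range(0, n, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x

v = np.array([1.0, 0.0, 1.0, 0.0])
t = fwht(v)          # -> [2., 2., 0., 0.]
back = fwht(t)       # H*H = n*I, so this recovers 4*v
```

Since the Hadamard matrix is its own inverse up to a factor of `n`, applying `fwht` twice and dividing by the length recovers the input, which is convenient for forward-only pipelines.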