Installation
SuperLocalMemory V3 installs via npm, pip, or git clone. All three methods give you the same product — choose whichever fits your workflow.
No desktop app (DMG/EXE) for V3. V3 is a CLI + MCP server, not a GUI application. The V2 desktop installers are deprecated. Use `slm dashboard` for the web UI.
| Requirement | Version | Check |
|---|---|---|
| Python | 3.11+ | python3 --version |
| Node.js (for npm install) | 14+ | node --version |
Python 3.11+ is required for the V3 engine. Node.js is only needed if you install via npm.
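The version check in the table can be scripted. A minimal sketch, assuming a POSIX shell; the sample `ver` string stands in for the output of `python3 --version`:

```shell
# Compare a Python version string against the 3.11+ minimum from the table.
# In practice capture it first:  ver="$(python3 --version 2>&1 | awk '{print $2}')"
ver="3.12.4"                 # sample value for illustration
major="${ver%%.*}"
rest="${ver#*.}"
minor="${rest%%.*}"
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 11 ]; }; then
  echo "OK: Python $ver meets the 3.11+ requirement"
else
  echo "Too old: Python $ver, install 3.11 or newer"
fi
```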
One command installs everything — CLI, Python dependencies, and MCP server.

```shell
npm install -g superlocalmemory
```

This automatically:

- Installs the V3 engine and CLI (the `slm` command)
- Auto-installs Python dependencies (numpy, scipy, networkx, sentence-transformers, torch)
- Creates the data directory at `~/.superlocalmemory/`
- Detects V2 installations and guides migration
Then set up:

```shell
slm setup    # Interactive wizard — choose Mode A/B/C, configure provider
slm warmup   # Pre-download embedding model (~500MB, one-time)
```

`slm warmup` is optional. If you skip it, the model downloads automatically on your first `slm remember` or `slm recall`.
```shell
slm status
```

You should see:

```
SuperLocalMemory V3
Mode: A
Provider: none
Base dir: /home/you/.superlocalmemory
Database: /home/you/.superlocalmemory/memory.db
```
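If you want to script that verification (for example in a dotfiles setup script), a rough sketch; the sample text below stands in for real `slm status` output, which you would pipe in directly:

```shell
# Check the status output for a configured Mode line.
# Real usage would be:  slm status | grep -q '^Mode:'
status_output='SuperLocalMemory V3
Mode: A
Provider: none'
if printf '%s\n' "$status_output" | grep -q '^Mode:'; then
  echo "setup looks complete"
else
  echo "run slm setup first"
fi
```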
Alternatively, install via pip:

```shell
pip install superlocalmemory
```

Then run:

```shell
slm setup
slm warmup   # Optional — pre-download embedding model
slm status   # Verify
```

Or install from source:

```shell
git clone https://github.com/qualixar/superlocalmemory.git
cd superlocalmemory
pip install -e .
```

Then:

```shell
slm setup
slm warmup
slm status
```

| Component | Size | When |
|---|---|---|
| Core math libraries (numpy, scipy, networkx) | ~50MB | During install |
| Search engine (sentence-transformers, einops, torch) | ~200MB | During install |
| Embedding model (nomic-ai/nomic-embed-text-v1.5, 768d) | ~500MB | First use or `slm warmup` |
Total disk footprint: ~750MB after first use (mostly PyTorch + embedding model).
RAM usage: ~500-800MB peak during embedding model load, ~20-50MB steady state. CPU-only — no GPU required.
If any dependency fails during install, the installer prints the exact `pip install` command to fix it. BM25 keyword search works even without embeddings — you're never fully blocked.
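One way to see which of the heavy dependencies actually made it in, a sketch using the module names from the table above (`sentence_transformers` is the import name for the sentence-transformers package):

```shell
python3 - <<'EOF'
import importlib.util
# Module names from the dependency table above.
for mod in ("numpy", "scipy", "networkx", "torch", "sentence_transformers"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'installed' if found else 'missing -> pip install ' + mod}")
EOF
```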
macOS:

```shell
npm install -g superlocalmemory
slm setup
```

Works out of the box. Python 3.11+ is available via Homebrew (`brew install python@3.12`) or from python.org.

Linux:

```shell
npm install -g superlocalmemory
slm setup
```

Ensure Python 3.11+ is installed: `sudo apt install python3.11` (Ubuntu) or `sudo dnf install python3.11` (Fedora).

Windows:

```shell
npm install -g superlocalmemory
slm setup
```

Requires Python 3.11+ from python.org. Add Python to PATH during installation.
After installing, connect to your AI IDE:

```json
{
  "mcpServers": {
    "superlocalmemory": {
      "command": "slm",
      "args": ["mcp"]
    }
  }
}
```

Or auto-configure all detected IDEs:

```shell
slm connect          # Configure all detected IDEs
slm connect --list   # See which IDEs are configured
```

See IDE Setup for per-IDE instructions.
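Typos in that JSON are a common source of silent MCP failures, so it can be worth linting the file before pointing your IDE at it. A sketch (the temp path is arbitrary; each IDE stores its MCP config in its own location):

```shell
cat > /tmp/superlocalmemory-mcp.json <<'EOF'
{
  "mcpServers": {
    "superlocalmemory": { "command": "slm", "args": ["mcp"] }
  }
}
EOF
# Python's stdlib json.tool exits non-zero on malformed JSON.
python3 -m json.tool /tmp/superlocalmemory-mcp.json >/dev/null && echo "valid JSON"
```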
If you have V2 (2.8.6 or earlier) installed:

```shell
npm install -g superlocalmemory   # Installs V3 alongside V2
slm migrate                       # Migrates V2 data to V3 schema
```

V3 is a complete architectural reinvention — new mathematical engine, new retrieval pipeline, new storage schema. Your existing data is preserved. A backup is created automatically before migration.

See Migration from V2 for the full guide.
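The migration makes its own backup, but if you want an extra manual copy of the database first, the idea is just a file copy. A sketch using a temp directory so it runs anywhere; the paths are illustrative stand-ins for the real `~/.superlocalmemory/memory.db`:

```shell
base="$(mktemp -d)"                      # stand-in for ~/.superlocalmemory
printf 'v2 data' > "$base/memory.db"
cp "$base/memory.db" "$base/memory.db.pre-v3-backup"
ls "$base" | grep -c 'memory.db'         # counts both: original + backup
rm -rf "$base"
```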
If the `slm` command is not found after installing:

- npm install: Make sure the npm global bin directory is in your PATH. Run `npm bin -g` to find the location (on npm 9+, where `npm bin` was removed, use `npm config get prefix` and append `/bin`).
- pip install: Make sure the Python scripts directory is in your PATH.
If `slm` fails with Python errors:

- Ensure Python 3.11+ is the default: `python3 --version`
- Reinstall: `pip install --force-reinstall superlocalmemory`
If the embedding model download fails:

- Check your internet connection
- Try a manual warmup: `slm warmup`
- If behind a proxy, set the `HTTP_PROXY` and `HTTPS_PROXY` environment variables
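Those proxy settings are plain environment variables inherited by child processes, which is why exporting them before `slm warmup` works. A quick illustration (the proxy URL is a placeholder; use your own):

```shell
export HTTPS_PROXY="http://proxy.example.com:8080"   # placeholder URL
export HTTP_PROXY="$HTTPS_PROXY"
# Any child process sees them -- the model downloader picks them up the same way.
sh -c 'echo "child sees proxy: $HTTPS_PROXY"'
```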
For permission errors during npm install:

- Use `npm install -g superlocalmemory` (not sudo)
- If the npm global directory needs different permissions: `npm config set prefix ~/.npm-global` and add `~/.npm-global/bin` to your PATH
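A quick way to test whether a directory such as `~/.npm-global/bin` is already on your PATH, sketched with sample values so the example is deterministic:

```shell
# In practice use the real values:  dir="$HOME/.npm-global/bin"; path="$PATH"
path="/usr/local/bin:/usr/bin:/home/you/.npm-global/bin"
dir="/home/you/.npm-global/bin"
case ":$path:" in
  *":$dir:"*) echo "already on PATH" ;;
  *)          echo "add it: export PATH=\"$dir:\$PATH\"" ;;
esac
```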
- Quick Start Tutorial — Your first memory in 2 minutes
- Modes Explained — Choose between A (zero-cloud), B (local Ollama), C (full power)
- CLI Reference — All 14 commands with examples
Part of Qualixar | Created by Varun Pratap Bhardwaj
SuperLocalMemory V3 — Your AI Finally Remembers You. 100% local. 100% private. 100% free.