Embedded retrieval engine for memory-first applications.
One binary. One process. Zero network hops.
Most retrieval stacks bolt together separate services for vectors, text, and graphs — network hops, consistency gaps, and operational complexity that doesn't belong on a phone. Oneiron runs in-process as a Rust library with C FFI bindings. Every query touches a single LMDB environment with ACID transactions. Embed it on iOS, Android, desktop, or a server.
| Signal | Engine | What it finds |
|---|---|---|
| Vector | HNSW (flat NSW), SIMD-accelerated | Semantically similar content |
| Text | BM25 inverted index | Exact keywords and phrases |
| Graph | Personalized PageRank over typed edges | Relationally connected entities |
| Temporal | Bi-temporal range indexes | Events by when they happened or were recorded |
| Phonetic | Code-based posting lists | Fuzzy matches from voice/ASR misspellings |
Any subset of signals can be combined via Reciprocal Rank Fusion with per-signal boosts.
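The fusion step can be sketched as follows. This is a minimal, self-contained illustration of Reciprocal Rank Fusion with per-signal boosts, not Oneiron's internal API; the signal names, boosts, and `k = 60` damping constant are illustrative assumptions:

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion: each signal contributes boost / (k + rank)
/// for every result it ranked, where rank is 1-based. Documents that
/// appear high on several lists accumulate the largest fused scores.
fn rrf_fuse(ranked_lists: &[(f64, Vec<&str>)], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for (boost, list) in ranked_lists {
        for (rank, id) in list.iter().enumerate() {
            *scores.entry((*id).to_string()).or_insert(0.0) +=
                boost / (k + (rank as f64 + 1.0));
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    // Hypothetical ranked lists from two signals: vector (boost 1.0)
    // and text (boost 0.5).
    let vector = (1.0, vec!["a", "b", "c"]);
    let text = (0.5, vec!["b", "a", "d"]);
    for (id, score) in rrf_fuse(&[vector, text], 60.0) {
        println!("{id}: {score:.5}");
    }
}
```

Because RRF operates on ranks rather than raw scores, the signals need no score normalization before fusing; the boosts simply reweight each signal's contribution.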
```rust
use oneiron::{Vault, VaultConfig, EntityId};

let config = VaultConfig::device();
let vault = Vault::open("./my-vault", config)?;

let id = EntityId::now();
vault.put_entity(&id, b"msgpack blob")?;
vault.put_vector(&id, &embedding)?;
```

```shell
cargo build --release
cargo test
```

18 LMDB databases per vault. Atomic multi-database writes via BatchBuilder. MessagePack entity blobs. Context packing into LLM-ready formats.
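The bi-temporal range indexes above depend on order-preserving key encodings, so a byte-wise LMDB range scan returns events in time order. A minimal sketch of the idea, assuming big-endian fixed-width timestamps; this is an illustrative layout, not Oneiron's actual schema (see SCHEMA-DESIGN.md for the real key formats):

```rust
/// Order-preserving bi-temporal key: big-endian valid-time, then
/// transaction-time, then the entity id. Because every field is
/// fixed-width big-endian, lexicographic byte order on the key
/// equals chronological order on (valid_ms, tx_ms).
fn temporal_key(valid_ms: u64, tx_ms: u64, entity: &[u8]) -> Vec<u8> {
    let mut key = Vec::with_capacity(16 + entity.len());
    key.extend_from_slice(&valid_ms.to_be_bytes());
    key.extend_from_slice(&tx_ms.to_be_bytes());
    key.extend_from_slice(entity);
    key
}

fn main() {
    let earlier = temporal_key(1_000, 5_000, b"note-1");
    let later = temporal_key(2_000, 0, b"note-2");
    // Valid-time dominates: byte comparison agrees with event order.
    println!("sorted correctly: {}", earlier < later);
}
```

Indexing both the time an event happened (valid time) and the time it was recorded (transaction time) is what lets a query answer "what did we know as of last Tuesday" with a single prefix-bounded range scan.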
Full details in the design docs:
- SCHEMA-DESIGN.md — database layout, key formats, encoding
- BUILD-PROMPT.md — architecture, algorithms, API surface
- DEPLOYMENT.md — multi-vault deployment, ML infrastructure
Apache 2.0