
# oneiron

Embedded retrieval engine for memory-first applications.

One binary. One process. Zero network hops.


*(diagram: oneiron architecture)*

## Why

Most retrieval stacks bolt together separate services for vectors, text, and graphs — network hops, consistency gaps, and operational complexity that doesn't belong on a phone. Oneiron runs in-process as a Rust library with C FFI bindings. Every query touches a single LMDB environment with ACID transactions. Embed it on iOS, Android, desktop, or a server.

*(diagram: oneiron deployment targets)*

## Signals

| Signal | Engine | What it finds |
|---|---|---|
| Vector | HNSW (flat NSW), SIMD-accelerated | Semantically similar content |
| Text | BM25 inverted index | Exact keywords and phrases |
| Graph | Personalized PageRank over typed edges | Relationally connected entities |
| Temporal | Bi-temporal range indexes | Events by when they happened or were recorded |
| Phonetic | Code-based posting lists | Fuzzy matches from voice/ASR misspellings |
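The README doesn't specify which phonetic coder oneiron uses, so as a generic illustration of code-based posting lists, here is a simplified Soundex-style coder: words that sound alike map to the same code, so an index keyed by code retrieves "smyth" for a query of "smith". All names here are illustrative, not oneiron's API.

```rust
/// Simplified Soundex-style code: first letter plus digits for consonant
/// classes, padded to four characters. Words with similar pronunciations
/// collapse to the same code, which becomes the posting-list key.
fn phonetic_code(word: &str) -> String {
    fn class(c: char) -> Option<char> {
        match c.to_ascii_lowercase() {
            'b' | 'f' | 'p' | 'v' => Some('1'),
            'c' | 'g' | 'j' | 'k' | 'q' | 's' | 'x' | 'z' => Some('2'),
            'd' | 't' => Some('3'),
            'l' => Some('4'),
            'm' | 'n' => Some('5'),
            'r' => Some('6'),
            _ => None, // vowels and h/w/y contribute no digit
        }
    }
    let mut chars = word.chars().filter(|c| c.is_ascii_alphabetic());
    let first = match chars.next() {
        Some(c) => c.to_ascii_uppercase(),
        None => return String::new(),
    };
    let mut code = String::from(first);
    let mut prev = class(first);
    for c in chars {
        let cls = class(c);
        if let Some(d) = cls {
            if cls != prev {
                code.push(d); // skip runs of the same consonant class
            }
        }
        prev = cls;
        if code.len() == 4 {
            break;
        }
    }
    while code.len() < 4 {
        code.push('0');
    }
    code
}

fn main() {
    // "smith" and "smyth" land in the same posting list.
    println!("{} {}", phonetic_code("smith"), phonetic_code("smyth"));
}
```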

Any subset of signals can be combined via Reciprocal Rank Fusion with per-signal boosts.
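Reciprocal Rank Fusion scores each document by summing, across signals, `boost / (k + rank)`, where `rank` is the document's 1-based position in that signal's result list and `k` (commonly 60) damps the dominance of top ranks. Oneiron's exact constants and types aren't shown in this README, so the following is a generic, self-contained sketch:

```rust
use std::collections::HashMap;

/// Fuse per-signal ranked lists with Reciprocal Rank Fusion.
/// Each entry is (boost, ranked ids, best first); score(d) = Σ boost / (k + rank).
fn rrf(ranked_lists: &[(f64, Vec<&str>)], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for (boost, list) in ranked_lists {
        for (i, id) in list.iter().enumerate() {
            let rank = (i + 1) as f64; // 1-based rank within this signal
            *scores.entry((*id).to_string()).or_insert(0.0) += boost / (k + rank);
        }
    }
    // Sort by fused score, highest first.
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    // Vector signal favors "a"; text signal favors "b"; equal boosts.
    let fused = rrf(
        &[(1.0, vec!["a", "b", "c"]), (1.0, vec!["b", "c", "a"])],
        60.0,
    );
    for (id, score) in &fused {
        println!("{id}: {score:.5}");
    }
}
```

Per-signal boosts let one signal dominate without discarding the others; a document ranked well by several signals still outscores one that tops a single list.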

## Quick Start

```rust
use oneiron::{Vault, VaultConfig, EntityId};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open (or create) a vault with the device-tuned configuration.
    let config = VaultConfig::device();
    let vault = Vault::open("./my-vault", config)?;

    // Store an entity blob and its embedding under the same id.
    let id = EntityId::now();
    let embedding = vec![0.0f32; 384]; // placeholder; use your model's embedding
    vault.put_entity(&id, b"msgpack blob")?;
    vault.put_vector(&id, &embedding)?;
    Ok(())
}
```

## Building

```sh
cargo build --release
cargo test
```

## Design

18 LMDB databases per vault. Atomic multi-database writes via BatchBuilder. MessagePack entity blobs. Context packing into LLM-ready formats.
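`BatchBuilder`'s actual API isn't shown in this README, so the following is a minimal sketch of the staged-write pattern it names, with an in-memory map standing in for the LMDB environment: writes to multiple databases are staged on the builder and applied together at commit, so readers see all of them or none (LMDB provides this via a single write transaction).

```rust
use std::collections::HashMap;

/// Sketch of an atomic multi-database batch (names are illustrative,
/// not oneiron's API): stage (db, key, value) writes, apply on commit.
struct BatchBuilder {
    staged: Vec<(String, Vec<u8>, Vec<u8>)>,
}

impl BatchBuilder {
    fn new() -> Self {
        Self { staged: Vec::new() }
    }

    /// Stage a write; nothing is visible until commit.
    fn put(mut self, db: &str, key: &[u8], value: &[u8]) -> Self {
        self.staged.push((db.to_string(), key.to_vec(), value.to_vec()));
        self
    }

    /// Apply every staged write in one step. With LMDB this would run
    /// inside one write transaction, making the batch all-or-nothing.
    fn commit(self, store: &mut HashMap<String, HashMap<Vec<u8>, Vec<u8>>>) {
        for (db, key, value) in self.staged {
            store.entry(db).or_default().insert(key, value);
        }
    }
}

fn main() {
    let mut store: HashMap<String, HashMap<Vec<u8>, Vec<u8>>> = HashMap::new();
    // Entity blob and its vector land in different databases, atomically.
    BatchBuilder::new()
        .put("entities", b"id1", b"msgpack blob")
        .put("vectors", b"id1", b"embedding bytes")
        .commit(&mut store);
    println!("databases written: {}", store.len());
}
```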

*(diagram: oneiron storage layout)*

Full details are in the design docs.

## License

Apache 2.0

