The world's first Semantic Density engine for Agentic AI
Eliminating 30–90% of token noise with Zero Semantic Loss.
Transforming chaotic tool output into pure, high-density signal · Powered by Zig + Wasm.
OMNI integrates seamlessly with popular AI coding assistants:
| Agent | Command | Filters |
|---|---|---|
| Claude Code | `omni generate claude-code` | Git, Docker, npm |
| Codex CLI | `omni generate codex` | 25 polyglot filters |
| Antigravity | `omni generate antigravity` | Google Cloud, k8s |
| OpenCode AI | `omni generate opencode` | 67 AI coding filters |
AI agents running on Model Context Protocol (MCP) are limited by the quality of the signal they receive. When Claude runs git diff, docker build, or npm install, it is often flooded with "noise"—redundant lines that dilute its reasoning capacity and bloat your context window.
OMNI is the Semantic Core. It sits between your agent and its tools, refining chaotic streams into high-density intelligence. Our goal isn't just to send fewer tokens, but to ensure every token sent is high-signal.
- Zero Semantic Loss — We don't just truncate; we distill. Your AI gets the full context, without the fluff.
- 30% – 90% Token Efficiency — Achieve massive context savings while improving reasoning signal.
- Semantic Confidence Scoring — Every token is analyzed and routed: Keep, Compress (Summarize), or Drop.
- Cleaner Signal, Better Reasoning — Benchmarks prove LLMs perform better with 50 pure tokens than 500 noisy ones.
- < 1ms Engine Latency — Zero-overhead distillation powered by Zig 0.15.2.
- Trust Boundary — Military-grade security filters with SHA-256 verification.
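To make the Keep / Compress / Drop routing concrete, here is a minimal Python sketch of confidence-based routing. The patterns, thresholds, and function names are illustrative only; OMNI's real scorer is a multi-variable engine implemented in Zig.

```python
import re

def confidence(line: str) -> float:
    """Toy scorer: errors and final results are high-signal, progress chatter is not."""
    if re.search(r"error|fail|warn", line, re.I):
        return 1.0
    if re.search(r"Successfully|passed|built", line):
        return 0.9
    if re.search(r"^\s*--->|^Step \d+/\d+", line):
        return 0.4
    return 0.1

def route(line: str) -> str:
    """Route each line of tool output: keep, compress (summarize), or drop."""
    score = confidence(line)
    if score >= 0.8:
        return "keep"
    if score >= 0.3:
        return "compress"
    return "drop"

for line in ["Step 1/15 : FROM node:18", " ---> 4567f123",
             "npm warn deprecated left-pad", "Successfully built 1234abcd"]:
    print(f"{route(line):8} | {line}")
```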
OMNI provides a powerful, multi-purpose CLI that consolidates all diagnostic and reporting tools:
| Subcommand | Purpose |
|---|---|
| `distill` | The core semantic engine (default behavior via stdin). |
| `density` | Analyzes context gain and "Information per Token" metrics. |
| `monitor` | Unified dashboard for system status, savings trend, and opportunity scanner. |
| `bench` | High-speed benchmark for semantic throughput. |
| `generate` | Outputs templates for Claude Code, Codex, Antigravity, OpenCode. |
| `setup` | Interactive guide for integration and standard aliasing. |
| `update` | Checks for the latest version from GitHub Releases. |
| `uninstall` | Removes OMNI and cleans up all MCP configurations. |
OMNI sits between your AI agent and the outside world — silently distilling chaotic output into pure, high-density signal.
```mermaid
graph TD
    subgraph Output ["Your Tool Output (Noisy)"]
        A["git diff / status<br/>docker build / logs<br/>kubectl get pods<br/>aws ec2 describe<br/>terraform plan<br/>npm install / audit<br/>etc"]
    end
    subgraph OMNI ["OMNI MCP SERVER"]
        direction TB
        B["LRU Cache<br/>< 1ms hit"]
        C["Filter Engine (Zig + Wasm)<br/>Semantic Distillation"]
        D["Pure Signal<br/>Refined Context<br/>(30–90% token reduction)"]
        E["Metrics & Density<br/>(Performance Report)"]
        B --> C
        C --> D
        D --> E
    end
    A -->|"stdin pipe"| B
    E -->|"Pure Signal<br/>Refined Context"| F["AI Agent Platform (Claude/Codex/Antigravity/OpenCode)<br/>Zero Noise reasoning"]

    %% Theme-agnostic professional styling
    style OMNI fill:#1d2b3a,stroke:#334155,stroke-width:2px,color:#f8fafc
    style Output fill:#0f172a,stroke:#1e293b,stroke-width:2px,color:#f8fafc
    style A fill:#3b82f6,stroke:#60a5fa,color:#fff
    style B fill:#8b5cf6,stroke:#a78bfa,color:#fff
    style C fill:#06b6d4,stroke:#22d3ee,color:#fff
    style D fill:#10b981,stroke:#34d399,color:#fff
    style E fill:#f59e0b,stroke:#fbbf24,color:#fff
    style F fill:#f59e0b,stroke:#fbbf24,color:#fff

    %% Link styling (arrows)
    linkStyle default stroke:#94a3b8,stroke-width:2px
```
No filter match → passthrough unchanged (zero overhead)
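The LRU cache layer in the diagram can be sketched with a memoized distiller: identical tool output is served from the cache without re-running the filter engine. This is a conceptual Python sketch, not the actual Zig implementation.

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def distill_cached(raw: str) -> str:
    # Stand-in for the filter engine: keep only the final status line.
    # Memoization means repeated identical output (e.g. an agent polling
    # the same command) hits the cache instead of the engine.
    lines = raw.strip().splitlines()
    return lines[-1] if lines else raw

distill_cached("Step 1/15 : FROM node:18\nSuccessfully built abc")  # miss
distill_cached("Step 1/15 : FROM node:18\nSuccessfully built abc")  # hit
print(distill_cached.cache_info())  # hits=1, misses=1
```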
Before OMNI (LLM sees 600+ tokens of noise):
```
$ docker build .
Step 1/15 : FROM node:18
 ---> 4567f123
Step 2/15 : RUN npm install
... (500 lines of noise) ...
Successfully built 1234abcd
```
After OMNI Distillation (LLM sees 15 tokens of pure signal):
```
Step 1/15 : FROM node:18
Step 2/15 : RUN npm install (CACHED)
Step 3/15 : COPY . .
Successfully built!
```
That's roughly 98% fewer tokens. The LLM gets the same signal (the build succeeded) without the noise.
OMNI is a standard Model Context Protocol (MCP) server.
The OMNI CLI is for humans, but omni-mcp is for your AI. It allows Claude or Antigravity to use OMNI's distillation tools automatically.
To register OMNI as an MCP server for Claude Code automatically, run:
```
omni generate claude-code
```
This command will automatically detect your absolute home path, register OMNI with Claude Code, and merge the polyglot coding filter bundle into your global `~/.omni/omni_config.json`.
Verify with:
```
claude mcp list
```
To register OMNI as an MCP server for Codex automatically, run:
```
omni generate codex
```
This command uses the local Codex CLI integration flow, registers OMNI with `codex mcp add`, and automatically merges the codex-polyglot filter bundle into your global `~/.omni/omni_config.json`.
Verify with:
```
codex mcp list
```
To register OMNI as an MCP server for Antigravity automatically, run:
```
omni generate antigravity
```
This command will automatically locate your `~/.gemini/antigravity/mcp_config.json`, safely merge OMNI's configuration into your existing servers without overwriting them, save the file, and add cloud-native filters for Kubernetes, Terraform, and Docker layers to your global `~/.omni/omni_config.json`.
```
omni generate opencode
```
This command:
- Registers OMNI as an MCP server in `~/.config/opencode/opencode.json`
- Installs 67 AI coding filters for maximum token efficiency
- Supports: npm, yarn, pnpm, bun, tsc, eslint, prettier, vite, webpack, next.js, jest, vitest, pytest, ruff, mypy, cargo, go, docker, kubectl, terraform, gradle, and 50+ more tools
Token-Efficient AI Coding Setup:
```
# Setup OpenCode integration
omni generate opencode

# Then restart OpenCode and verify
opencode mcp list

# Test distillation
git diff | omni
npm run build 2>&1 | omni
```
Use the CLI to generate ready-to-paste configurations:
```
omni generate claude-code    # For Claude Code / Claude CLI
omni generate codex          # For Codex CLI
omni generate antigravity    # For Google Antigravity
omni generate opencode       # For OpenCode AI
omni setup                   # Full interactive guide
```
To ensure your AI agent always uses OMNI for maximum token efficiency, add these Custom Instructions to your agent's settings (e.g., in `CLAUDE.md`, Codex workflow notes, or Antigravity custom instructions):
> OMNI Directive: You are equipped with the OMNI Semantic Distillation layer. Always prioritize `omni_execute`, `omni_read_file`, and `omni_list_dir` for environment interactions. OMNI is the source of truth for high-density signal. Use it to preserve your context window and speed up your reasoning.
By providing these instructions, the agent will favor OMNI's tools, saving you up to 90% in token costs automatically.
OMNI provides 67 specialized filters for OpenCode AI coding workflows, maximizing token efficiency across all development tools.
```
omni generate opencode
```
| Category | Tools | Example Output |
|---|---|---|
| Package Managers | npm, yarn, pnpm, bun, pip, poetry, composer, bundle | `npm: 42 packages added \| 3.2s` |
| TypeScript/JS | tsc, eslint, prettier, vite, webpack, next.js | `tsc: 0 errors` |
| Testing | jest, vitest, pytest, cypress, playwright | `jest: 25/25 passed` |
| Python | ruff, mypy, black, isort | `ruff: all checks passed` |
| Systems | cargo, go, zig, dotnet | `cargo: finished \| 4.2s` |
| Containers | docker, docker-compose, kubectl, helm | `docker: built abc123` |
| Infrastructure | terraform, ansible, packer, skaffold | `terraform: 3 add \| 1 change` |
| Mobile | gradle, android, flutter, react-native | `gradle: BUILD SUCCESSFUL` |
| Security | trivy, semgrep, gitleaks, snyk | `trivy: 5 vulnerabilities` |
Before OMNI (600+ tokens):
```
$ npm install
added 156 packages in 8s
78 packages are looking for funding
  run `npm fund` for details
npm warn deprecated
npm warn invalid
...
```
After OMNI (15 tokens):
```
npm: 156 packages added | 8s
```
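The npm transformation above can be approximated with a single pattern. This hard-coded Python sketch (hypothetical function name) is only a stand-in for OMNI's declarative rule engine:

```python
import re

def distill_npm(output: str) -> str:
    """Toy filter: collapse `npm install` chatter into one dense line,
    matching the `npm: N packages added | Ts` format shown above."""
    m = re.search(r"added (\d+) packages in (\S+)", output)
    if m:
        return f"npm: {m.group(1)} packages added | {m.group(2)}"
    return output  # no filter match: passthrough unchanged, as OMNI does

raw = "added 156 packages in 8s\n78 packages are looking for funding\n..."
print(distill_npm(raw))  # npm: 156 packages added | 8s
```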
Add to your OpenCode configuration for maximum efficiency:
```json
{
  "instructions": [
    "Always pipe tool output through OMNI for distillation",
    "Use: command | omni for git, docker, npm, pytest outputs"
  ]
}
```
OMNI serves as the Intelligent Nerve Center for your development environment, acting as a high-performance wrapper that ensures only high-value information reaches your AI.
Transform any native command into an AI-ready signal instantly. OMNI intercepts the stream and refines it in real-time without adding overhead.
```
omni -- git status
# Result: Aggregated repository health (30x more dense)

omni -- docker build .
# Result: Cleaned build layers, surfacing only critical transition states
```
Leverage the OMNI Engine's specialized algorithms to convert chaotic logs into structured intelligence.
- Precision Rewrite: OMNI doesn't truncate data; it semantically analyzes the stream to retain "intent-critical" details.
- Context Optimization: By compressing 10,000 lines into a 20-line distillation, OMNI effectively expands your AI's reasoning capacity.
Prove the efficiency of the OMNI engine:
```
omni bench 1000
```
Shows: OMNI processes thousands of requests per second with sub-millisecond latency (< 0.01ms), meaning it adds zero noticeable overhead when used as a proxy.
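A toy harness illustrates what the benchmark measures: mean latency per call. This is not OMNI's benchmark code, and the numbers depend entirely on the function under test.

```python
import time

def bench(fn, n: int = 1000) -> float:
    """Return mean latency per call in milliseconds over n iterations."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) * 1000 / n

# Benchmark a trivial stand-in for the distillation call.
latency_ms = bench(lambda: "Successfully built abc".split())
print(f"{latency_ms:.5f} ms/call")
```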
OMNI exposes high-density tools that replace standard agent context commands:
| Tool | Purpose | Token Saving |
|---|---|---|
| `omni_list_dir` | Dense, comma-separated directory listing (no JSON overhead). | High |
| `omni_view_file` | Range-based file reading + Zig distillation. | Massive |
| `omni_grep_search` | High-density semantic search results. | High |
| `omni_find_by_name` | Recursive flat file discovery. | Medium |
| `omni_add_filter` | Add declarative rules without coding. | N/A |
| `omni_apply_template` | Apply pre-defined bundles (K8s, TF, Node, Codex, Polyglot). | N/A |
| `omni_execute` | Run ANY command and distill its output. | Massive (30–90%) |
| `omni_read_file` | Full file distillation (great for logs/SQL/JSON). | Massive |
| `omni_density` | Measure gain and reduction metrics. | N/A |
| `omni_trust` | Trust a project's local `omni_config.json` before loading. | N/A |
| `omni_trust_hooks` | Verify SHA-256 hashes of hook scripts. | N/A |
You can extend OMNI's intelligence without touching a single line of Zig.
The agent will use omni_add_filter to update your configuration instantly. It automatically prioritizes your project-local omni_config.json if it exists, otherwise it updates your global ~/.omni/omni_config.json.
Apply bundles of pre-defined rules for your stack via MCP tool:
- `omni_apply_template(template="terraform")`
- `omni_apply_template(template="codex-advanced")` for `tsc`, `eslint`, `jest`, and `vitest` summaries commonly produced in Codex-driven workflows.
- `omni_apply_template(template="codex-polyglot")` to cover mixed-language Codex loops across JS/TS, Python, Rust, Go, Zig, and pnpm install logs.
- `omni_apply_template(template="opencode-advanced")` for comprehensive AI coding tools (npm, yarn, pnpm, tsc, eslint, jest, vitest, docker, kubectl, and 60+ more).
- Language templates: `pytest-advanced`, `ruff-advanced`, `cargo-test-advanced`, `pnpm-advanced`, `zig-advanced`, `go-test-advanced`.
- Supported templates: `kubernetes`, `terraform`, `node-verbose`, `docker-layers`, `security-audit`, `aws-cloud`, `codex-advanced`, `pytest-advanced`, `ruff-advanced`, `cargo-test-advanced`, `pnpm-advanced`, `zig-advanced`, `go-test-advanced`, `codex-polyglot`, `opencode-advanced`.
See the DSL_GUIDE.md for full documentation and examples.
OMNI uses a dual-layer, additive configuration system to provide both global consistency and project-specific flexibility.
| Layer | Path | Purpose |
|---|---|---|
| Global | `~/.omni/omni_config.json` | Your primary rules, shared across all projects and agents. |
| Local | `./omni_config.json` | Project-specific overrides or additional rules (e.g., custom masking for a specific repo). |
- OMNI first loads the Global configuration.
- It then loads the Local configuration (if present in your current directory).
- The rules are combined. This means rules from both your global setup and your specific project will be applied simultaneously.
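The additive load order can be sketched in a few lines of Python. This is illustrative only; the key names follow the example configuration in this section, and the real loader lives inside OMNI.

```python
import json
import tempfile
from pathlib import Path

def load_config(global_path: Path, local_path: Path) -> dict:
    """Additive merge: global rules load first, then local rules are appended.
    Neither layer overwrites the other."""
    merged = {"rules": [], "dsl_filters": []}
    for path in (global_path, local_path):   # order matters: global, then local
        if path.exists():
            cfg = json.loads(path.read_text())
            merged["rules"].extend(cfg.get("rules", []))
            merged["dsl_filters"].extend(cfg.get("dsl_filters", []))
    return merged

with tempfile.TemporaryDirectory() as d:
    g, loc = Path(d, "global.json"), Path(d, "local.json")
    g.write_text(json.dumps({"rules": [{"name": "mask_token"}]}))
    loc.write_text(json.dumps({"rules": [{"name": "repo_rule"}]}))
    print([r["name"] for r in load_config(g, loc)["rules"]])
    # ['mask_token', 'repo_rule']
```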
You can manually edit these files to define rules (exact matching) or dsl_filters (complex semantic logic):
```json
{
  "rules": [
    { "name": "mask_token", "match": "api_key:", "action": "mask" }
  ],
  "dsl_filters": [
    { "name": "my-custom-sig", "pattern": "MY_SIGNAL:", "confidence": 1.0 }
  ]
}
```
> **Tip**
> Use `omni generate config` to output a complete, well-commented starter template for your configuration.
| Event | Action |
|---|---|
| Installation | The install.sh script sets up your global ~/.omni/omni_config.json. |
| AI Tooling | Using MCP tools like omni_add_filter or omni_apply_template will automatically create the file if it doesn't exist. |
| Manual Edit | You can edit both global and local files manually at any time using any text editor. |
| AI Proxy | AI agents can dynamically add project-specific rules via the OMNI MCP interface without you leaving the chat. |
OMNI includes three layers of security to protect your AI agent from malicious inputs:
OMNI will not load project-local omni_config.json until you explicitly trust it. This prevents a cloned repository from injecting malicious filter rules.
How to use:
- Clone or open a project that has `omni_config.json`.
- OMNI will log: `⚠ Local config not trusted. Run omni_trust to review and trust.`
- Call the `omni_trust` MCP tool; it shows the config contents and SHA-256 hash.
- The project is now trusted. If you later edit the config, run `omni_trust` again.
OMNI automatically strips 50+ dangerous environment variables (BASH_ENV, NODE_OPTIONS, LD_PRELOAD, DYLD_INSERT_LIBRARIES, etc.) from all child processes. No configuration needed — this is always active.
Custom hook scripts in ~/.omni/hooks/ are protected by SHA-256 fingerprinting.
How to use:
- Add Hooks: Place your custom scripts in `~/.omni/hooks/`.
- Verify & Trust: Use the `omni_trust_hooks` MCP tool to inspect and approve the scripts. This generates SHA-256 signatures in `~/.omni/hooks.sha256`.
- Startup Protection: Every time OMNI starts, it re-calculates hashes and compares them to the trusted signatures.
- Automatic Lockdown: If any file is modified or an untrusted file is added, OMNI will immediately exit.
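The fingerprint check can be sketched in Python (hypothetical helper names; OMNI's actual startup check exits on any mismatch rather than just reporting it):

```python
import hashlib
import tempfile
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 hash of a hook script's current contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def untrusted_hooks(hooks_dir: Path, trusted: dict) -> list:
    """Names of hooks whose hash does not match a trusted signature."""
    return [p.name for p in sorted(hooks_dir.iterdir())
            if trusted.get(p.name) != fingerprint(p)]

with tempfile.TemporaryDirectory() as d:
    hook = Path(d, "git-summary.sh")
    hook.write_text("#!/bin/sh\ngit log --oneline -5\n")
    trusted = {hook.name: fingerprint(hook)}   # the omni_trust_hooks step
    print(untrusted_hooks(Path(d), trusted))   # [] -> startup allowed
    hook.write_text("#!/bin/sh\necho tampered\n")
    print(untrusted_hooks(Path(d), trusted))   # ['git-summary.sh'] -> lockdown
```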
Important
Modifying Hook Scripts?
If you edit the content of your hook scripts, OMNI will block startup due to the fingerprint mismatch. You must run the omni_trust_hooks MCP tool again to re-authorize the updated scripts.
- Create: Store your custom script (e.g., `git-summary.sh`) in `~/.omni/hooks/`.
- Test: Verify the script works manually in your terminal.
- Authorize: Run the `omni_trust_hooks` tool to sign the script's current state.
- Execute: Your AI Agent can now safely run the script via `omni_execute`: `{ "command": "~/.omni/hooks/git-summary.sh" }`
To manually audit your hook integrity, you can run:
```
node dist/index.js --test-integrity
```
See SECURITY.md for more details.
OMNI is obsessed with efficiency. Use these tools to see how much you're saving:
Run the monitor to see a breakdown of tokens saved, filtering latency, and efficiency per agent:
```
omni monitor
```
Shows: Total commands processed, efficiency rating, and detailed filter/agent breakdown.
Advanced Views:
- `omni monitor --trend`: Displays an ASCII chart of your daily distillation savings.
- `omni monitor --log`: Shows recent tool calls and filtering results in a timeline.
- `omni monitor --by week`: Aggregated metrics structured by week (or `day`, `month`).
- `omni monitor scan`: Analyzes your shell history for tools that could benefit from OMNI.
Measure the "Information per Token" gain for any text file or output:
```
omni density < build_logs.txt
```
Output: Calculates the exact Context Density Gain (e.g., 4.5x improvement).
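As a rough model, the density gain is the ratio of token counts before and after distillation. This whitespace-token Python sketch is a simplification; the real `omni density` metric weighs information per token, not just counts.

```python
def density_gain(before: str, after: str) -> float:
    """Toy Context Density Gain: token count before / after distillation,
    using whitespace tokens as a crude stand-in for a real tokenizer."""
    return len(before.split()) / max(len(after.split()), 1)

noisy = " ".join(["chatter"] * 600)     # a 600-token build log
distilled = " ".join(["signal"] * 15)   # its 15-token summary
print(f"{density_gain(noisy, distilled):.1f}x gain")  # 40.0x gain
```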
| Pillar | Description | Value |
|---|---|---|
| Purity | Zero Semantic Loss via multi-variable confidence scoring. | Clean Signal |
| Density | Focus on "Information per Token" rather than simple truncation. | High Context |
| Speed | Zig-powered native engine with sub-millisecond response. | < 1ms Latency |
| Trust | SHA-256 verified project-local rules and security boundaries. | Secure |
| Portability | 68KB universal Wasm binary runs on any runtime (Node, Web, Edge). | Universal |
While other tools focus on simple filtering, OMNI provides a full semantic layer:
| Feature | OMNI | Others |
|---|---|---|
| Processing Engine | Zig (Native) | Python / Go / Rust |
| Context Strategy | Semantic Distillation | Regex / Passthrough |
| Wait Overhead | Zero (<1ms) | Visible (10ms - 100ms) |
| Governance | SHA-256 Trust Boundary | None / Manual |
| Deployment | 68KB Wasm / Universal | Large Native Binaries |
- Context IQ: OMNI doesn't just shorten text; it re-writes it semantically for the LLM based on agentic intent.
- Performance Supremacy: By using a persistent Wasm instance, OMNI provides instant responses without blocking the main agent execution.
- Local-First Privacy: Every byte of your code and tool output stays on your machine.
- The "Distillation" Effect: In your AI's tool output, raw logs are transformed into a 10-line summary.
- Faster Response Times: LLM processes 150x fewer tokens, giving you significantly faster replies.
- Real-time Reports: Run `omni monitor` at any time to see the global efficiency health.
- Density Metrics: Use `omni density < logs.txt` to calculate your exact Context Density Gain.
```
brew install fajarhide/tap/omni
```
```
curl -fsSL https://omni.weekndlabs.com/install | sh
```
For manual build instructions, see INSTALL.md.
```
omni update      # Check for the latest version
omni uninstall   # Remove OMNI and clean up all configs
```
Explore the full potential of OMNI with these specialized guides:
| Document | Purpose | Audience |
|---|---|---|
| QUICKSTART | Install and run OMNI in 60 seconds. | Everyone |
| CLAUDE.md | Full development guide & repo standards. | Developers |
| TESTS.md | Infrastructure details & how to add tests. | QA & Contributors |
| DSL_GUIDE.md | Create custom semantic rules without coding. | Power Users / Agents |
| INSTALL.md | Manual build and edge deployment instructions. | SysAdmins |
MIT © Fajar Hidayat
