Claude analyzes. Gemini continues. Harbor remembers.
Shared context for AI agents — any model, any client, one protocol.
Website · Quick Start · The Problem · Cross-Agent Memory · MCP Integration · Build a Connector · Agent Docs · 中文
Works with Claude Code · Gemini CLI · Codex · Cursor · OpenClaw · Minimax · any MCP client · any function-calling LLM
Harbor is an open-source agent context protocol by oSEAItic. Not affiliated with CNCF Harbor (container registry).
Claude Code analyzes your crypto portfolio. You close the session. Later, a different agent picks up where Claude left off — no copy-paste, no re-prompting.
"Market Snapshot (from Claude Code's analysis 17 mins ago)" — a completely different agent, reading Claude's work from Harbor memory. Zero re-prompting.
Under the hood:
Right: Claude Code saves its analysis via harbor remember. Left: the next agent automatically receives it as meta.context.
Every agent on your machine shares the same memory layer. When one agent saves an insight, every future call to that connector carries it forward — across sessions, across models, across tools.
Three ways to install — pick whichever fits:
1. One-line install:
```bash
curl -fsSL https://harbor.oseaitic.com/install | bash
```

2. Agent Skill / Plugin:
Claude Code / Codex / Cursor / Gemini CLI
```bash
claude plugin marketplace add oSEAItic/harbor && claude plugin install harbor@harbor-marketplace
```

Or point any agent that supports Agent Skills at `skills/harbor/SKILL.md`.
OpenClaw
```bash
# Install the Harbor CLI first
go install github.com/oseaitic/harbor/cmd/harbor@latest

# Install the OpenClaw plugin
openclaw plugins install github.com/oSEAItic/harbor/plugins/harbor-openclaw --link
```

The plugin auto-provisions a free cloud account (50 memories) on first use. It adds `harbor_remember` + `harbor_recall` tools to your agents, syncs context to the workspace on session start, and captures insights before compaction.
Run harbor cloud disable to opt out of cloud sync (local-only mode).
See plugins/harbor-openclaw/README.md for details.
3. Paste into your agent (zero install — agent does everything):
Set up Harbor for this project — instructions at github.com/oSEAItic/harbor/blob/main/AGENTS.md
The agent reads AGENTS.md, installs Harbor, configures MCP, and starts using it — no manual setup needed.
Manual MCP config
Add to your MCP config (claude_desktop_config.json, .cursor/mcp.json, etc.):
```json
{
  "mcpServers": {
    "harbor": {
      "command": "harbor",
      "args": ["mcp"]
    }
  }
}
```

Try a connector:
```bash
harbor install coingecko
harbor get coingecko.prices --param ids=bitcoin --param vs_currencies=usd
```

Call any API through Harbor's credential store — your agent never sees raw API keys:

```bash
harbor fetch https://api.github.com/repos/oSEAItic/harbor --auth github-pat
```

Or via MCP tool: `harbor_http(url="...", auth="github-pat")`. Responses go through the full pipeline — memory, schema learning, context injection.
Already using an MCP server? Wrap it with Harbor — one line, no code changes:
```json
{
  "mcpServers": {
    "notion": {
      "command": "harbor",
      "args": ["proxy", "notion-mcp-server"]
    }
  }
}
```

Harbor re-discovers upstream tools. The agent teaches curation via plain-text hints. Every future call is curated automatically.
Agent frameworks focus on orchestration — which tool to call, when to loop, how to plan. Nobody focuses on what happens after the tool call returns.
Three things go wrong:
Inconsistency. Every API returns a different shape. The model wastes intelligence decoding format instead of reasoning about content.
Noise. A 200-field response might contain 6 fields the agent needs. The rest dilutes attention and inflates cost.
Amnesia. Agent A analyzes data, then the session ends. Agent B starts fresh — re-fetching, re-analyzing, re-reasoning. There's no shared memory between agents, sessions, or models.
Harbor solves all three. Three pillars:
Every response becomes data[] + meta{} + errors[]. The agent parses one format, always knows where data came from, and never fails silently.
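The envelope can be modeled as a plain type on the agent side. A minimal sketch (the `unwrap` helper and the exact `meta` field types are illustrative assumptions, not part of Harbor's SDK):

```typescript
// Harbor's response envelope: one shape for every connector response.
interface HarborEnvelope<T> {
  data: T[];                                          // normalized records
  meta: { source: string; schema?: string };          // provenance, always present
  errors: { code: string; message: string }[];        // empty on success, never silent
}

// Agent-side helper: surface errors explicitly instead of failing silently.
function unwrap<T>(env: HarborEnvelope<T>): T[] {
  if (env.errors.length > 0) {
    throw new Error(`${env.meta.source}: ${env.errors.map(e => e.message).join("; ")}`);
  }
  return env.data;
}
```

Because every connector emits this shape, the same `unwrap` works for CoinGecko, GitHub, or a wrapped MCP server.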
The agent teaches Harbor what matters by calling harbor_learn_schema. Harbor remembers permanently. Four layers of the same data:
| Layer | Content | Use case |
|---|---|---|
| `raw` | Original API response | Full-fidelity debugging |
| `normalized` | Structured `data[]` | Standard agent reasoning |
| `compact` | Summary fields only | Token-efficient contexts |
| `summary` | Natural-language one-liner | Quick scanning, planning |
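The `compact` and `summary` layers can be thought of as projections of the same `normalized` record. A hedged sketch of that relationship (the field picks and the `{field}` template syntax are illustrative, not Harbor's internals):

```typescript
type Rec = Record<string, unknown>;

// compact: keep only the fields the learned schema marked as summary_fields.
function toCompact(rec: Rec, summaryFields: string[]): Rec {
  const out: Rec = {};
  for (const f of summaryFields) if (f in rec) out[f] = rec[f];
  return out;
}

// summary: render a one-liner by filling {field} placeholders from the record.
function toSummary(rec: Rec, template: string): string {
  return template.replace(/\{(\w+)\}/g, (_, f) => String(rec[f] ?? "?"));
}
```

A 200-field record passed through `toCompact` with six summary fields drops to six keys; `toSummary` reduces it to a single scannable line.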
Harbor controls which fields enter the context window based on who's asking. This isn't API-level access control ("can you call this endpoint?") — it's context-level access control ("what do you see when you call it?"). An agent that can't see a field can't leak it.
Harbor never calls an LLM internally. The connected agent is the LLM:
- First call — Harbor returns raw output with a hint: `[Harbor: No schema for "list_files". Call harbor_learn_schema to enable curation.]`
- Agent teaches — calls `harbor_learn_schema` with `summary_fields` and `summary_template`
- Stored permanently — all future calls are curated. Every agent on the machine benefits.
- Drift detection — if upstream changes shape, Harbor detects and re-learns.
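At its core, drift detection can be as simple as diffing the field set of a fresh response against the fields recorded when the schema was learned. A sketch of that idea (not Harbor's actual implementation):

```typescript
// Report which fields appeared or disappeared since the schema was learned.
function detectDrift(
  learnedFields: string[],
  response: Record<string, unknown>,
): { added: string[]; removed: string[] } {
  const current = new Set(Object.keys(response));
  const learned = new Set(learnedFields);
  return {
    added: [...current].filter(f => !learned.has(f)),    // new upstream fields
    removed: [...learned].filter(f => !current.has(f)),  // fields that vanished
  };
}
```

When either list is non-empty, the stored schema no longer matches upstream and a re-learn hint can be emitted to the agent.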
Memory is organized by topic, not by connector. Agents save what they learned, not where they learned it:
```python
harbor_remember(topic="ws-reconnect",          # Save by topic
                note="Root cause: no backoff in ws.go",
                connector="kuse-hive",         # Optional scope
                refs=["mem_abc123"])           # Link related notes

harbor_recall(query="websocket")               # Search by keyword
harbor_recall(id="mem_abc123")                 # Retrieve specific note

harbor_remember(topic="billing", note="...")   # Global note (no connector)
```
Notes from the same agent session are automatically grouped by session_id. Reference edges between notes form a knowledge graph — agents declare which notes are related via --refs.
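A graph of notes linked by refs supports transitive recall: fetching one note can pull in everything reachable from it. A sketch of that traversal under an assumed note shape (the `Note` type here is illustrative):

```typescript
// Assumed note shape: id, topic, free-text note, and ref edges to other notes.
interface Note { id: string; topic: string; note: string; refs: string[] }

// Breadth-first walk: collect a note plus everything reachable via refs.
function recallGraph(store: Map<string, Note>, rootId: string): Note[] {
  const seen = new Set<string>();
  const queue = [rootId];
  const out: Note[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    if (seen.has(id) || !store.has(id)) continue;  // skip cycles and dangling refs
    seen.add(id);
    const n = store.get(id)!;
    out.push(n);
    queue.push(...n.refs);
  }
  return out;
}
```

The `seen` set makes the walk safe even when two notes reference each other.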
```bash
harbor forget mem_abc123                       # Delete by ID
harbor forget --topic ws-reconnect             # Delete by topic
harbor forget --connector kuse-hive --confirm  # Bulk delete
```

Store API keys in Harbor's encrypted keychain — tools never see raw secrets:
```bash
harbor auth tavily         # Store credential (interactive or browser setup)
harbor auth get tavily     # Retrieve for tool use (stdout)
harbor auth sync           # Sync cloud → local keychain
harbor auth list           # List all stored credentials
```

Two modes for tools:
```bash
# Header-based APIs (most REST APIs) — Harbor injects automatically
harbor fetch https://api.github.com/user --auth github-pat

# Body/custom APIs — tool retrieves key, decides injection format
KEY=$(harbor auth get tavily)
curl -X POST api.tavily.com/search -d "{\"api_key\":\"$KEY\",\"query\":\"test\"}"
```

Inject into MCP proxy:
```json
{
  "mcpServers": {
    "github": {
      "command": "harbor",
      "args": ["proxy", "--credential", "GITHUB_TOKEN=github-pat",
               "npx", "@modelcontextprotocol/server-github"]
    }
  }
}
```

Raw CoinGecko response — no schema, no source, no memory:
```json
{"bitcoin":{"usd":67234.12,"usd_market_cap":1320984173209,"usd_24h_vol":28394857234,
"usd_24h_change":2.34,"last_updated_at":1707900000},"ethereum":{"usd":3456.78,...}}
```

Through Harbor — structured, attributed, with cross-session context:
```json
{
  "data": [
    { "id": "bitcoin", "price_usd": 67234.12, "change_24h": 2.34 },
    { "id": "ethereum", "price_usd": 3456.78, "change_24h": -0.82 }
  ],
  "meta": {
    "source": "coingecko",
    "schema": "crypto.prices.v1",
    "fetched_at": "2026-03-05T12:00:00Z",
    "context": { "summary": "BTC dominance rising. SOL volatile." },
    "recalls": [{ "resource": "coingecko.prices", "age": "1d", "summary": "BTC at $71k..." }]
  },
  "errors": []
}
```

Sync credentials, schemas, and memories across machines. Free tier available — no signup required:
```bash
harbor cloud enable    # Auto-provision free account (50 memories, zero config)
harbor cloud status    # Check connection
harbor cloud disable   # Opt out (local-only mode)
```

Or sign in for unlimited:
```bash
harbor login                    # Sign in with email
harbor auth my-connector        # Store API key (client-side encrypted)
harbor auth sync my-connector   # Pull credentials on another machine
harbor publish my-connector     # Publish private connector
```

Credentials are encrypted client-side with AES-256-GCM. The server stores only ciphertext.
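Client-side AES-256-GCM in Node looks roughly like this. This is a generic sketch of the primitive, not Harbor's actual key derivation or wire format:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Encrypt a secret locally so the server only ever sees ciphertext.
function encrypt(key: Buffer, plaintext: string) {
  const iv = randomBytes(12);                          // fresh 96-bit nonce per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ct, tag: cipher.getAuthTag() };         // iv + ciphertext + auth tag travel together
}

function decrypt(key: Buffer, iv: Buffer, ct: Buffer, tag: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);                            // tampered ciphertext throws here
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

The GCM auth tag is what makes the scheme authenticated: a server (or attacker) that flips a single ciphertext byte causes decryption to fail rather than return garbage.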
Harbor ships a native OpenClaw plugin that enhances OpenClaw's memory system:
| Feature | OpenClaw native | + Harbor plugin |
|---|---|---|
| Memory organization | Raw text dump to MEMORY.md | Topic-first, structured, deduplicated |
| Cross-device | Local files only | Cloud sync via Harbor Cloud (free tier: 50 memories) |
| Cross-agent | Per-agent workspace | Shared memory pool across agents |
| Knowledge graph | None | Ref edges between related insights |
| Context loss | Lost on compaction | Auto-captured before compaction |
- `harbor_remember` + `harbor_recall` tools — registered as native OpenClaw agent tools
- Session start hook — writes Harbor's curated context to `memory/harbor-context.md`, auto-indexed by OpenClaw's file watcher
- Pre-compaction hook — captures session context via `harbor remember` before it's lost to compaction
- Auto-provision — creates a free cloud account on first use (opt out with `harbor cloud disable`)
```bash
go install github.com/oseaitic/harbor/cmd/harbor@latest
openclaw plugins install github.com/oSEAItic/harbor/plugins/harbor-openclaw --link
```

In OpenClaw config (`plugins.entries.harbor.config`):
```json
{
  "autoSync": true,
  "autoCapture": true,
  "contextFile": "memory/harbor-context.md"
}
```

Connectors are exec plugins — standalone programs that read arguments and write JSON to stdout. Build in any language.
```typescript
import { parseArgs, output, buildMeta, handleDescribe } from "@oseaitic/harbor-sdk";

const TOOL_SCHEMAS = [{
  type: "function",
  function: {
    name: "myapi_search",
    description: "Search for items",
    parameters: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
  },
}];

async function main() {
  if (handleDescribe(TOOL_SCHEMAS)) return;
  const { resource, params, auth } = parseArgs();
  const raw = await fetch(`https://api.example.com/search?q=${params.query}`);
  const data = await raw.json();
  output({
    data: data.items.map((item: any) => ({
      id: item.id, title: item.name, score: item.relevance,
    })),
    meta: buildMeta({ source: "myapi", connector_version: "0.1.0", schema: "myapi.search.v1" }),
    raw: null,
    errors: [],
  });
}
main();
```

See CONNECTOR_SPEC.md for the full interface contract.
Harbor ships with AGENTS.md — structured instructions that any AI agent can read and act on. It covers every CLI command, every MCP tool, decision trees for common workflows, and error recovery patterns.
If your tool is agent infrastructure, your documentation should be agent-native too.
```
Agent A (Claude Code)    Agent B (Gemini)    Agent C (Cursor)
          \                     |                   /
           \                    |                  /
            +---------- MCP / tool call ----------+
                                |
                                v
                             Harbor
               Normalize --> Curate --> Govern
             Schema Learning <--> Drift Detection
             Memory Store <--> Cross-Session Recall
                                |
                                v
              Connectors + MCP Servers (any source)
                                |
                                v
                   APIs / Databases / Services
```
Apache 2.0 — see LICENSE.
Built by oSEAItic
Data flows like the ocean. Harbor is where agents meet that ocean.


