One-liner: A self-hosted, dockerized agent control plane that combines a local LLM planning layer (optional Ollama / Mistral-class models) with Bitcoin Lightning–style micropayments (simnet mocks + L402-inspired HTTP).
This is a TypeScript reference you can run locally or in Docker. It tracks the public narrative around AI + Lightning: agents paying for APIs without traditional signup rails, as in Lightning Labs’ Lightning Agent Tools (overview, L402, builder resources). Real lnd is not bundled; mocks keep the stack reproducible for learning and CI.
| Piece | Purpose |
|---|---|
| gRPC `Agent` service | `ExecuteSkill`, `RunNaturalLanguage`, `ListSkills`, `GetBudgetStatus`, `StreamActivity` — `proto/agent.proto`. |
| Seven skills | Same IDs as the toolkit: `lnd`, `lightning-security-module`, `macaroon-bakery`, `lnget`, `aperture`, `lightning-mcp-server`, `commerce`. |
| LLM planner | `LLM_MODE=stub` (fast keyword routing) or `LLM_MODE=ollama` (HTTP `/api/chat` to Ollama for Llama/Mistral-style JSON plans). |
| Budget policy | LN wallet policy (analogue to capping spend in a "budget contract"): max sats per action and per UTC day before the mock may settle an invoice. |
| Weekly scheduler | UTC weekday + time rules; every minute, fires `RunNaturalLanguage` (disable with `DISABLE_WEEKLY_SCHEDULER=1`). |
| `lightning-mock` | Invoices (`lnmock:<uuid>`), pay, balance, payment-hash verification. |
| `api-provider-mock` | HTTP 402 + invoice; retry with `Authorization: Bearer <payment_hash>`. |
| Dashboard | Budget, payment log, skills, full activity, NL task, schedule, manual skill runs. |
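The budget row above can be sketched as a simple gate that both caps must pass before the mock settles an invoice; the field and function names here are illustrative, not the repo's actual budget module:

```typescript
interface Budget {
  maxPerActionSats: number; // cap on any single action
  dailyCapSats: number;     // cap on total spend per UTC day
  spentTodaySats: number;   // running total, reset at UTC midnight
}

// Allow a payment only if it fits both the per-action and the daily cap.
function mayPay(b: Budget, amountSats: number): boolean {
  return (
    amountSats <= b.maxPerActionSats &&
    b.spentTodaySats + amountSats <= b.dailyCapSats
  );
}
```
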
Matches the documented pattern: challenge (402) → pay invoice → proof/token → authenticated retry (toolkit L402 notes, commerce loop). Production stacks use tools like lnget to parse 402 challenges, pay, cache tokens, and retry automatically. This repo’s mock uses Authorization: Bearer <payment_hash> and verifies against lightning-mock—same flow, simplified wire format so you can focus on agent skills. On mainnet Lightning, workloads often stream many small HTLC settles; here one HTTP round-trip stands in for that per resource while keeping the same skill split (lnget, aperture, commerce).
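That challenge → pay → retry loop can be sketched with the HTTP client and invoice payer injected, so it runs against any transport. `fetchPaidResource`, the response shapes, and `payInvoice` are assumptions for illustration, not this repo's exports:

```typescript
type Http = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<{ status: number; json: () => Promise<any> }>;

async function fetchPaidResource(
  http: Http,
  url: string,
  payInvoice: (invoice: string) => Promise<string>, // resolves to a payment_hash
): Promise<any> {
  const first = await http(url);
  if (first.status !== 402) return first.json(); // no challenge, resource is free
  const { invoice } = await first.json();        // challenge carries an lnmock:<uuid> invoice
  const paymentHash = await payInvoice(invoice); // settle against lightning-mock
  const retry = await http(url, {
    headers: { Authorization: `Bearer ${paymentHash}` }, // proof of payment
  });
  if (retry.status !== 200) throw new Error(`retry failed: ${retry.status}`);
  return retry.json();
}
```

A real L402 client would carry a macaroon plus preimage instead of the bare payment hash; the control flow is the same.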
```shell
cd agent-ln
npm ci
npm run verify   # build + full test suite
```

```shell
# terminal 1
npm run start:lightning-mock
# terminal 2
npm run start:api-mock
# terminal 3
npm start
```

Open http://127.0.0.1:8080. gRPC listens on 50051.
- Install Ollama and pull a model (e.g. `llama3.2` or a Mistral tag).
- Start the agent with:

```shell
LLM_MODE=ollama OLLAMA_URL=http://127.0.0.1:11434 OLLAMA_MODEL=llama3.2 npm start
```

If Ollama is down or returns bad JSON, the planner falls back to the stub router.
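That fallback can be sketched as a pure function: accept the model's reply only if it parses to a plan naming a known skill, otherwise keyword-route. The `{skill, payload}` plan shape and function names are assumptions, not the repo's `llm/` module:

```typescript
interface Plan {
  skill: string;
  payload: unknown;
}

const SKILLS = [
  "lnd", "lightning-security-module", "macaroon-bakery",
  "lnget", "aperture", "lightning-mcp-server", "commerce",
];

// Stub router: first skill ID mentioned in the instruction, else commerce.
function stubPlan(instruction: string): Plan {
  const hit = SKILLS.find((s) => instruction.toLowerCase().includes(s));
  return { skill: hit ?? "commerce", payload: {} };
}

// Accept the model's JSON only if it names a known skill; otherwise fall back.
function planFromModel(raw: string, instruction: string): Plan {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed?.skill === "string" && SKILLS.includes(parsed.skill)) {
      return { skill: parsed.skill, payload: parsed.payload ?? {} };
    }
  } catch {
    // malformed JSON falls through to the stub
  }
  return stubPlan(instruction);
}
```
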
| Variable | Default | Description |
|---|---|---|
| `HTTP_PORT` | `8080` | Dashboard + REST |
| `GRPC_PORT` | `50051` | gRPC |
| `LIGHTNING_MOCK_URL` | `http://127.0.0.1:9101` | Mock LN |
| `API_PROVIDER_URL` | `http://127.0.0.1:9102` | L402 mock API |
| `MAX_PER_ACTION_SATS` | `50000` | Per-action budget cap |
| `DAILY_CAP_SATS` | `500000` | Daily budget cap (UTC) |
| `AGENT_NODE_NAME` | `agent-ln-sim` | Node alias in skills |
| `LLM_MODE` | `stub` | `stub` or `ollama` |
| `OLLAMA_URL` | `http://127.0.0.1:11434` | Ollama base URL |
| `OLLAMA_MODEL` | `llama3.2` | Model name |
| `DISABLE_WEEKLY_SCHEDULER` | (unset) | Set to `1` to disable the minute ticker |
| `PROTO_PATH` | `<cwd>/proto/agent.proto` | Proto path (Docker sets `/app/proto/agent.proto`) |
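Reading these variables with the table's defaults could look like the following sketch; the `intEnv` helper and `config` shape are illustrative, not the repo's actual startup code:

```typescript
// Parse an integer env var, falling back when unset or not a number.
function intEnv(name: string, fallback: number): number {
  const raw = process.env[name];
  const n = raw ? Number(raw) : NaN;
  return Number.isFinite(n) ? n : fallback;
}

// Defaults mirror the table above.
const config = {
  httpPort: intEnv("HTTP_PORT", 8080),
  grpcPort: intEnv("GRPC_PORT", 50051),
  lightningMockUrl: process.env.LIGHTNING_MOCK_URL ?? "http://127.0.0.1:9101",
  apiProviderUrl: process.env.API_PROVIDER_URL ?? "http://127.0.0.1:9102",
  maxPerActionSats: intEnv("MAX_PER_ACTION_SATS", 50_000),
  dailyCapSats: intEnv("DAILY_CAP_SATS", 500_000),
  llmMode: process.env.LLM_MODE === "ollama" ? "ollama" : "stub",
};
```
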
```shell
docker compose up --build
```

Then open http://localhost:8080. Exposes ports 9101, 9102, 8080, and 50051.
```shell
grpcurl -plaintext -proto proto/agent.proto -d '{"skill":"lnd","payload_json":"{}"}' \
  127.0.0.1:50051 agentln.Agent/ExecuteSkill
```

Natural language (plans + executes one skill):
```shell
grpcurl -plaintext -proto proto/agent.proto \
  -d '{"instruction":"Order flowers every Friday"}' \
  127.0.0.1:50051 agentln.Agent/RunNaturalLanguage
```

```shell
curl -s http://127.0.0.1:8080/api/budget | jq .
curl -s -X POST http://127.0.0.1:8080/api/execute \
  -H 'Content-Type: application/json' \
  -d '{"skill":"commerce","payload":{}}' | jq .
curl -s -X POST http://127.0.0.1:8080/api/task \
  -H 'Content-Type: application/json' \
  -d '{"instruction":"Buy the premium L402 feed"}' | jq .
curl -s -X POST http://127.0.0.1:8080/api/task \
  -H 'Content-Type: application/json' \
  -d '{"instruction":"Sell 10 GB of unused storage"}' | jq .
curl -s http://127.0.0.1:8080/api/payments | jq .
```
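A schedule rule posted via `/api/schedule` fires when the minute ticker's UTC check matches. A minimal sketch, assuming the JavaScript `Date#getUTCDay` numbering (0 = Sunday, 5 = Friday); the `ruleFires` helper is illustrative, not `scheduler/weekly.ts`:

```typescript
interface WeeklyRule {
  weekday_utc: number; // 0 = Sunday … 6 = Saturday
  hour_utc: number;
  minute_utc: number;
  instruction: string; // forwarded to RunNaturalLanguage when the rule fires
}

// Checked once per ticker tick: exact UTC weekday/hour/minute match.
function ruleFires(rule: WeeklyRule, now: Date): boolean {
  return (
    now.getUTCDay() === rule.weekday_utc &&
    now.getUTCHours() === rule.hour_utc &&
    now.getUTCMinutes() === rule.minute_utc
  );
}
```
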
```shell
curl -s -X POST http://127.0.0.1:8080/api/schedule \
  -H 'Content-Type: application/json' \
  -d '{"weekday_utc":5,"hour_utc":9,"minute_utc":0,"instruction":"Buy premium data"}' | jq .
curl -s http://127.0.0.1:8080/api/schedule | jq .
```

Unit tests:

```shell
npm test
```

CI-friendly check (TypeScript + Vite + Vitest):
```shell
npm run verify
```

End-to-end (starts mocks + agent on ports 39101 / 39102 / 38080, runs commerce, checks the payment log, runs the NL storage seller — does not use the default 8080):
```shell
npm run smoke
```

Full gate (unit + build + e2e):
```shell
npm run verify:all
```

```text
agent-ln/
  proto/agent.proto
  scripts/e2e-smoke.mjs    # optional HTTP integration gate
  src/
    server.ts
    agent/nlRunner.ts
    llm/                   # stub planner + Ollama JSON planner
    scheduler/weekly.ts
    skills/
    budget/
    payments/extract.ts    # payment log projection from activity
    bin/                   # lightning-mock, api-provider-mock
    dashboard/
  test/
```
For education and integration testing only: insecure gRPC, mock money, no chain. For production, use TLS, macaroon / LNC patterns from Lightning Agent Tools, and remote signers—not this demo topology.
MIT
Telegram: @AuraTerminal