Zero-knowledge, ephemeral file sharing. Files are encrypted in your browser — the server never sees your content.
nullbox is a self-hostable, security-first file sharing platform inspired by Mozilla Send. It lets you share files with full end-to-end encryption, configurable expiry limits, and zero server-side knowledge of file contents.
Core principles:
- Zero-knowledge — AES-GCM 256-bit encryption runs entirely in the browser. The encryption key lives only in the URL fragment (`#`) and is never sent to the server.
- Ephemeral by design — every file has a download limit and expiry date. Redis TTL handles automatic cleanup.
- API-first — every feature is available via REST API and CLI, not just the web UI.
- Self-hostable — a single `docker compose up` brings up the full stack.
*(Screenshots: Upload · Share · Download)*
```
Browser                        Server                        Storage
───────                        ──────                        ───────
1. Generate AES-GCM 256 key
2. Encrypt file + filename
3. POST /api/v1/upload/init ──► Create Redis record
   ◄── Presigned S3 URL + tokens
4. PUT encrypted blob ──────────────────────────────────────► S3
5. POST /upload/complete ──► Verify S3 object, activate

Share URL:  /d/{fileId}#{key}
                ▲        ▲
                │        │
         server sees   server NEVER
         this part     sees this part
```
The `#` fragment is never included in HTTP requests. The server stores only encrypted blobs and hashed metadata.
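A minimal sketch of how the fragment-only key scheme works in practice (the helper names here are illustrative, not the actual `lib/crypto.ts` API). Browsers strip everything after `#` before issuing an HTTP request, so the server can log the path `/d/{fileId}` but never the key:

```typescript
// Build a share URL of the form /d/{fileId}#{key}. The fragment never
// leaves the browser: it is not sent in the request line, Referer, or logs.
function buildShareUrl(origin: string, fileId: string, keyB64url: string): string {
  return `${origin}/d/${fileId}#${keyB64url}`;
}

// Parse a share URL client-side to recover fileId (public) and key (secret).
function parseShareUrl(url: string): { fileId: string; key: string } {
  const u = new URL(url);
  const fileId = u.pathname.split("/").pop() ?? "";
  const key = u.hash.slice(1); // drop the leading '#'
  return { fileId, key };
}

const example = buildShareUrl("https://nullbox.sh", "abc123def456gh78", "Zm9vYmFyLWtleQ");
// The server only ever sees the path portion: /d/abc123def456gh78
```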
| Layer | Technology |
|---|---|
| Framework | Next.js 16 (App Router) |
| Language | TypeScript 5.7 |
| UI | shadcn/ui + Tailwind CSS v4 |
| Encryption | Web Crypto API — AES-GCM 256-bit |
| Storage | AWS S3 / Cloudflare R2 / MinIO |
| Metadata | Redis (ioredis) |
| Auth | NextAuth.js v5 beta |
| CLI | Go (cobra) |
| Deploy | Docker + docker compose |
| API Gateway | Kong (external, handles rate limiting) |
```
nullbox/
├── app/
│   ├── page.tsx                  ← Upload page (full flow)
│   ├── layout.tsx                ← Root layout + fonts
│   ├── globals.css               ← Tailwind v4 + CSS variables
│   ├── share/[fileId]/page.tsx   ← Post-upload share page
│   ├── d/[token]/page.tsx        ← Download + decrypt page
│   ├── auth/signin/page.tsx      ← SSO login page
│   └── api/
│       ├── auth/[...nextauth]/   ← NextAuth handler
│       ├── health/               ← Docker/Kong health check
│       └── v1/
│           ├── upload/init/      ← Create presigned URL + Redis record
│           ├── upload/complete/  ← Verify S3 upload, activate file
│           ├── file/[id]/        ← GET: metadata / POST: download URL
│           ├── file/[id]/revoke/ ← Delete via owner token
│           └── canary/[id]/      ← Honeypot pixel endpoint
├── components/
│   ├── upload-zone.tsx           ← Drag & drop zone
│   ├── toggle-group.tsx          ← Limit/expiry selectors
│   ├── progress-bar.tsx          ← Upload/decrypt progress
│   └── header.tsx                ← Logo + navigation
├── lib/
│   ├── crypto.ts                 ← AES-GCM 256 (browser-only)
│   ├── redis.ts                  ← All Redis operations + key schema
│   ├── s3.ts                     ← Presigned URLs + S3 operations
│   ├── tokens.ts                 ← HMAC tokens (owner, upload, canary)
│   ├── honeypot.ts               ← Canary logging (file-based)
│   ├── auth.ts                   ← NextAuth v5 config
│   ├── config.ts                 ← Env validation + upload limits
│   └── utils.ts                  ← cn, formatBytes, formatCountdown
├── types/
│   ├── index.ts                  ← FileRecord, API types, Redis key schema
│   └── next-auth.d.ts            ← Session type augmentation
├── cli/
│   ├── main.go                   ← Go CLI (upload/info/revoke/config)
│   ├── go.mod
│   └── README.md
├── Dockerfile                    ← Multi-stage, non-root user
├── docker-compose.yml            ← App + Redis + MinIO
├── docker-compose.dev.yml        ← Hot reload override
├── .env.example                  ← All environment variables
└── logs/
    └── canary.log                ← Honeypot hit log (append-only JSON)
```
- Docker + docker compose
- (For dev) Node.js 22+, Go 1.22+
```bash
# 1. Clone and configure
git clone https://github.com/dotlabshq/nullbox
cd nullbox
cp .env.example .env

# 2. Set required secrets (minimum)
#    Edit .env and fill in:
#    NEXTAUTH_SECRET=$(openssl rand -hex 32)
#    TOKEN_SECRET=$(openssl rand -hex 32)

# 3. Start (includes MinIO for local S3 emulation)
docker compose -f docker-compose.yml -f docker-compose.dev.yml up

# Production build
docker compose up --build
```

Services:
| Service | URL |
|---|---|
| App | http://localhost:3000 |
| MinIO Console (dev) | http://localhost:9001 (minioadmin / minioadmin) |
| Redis | localhost:6379 |
```bash
npm install
cp .env.example .env.local

# Requires: Redis + S3/MinIO configured in .env.local
npm run dev
```

All endpoints are under `/api/v1/`.
`POST /api/v1/upload/init`

Request:

```json
{
  "filenameEnc": "<AES-GCM encrypted filename, base64url>",
  "sizeBytes": 1048576,
  "mimeHash": "<SHA-256 hex of MIME type>",
  "downloadLimit": 5,
  "expiryDays": 7,
  "passwordHash": "<optional: SHA-256 hex of password>"
}
```

Response 201:
```json
{
  "fileId": "abc123def456gh78",
  "uploadUrl": "https://s3.amazonaws.com/...",
  "uploadToken": "one-time-completion-token",
  "ownerToken": "hmac-signed-revoke-token",
  "canaryUrl": "https://nullbox.sh/api/v1/canary/xyz"
}
```

Then:
```
PUT  {uploadUrl}               ← Direct to S3, no server involved
POST /api/v1/upload/complete   ← { uploadToken, fileId }

GET  /api/v1/file/:id          ← Metadata (no auth required)
POST /api/v1/file/:id          ← { passwordHash? } → { downloadUrl, filenameEnc }
POST /api/v1/file/:id/revoke   ← { ownerToken }
GET  /api/v1/canary/:id        ← Honeypot pixel (returns 1x1 PNG)
GET  /api/health               ← { status, redis, timestamp, version }
```
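To make the metadata-hashing step concrete, here is a sketch of assembling an `/api/v1/upload/init` request body. The field names mirror the API reference above; the `buildInitRequest` helper itself is hypothetical. Hashing the MIME type rather than sending it in the clear is what keeps plaintext metadata off the server:

```typescript
import { createHash } from "node:crypto";

interface UploadInitRequest {
  filenameEnc: string;   // AES-GCM encrypted filename, base64url (encrypted client-side)
  sizeBytes: number;
  mimeHash: string;      // SHA-256 hex of the MIME type
  downloadLimit: number;
  expiryDays: number;
  passwordHash?: string; // optional SHA-256 hex of the password
}

function buildInitRequest(
  filenameEnc: string,
  sizeBytes: number,
  mimeType: string,
  opts: { downloadLimit?: number; expiryDays?: number } = {}
): UploadInitRequest {
  return {
    filenameEnc,
    sizeBytes,
    // Only the hash of the MIME type ever reaches the server.
    mimeHash: createHash("sha256").update(mimeType).digest("hex"),
    downloadLimit: opts.downloadLimit ?? 5,
    expiryDays: opts.expiryDays ?? 7,
  };
}
```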
```bash
# Build
cd cli && go build -o nullbox . && sudo mv nullbox /usr/local/bin/

# Configure
nullbox config server https://nullbox.sh
nullbox config apikey sk_your_api_key   # Phase 3

# Upload
nullbox upload report.pdf
nullbox upload secret.zip --downloads 1 --days 3
nullbox upload keys.txt --password "passphrase"
nullbox upload artifact.tar.gz --json
# → {"url":"https://nullbox.sh/d/abc#KEY","fileId":"...","ownerToken":"..."}

# Info & revoke
nullbox info https://nullbox.sh/d/abc123#KEY
nullbox revoke <fileId> <ownerToken>
```

| Threat | Mitigation |
|---|---|
| Server breach → file exposure | Zero-knowledge AES-GCM 256 encryption |
| Brute-force download links | HMAC-signed tokens with embedded expiry |
| Race condition on download limit | Atomic Redis HINCRBY |
| Upload token replay | One-time token via Lua atomic GET+DEL |
| Oversized upload | Content-Length enforcement on presigned URL |
| Metadata leakage | Filename encrypted, MIME type SHA-256 hashed |
| Canary / link leak detection | Honeypot pixel, optional webhook alert |
| Password brute-force | SHA-256 client-side + bcrypt server-side |
| Dangerous file types | Configurable extension blocklist |
| Clickjacking | X-Frame-Options: DENY |
| XSS | Content-Security-Policy header |
| Stale files | Redis TTL auto-expiry + S3 lifecycle rules (safety net) |
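A sketch of the "HMAC-signed tokens with embedded expiry" mitigation from the table above. This is illustrative only — the real scheme lives in `lib/tokens.ts` and may differ in payload layout:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const b64url = (b: Buffer): string => b.toString("base64url");

// Token = base64url(payload) + "." + base64url(HMAC-SHA256(payload)).
// The expiry is inside the signed payload, so it cannot be tampered with.
function signToken(secret: string, fileId: string, expiresAtMs: number): string {
  const payload = b64url(Buffer.from(JSON.stringify({ fileId, expiresAtMs })));
  const mac = b64url(createHmac("sha256", secret).update(payload).digest());
  return `${payload}.${mac}`;
}

function verifyToken(secret: string, token: string): { fileId: string } | null {
  const [payload, mac] = token.split(".");
  if (!payload || !mac) return null;
  const expected = b64url(createHmac("sha256", secret).update(payload).digest());
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  // Constant-time comparison defeats timing side channels on the MAC check.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  const { fileId, expiresAtMs } = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (Date.now() > expiresAtMs) return null; // embedded expiry
  return { fileId };
}
```

Because the server only needs the secret to verify, no per-token state is required until revocation.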
```
nullbox:file:{id}            → Hash (FileRecord, TTL = expiryDays)
nullbox:upload_token:{tok}   → String fileId (TTL = S3 presign + 2min)
nullbox:canary:{id}          → String fileId (TTL = expiryDays)
nullbox:user:{userId}:files  → ZSet fileIds (score = expiresAt ms)
```
See `.env.example` for the full reference.
Required:
| Variable | Description |
|---|---|
| `REDIS_URL` | Redis connection string |
| `S3_BUCKET` | S3/R2/MinIO bucket name |
| `S3_ACCESS_KEY_ID` | S3 credentials |
| `S3_SECRET_ACCESS_KEY` | S3 credentials |
| `TOKEN_SECRET` | HMAC secret — min 32 chars |
| `NEXTAUTH_SECRET` | Auth encryption secret — min 32 chars |
Upload limits (all env-configurable):
| Variable | Default | Description |
|---|---|---|
| `MAX_FILE_SIZE_MB` | 2048 | Max upload size |
| `MAX_DOWNLOAD_COUNT` | 100 | Max selectable download limit |
| `MAX_EXPIRY_DAYS` | 30 | Max expiry duration |
| `DEFAULT_EXPIRY_DAYS` | 7 | Pre-selected expiry |
| `DEFAULT_DOWNLOAD_COUNT` | 5 | Pre-selected download limit |
| `BLOCKED_EXTENSIONS` | exe,bat,... | Comma-separated blocked extensions |
| `AUTH_REQUIRED_ABOVE_MB` | 0 | Require SSO login above this size (0 = disabled) |
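The limits above can be resolved with a small env-parsing helper like the following sketch. The variable names match `.env.example`; the `intFromEnv` helper itself is illustrative (the real validation lives in `lib/config.ts`):

```typescript
// Parse an integer from the environment, falling back to the documented default.
function intFromEnv(
  name: string,
  fallback: number,
  env: Record<string, string | undefined> = process.env
): number {
  const raw = env[name];
  const n = raw === undefined ? NaN : Number.parseInt(raw, 10);
  return Number.isFinite(n) ? n : fallback;
}

// Defaults mirror the table above.
const limits = {
  maxFileSizeMb: intFromEnv("MAX_FILE_SIZE_MB", 2048),
  maxDownloadCount: intFromEnv("MAX_DOWNLOAD_COUNT", 100),
  maxExpiryDays: intFromEnv("MAX_EXPIRY_DAYS", 30),
  defaultExpiryDays: intFromEnv("DEFAULT_EXPIRY_DAYS", 7),
  defaultDownloadCount: intFromEnv("DEFAULT_DOWNLOAD_COUNT", 5),
};
```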
- Upload zone with drag & drop
- Client-side AES-GCM 256-bit encryption (Web Crypto API)
- Direct S3 upload via presigned URLs (encrypted blob never passes through server)
- Redis metadata store with TTL-based auto-expiry
- Download counter (atomic, race-condition safe)
- Password protection (SHA-256 client + bcrypt server)
- One-time upload tokens (Lua atomic GET+DEL)
- HMAC-signed owner tokens for self-service revocation
- Honeypot / canary link system (file-based logging + optional webhook)
- Share page with QR placeholder
- Download page with in-browser decryption
- Security headers (CSP, X-Frame-Options, Referrer-Policy, etc.)
- Multi-stage Dockerfile (non-root user, health check)
- docker compose with MinIO + Redis (auto bucket init)
- Go CLI (upload / info / revoke / config)
- SSO scaffolding (NextAuth v5, Google + GitHub + OIDC)
- Health check endpoint
- **Magic bytes validation** — Verify the real file signature on the server after upload, not just the extension. Block `.exe` disguised as `.jpg`. Use the `file-type` npm package and compare the result against the declared `mimeHash`.
- **Async virus scanning** — Add a `BullMQ` queue + Redis. The file stays `pending` until the scan completes. Support ClamAV (self-hosted) and the VirusTotal API (env-switched). Quarantine on detection, notify `ABUSE_EMAIL`, auto-revoke.
- **Abuse reporting** — Wire up the "Report" button on the download page (placeholder exists). Store reports in a Redis sorted set. Auto-takedown after `AUTO_TAKEDOWN_THRESHOLD` reports. Admin review queue endpoint at `/api/v1/admin/reports`.
- **DMCA / takedown form** — Public form at `/dmca`. Submit → email to `ABUSE_EMAIL` + Redis record.
- **QR code** — Replace the SVG placeholder with `qrcode.react` on the share page.
- **hCaptcha invisible** — Add on the download page to throttle automated access. Env-switchable.
- **S3 lifecycle rule setup** — Document and automate a safety-net lifecycle rule that deletes objects older than `MAX_EXPIRY_DAYS + 1 day` (catches Redis eviction edge cases).
- **Admin dashboard** — Protected page at `/admin` (gated by `ADMIN_SECRET`). Shows: active file count, pending reports, scan queue status, canary hits.
- **Enforce SSO for large files** — Check `AUTH_REQUIRED_ABOVE_MB` in `/upload/init`. Return `401` with `{"error": "login_required"}` if the threshold is exceeded and there is no session.
- **User file dashboard** — Authenticated users see their active files, download counts, and remaining TTL. Backed by the `nullbox:user:{userId}:files` ZSet (already implemented in the Redis layer).
- **SAML / enterprise OIDC** — Uncomment and test the OIDC provider in `lib/auth.ts`. Validate with Keycloak (docker compose service for testing).
- **API key management** — Dashboard UI to generate/revoke API keys. Keys stored as `nullbox:apikey:{hash}` → `{ userId, scopes, createdAt }` in Redis. Scopes: `upload`, `read`, `revoke`, `admin`. Pass as `Authorization: Bearer <key>`.
- **CLI: API key support** — `nullbox config apikey <key>` is already scaffolded. Wire it up to server auth.
- **CLI: binary releases** — GitHub Actions matrix build for linux/darwin/windows × amd64/arm64. Publish to GitHub Releases. Add an install script.
- **CLI: natural language expiry** — `--expires "next friday"` using `github.com/olebedev/when`.
- **OpenAPI spec** — Auto-generate from route types, expose at `/api/v1/openapi.json`. Add Swagger UI at `/api/docs`.
- **File size UX fix** — The download page currently shows the encrypted blob size. Show the estimated plaintext size instead (encrypted − 12-byte IV − 16-byte GCM tag ≈ plaintext).
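The file-size estimate in the last item is simple arithmetic: AES-GCM output carries a 12-byte IV and a 16-byte authentication tag on top of the ciphertext, which is the same length as the plaintext. A sketch:

```typescript
// AES-GCM overhead per blob: 12-byte IV + 16-byte auth tag = 28 bytes.
const GCM_IV_BYTES = 12;
const GCM_TAG_BYTES = 16;

// Estimate the plaintext size from the stored encrypted blob size.
function estimatedPlaintextBytes(encryptedBytes: number): number {
  return Math.max(0, encryptedBytes - GCM_IV_BYTES - GCM_TAG_BYTES);
}
```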
All AI features require `AI_ENABLED=true` in env and an active SSO session. Users explicitly opt in via account settings. No AI runs on unauthenticated uploads.
- **Smart defaults** — Suggest expiry/limit based on the file type detected client-side. PDF → 7d/5dl · Archive → 3d/1dl · Image → 30d/∞
- **Secure share assistant** — Optional text field: "Who are you sending this to?" Claude API call → risk assessment + recommended settings.
- **Auto-description** — Generate a safe, non-identifying share message to use in place of the filename in link previews and emails. Prevents metadata leakage in social previews.
- **Upload anomaly detection** — Analyze upload patterns without touching encrypted content: file size distribution, upload velocity, time-of-day, account age. Flag accounts for review. Model runs server-side, SSO users only.
- **CLI: context-aware suggestions** — `nullbox upload --context "sending to client"` → AI suggests `--downloads 1 --days 3 --password`.
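The "Smart defaults" item needs no AI call at all for the base case — it can start as a purely client-side lookup from file extension to suggested settings. A sketch (the mapping values follow the examples above; the helper names are illustrative):

```typescript
type Suggestion = { expiryDays: number; downloadLimit: number | null }; // null = unlimited

// Extension → suggested settings, per the PDF/Archive/Image examples above.
const SMART_DEFAULTS: Record<string, Suggestion> = {
  pdf: { expiryDays: 7, downloadLimit: 5 },
  zip: { expiryDays: 3, downloadLimit: 1 },
  tar: { expiryDays: 3, downloadLimit: 1 },
  png: { expiryDays: 30, downloadLimit: null },
  jpg: { expiryDays: 30, downloadLimit: null },
};

function suggestSettings(filename: string): Suggestion {
  const ext = filename.split(".").pop()?.toLowerCase() ?? "";
  // Fall back to the platform defaults (7 days / 5 downloads).
  return SMART_DEFAULTS[ext] ?? { expiryDays: 7, downloadLimit: 5 };
}
```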
- Fork the repo
- Create a feature branch: `git checkout -b feat/your-feature`
- Follow the conventions in AGENTS.md
- Type check: `npm run type-check`
- Lint: `npm run lint`
- Open a PR with a clear description of what and why
MIT
zero-knowledge · open source · self-hostable


