nullbox

Zero-knowledge, ephemeral file sharing. Files are encrypted in your browser — the server never sees your content.

License: MIT · Next.js · TypeScript


What is nullbox?

nullbox is a self-hostable, security-first file sharing platform inspired by Mozilla Send. It lets you share files with full end-to-end encryption, configurable expiry limits, and zero server-side knowledge of file contents.

Core principles:

  • Zero-knowledge — AES-GCM 256-bit encryption runs entirely in the browser. The encryption key lives only in the URL fragment (#) and is never sent to the server.
  • Ephemeral by design — every file has a download limit and expiry date. Redis TTL handles automatic cleanup.
  • API-first — every feature is available via REST API and CLI, not just the web UI.
  • Self-hostable — a single docker compose up brings up the full stack.

Screenshots

Upload page · Share page · Download page

Architecture

Browser                          Server                        Storage
───────                          ──────                        ───────
1. Generate AES-GCM 256 key
2. Encrypt file + filename
3. POST /api/v1/upload/init  ──► Create Redis record
                             ◄── Presigned S3 URL + tokens
4. PUT encrypted blob ──────────────────────────────────────► S3
5. POST /upload/complete     ──► Verify S3 object, activate

Share URL: /d/{fileId}#{key}
                 ▲       ▲
                 │       │
           server sees  server NEVER
           this part    sees this part

The URL fragment is never included in HTTP requests; the server stores only encrypted blobs and hashed metadata.
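As a sketch, the download page can recover both halves of the share URL without the key ever leaving the client (the helper name is illustrative):

```typescript
// Split a share URL into the server-visible fileId and the client-only key.
// The fragment (everything after '#') is stripped by the browser before any
// HTTP request is made, so only JavaScript on the page ever sees it.
function parseShareUrl(url: string): { fileId: string; keyB64: string } {
  const u = new URL(url);
  const fileId = u.pathname.split("/").pop() ?? "";
  const keyB64 = u.hash.slice(1); // drop the leading '#'
  return { fileId, keyB64 };
}
```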


Tech Stack

Layer         Technology
─────         ──────────
Framework     Next.js 16 (App Router)
Language      TypeScript 5.7
UI            shadcn/ui + Tailwind CSS v4
Encryption    Web Crypto API — AES-GCM 256-bit
Storage       AWS S3 / Cloudflare R2 / MinIO
Metadata      Redis (ioredis)
Auth          NextAuth.js v5 beta
CLI           Go (cobra)
Deploy        Docker + docker compose
API Gateway   Kong (external, handles rate limiting)

Project Structure

nullbox/
├── app/
│   ├── page.tsx                     ← Upload page (full flow)
│   ├── layout.tsx                   ← Root layout + fonts
│   ├── globals.css                  ← Tailwind v4 + CSS variables
│   ├── share/[fileId]/page.tsx      ← Post-upload share page
│   ├── d/[token]/page.tsx           ← Download + decrypt page
│   ├── auth/signin/page.tsx         ← SSO login page
│   └── api/
│       ├── auth/[...nextauth]/      ← NextAuth handler
│       ├── health/                  ← Docker/Kong health check
│       └── v1/
│           ├── upload/init/         ← Create presigned URL + Redis record
│           ├── upload/complete/     ← Verify S3 upload, activate file
│           ├── file/[id]/           ← GET: metadata / POST: download URL
│           ├── file/[id]/revoke/    ← Delete via owner token
│           └── canary/[id]/         ← Honeypot pixel endpoint
├── components/
│   ├── upload-zone.tsx              ← Drag & drop zone
│   ├── toggle-group.tsx             ← Limit/expiry selectors
│   ├── progress-bar.tsx             ← Upload/decrypt progress
│   └── header.tsx                   ← Logo + navigation
├── lib/
│   ├── crypto.ts                    ← AES-GCM 256 (browser-only)
│   ├── redis.ts                     ← All Redis operations + key schema
│   ├── s3.ts                        ← Presigned URLs + S3 operations
│   ├── tokens.ts                    ← HMAC tokens (owner, upload, canary)
│   ├── honeypot.ts                  ← Canary logging (file-based)
│   ├── auth.ts                      ← NextAuth v5 config
│   ├── config.ts                    ← Env validation + upload limits
│   └── utils.ts                     ← cn, formatBytes, formatCountdown
├── types/
│   ├── index.ts                     ← FileRecord, API types, Redis key schema
│   └── next-auth.d.ts               ← Session type augmentation
├── cli/
│   ├── main.go                      ← Go CLI (upload/info/revoke/config)
│   ├── go.mod
│   └── README.md
├── Dockerfile                       ← Multi-stage, non-root user
├── docker-compose.yml               ← App + Redis + MinIO
├── docker-compose.dev.yml           ← Hot reload override
├── .env.example                     ← All environment variables
└── logs/
    └── canary.log                   ← Honeypot hit log (append-only JSON)

Quick Start

Prerequisites

  • Docker + docker compose
  • (For dev) Node.js 22+, Go 1.22+

Run with Docker

# 1. Clone and configure
git clone https://github.com/dotlabshq/nullbox
cd nullbox
cp .env.example .env

# 2. Set required secrets (minimum)
# Edit .env and fill in:
#   NEXTAUTH_SECRET=$(openssl rand -hex 32)
#   TOKEN_SECRET=$(openssl rand -hex 32)

# 3. Start (includes MinIO for local S3 emulation)
docker compose -f docker-compose.yml -f docker-compose.dev.yml up

# Production build
docker compose up --build

Services:

Service               URL
───────               ───
App                   http://localhost:3000
MinIO Console (dev)   http://localhost:9001 (minioadmin / minioadmin)
Redis                 localhost:6379

Run Locally (without Docker)

npm install
cp .env.example .env.local
# Requires: Redis + S3/MinIO configured in .env.local
npm run dev

API Reference

All endpoints are under /api/v1/.

Upload Flow

POST /api/v1/upload/init

Request:

{
  "filenameEnc": "<AES-GCM encrypted filename, base64url>",
  "sizeBytes": 1048576,
  "mimeHash": "<SHA-256 hex of MIME type>",
  "downloadLimit": 5,
  "expiryDays": 7,
  "passwordHash": "<optional: SHA-256 hex of password>"
}

Response 201:

{
  "fileId": "abc123def456gh78",
  "uploadUrl": "https://s3.amazonaws.com/...",
  "uploadToken": "one-time-completion-token",
  "ownerToken": "hmac-signed-revoke-token",
  "canaryUrl": "https://nullbox.sh/api/v1/canary/xyz"
}

Then:

PUT  {uploadUrl}                   ← Direct to S3, no server involved
POST /api/v1/upload/complete       ← { uploadToken, fileId }

File Access

GET  /api/v1/file/:id              ← Metadata (no auth required)
POST /api/v1/file/:id              ← { passwordHash? } → { downloadUrl, filenameEnc }
POST /api/v1/file/:id/revoke       ← { ownerToken }

Other

GET /api/v1/canary/:id             ← Honeypot pixel (returns 1x1 PNG)
GET /api/health                    ← { status, redis, timestamp, version }

CLI

# Build
cd cli && go build -o nullbox . && sudo mv nullbox /usr/local/bin/

# Configure
nullbox config server https://nullbox.sh
nullbox config apikey sk_your_api_key   # Phase 3

# Upload
nullbox upload report.pdf
nullbox upload secret.zip --downloads 1 --days 3
nullbox upload keys.txt --password "passphrase"
nullbox upload artifact.tar.gz --json
# → {"url":"https://nullbox.sh/d/abc#KEY","fileId":"...","ownerToken":"..."}

# Info & revoke
nullbox info https://nullbox.sh/d/abc123#KEY
nullbox revoke <fileId> <ownerToken>

Security

Threat                             Mitigation
──────                             ──────────
Server breach → file exposure      Zero-knowledge AES-GCM 256 encryption
Brute-force download links         HMAC-signed tokens with embedded expiry
Race condition on download limit   Atomic Redis HINCRBY
Upload token replay                One-time token via Lua atomic GET+DEL
Oversized upload                   Content-Length enforcement on presigned URL
Metadata leakage                   Filename encrypted, MIME type SHA-256 hashed
Canary / link leak detection       Honeypot pixel, optional webhook alert
Password brute-force               SHA-256 client-side + bcrypt server-side
Dangerous file types               Configurable extension blocklist
Clickjacking                       X-Frame-Options: DENY
XSS                                Content-Security-Policy header
Stale files                        Redis TTL auto-expiry + S3 lifecycle rules (safety net)
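The "HMAC-signed tokens with embedded expiry" idea can be sketched as follows (illustrative only — the project's lib/tokens.ts may use a different encoding; payloads here must not contain '.'):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Token layout: "<payload>.<expiresAtMs>.<hmac-sha256(base64url)>".
function signToken(secret: string, payload: string, expiresAtMs: number): string {
  const body = `${payload}.${expiresAtMs}`;
  const mac = createHmac("sha256", secret).update(body).digest("base64url");
  return `${body}.${mac}`;
}

// Returns the payload on success, or null if forged / tampered / expired.
function verifyToken(secret: string, token: string, nowMs = Date.now()): string | null {
  const i = token.lastIndexOf(".");
  if (i < 0) return null;
  const body = token.slice(0, i);
  const mac = Buffer.from(token.slice(i + 1));
  const expected = Buffer.from(
    createHmac("sha256", secret).update(body).digest("base64url"),
  );
  // Constant-time comparison; length check first since timingSafeEqual requires it.
  if (mac.length !== expected.length || !timingSafeEqual(mac, expected)) return null;
  const [payload, exp] = body.split(".");
  if (Number(exp) < nowMs) return null; // embedded expiry has passed
  return payload;
}
```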

Redis Key Schema

nullbox:file:{id}           → Hash   (FileRecord, TTL = expiryDays)
nullbox:upload_token:{tok}  → String fileId  (TTL = S3 presign + 2min)
nullbox:canary:{id}         → String fileId  (TTL = expiryDays)
nullbox:user:{userId}:files → ZSet   fileIds (score = expiresAt ms)
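Key-builder helpers matching this schema might look like the following (assumed shape — the real lib/redis.ts may structure them differently):

```typescript
// Centralized key builders so the schema lives in one place and typos
// can't produce orphaned Redis keys.
const keys = {
  file: (id: string) => `nullbox:file:${id}`,
  uploadToken: (tok: string) => `nullbox:upload_token:${tok}`,
  canary: (id: string) => `nullbox:canary:${id}`,
  userFiles: (userId: string) => `nullbox:user:${userId}:files`,
};
```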

Environment Variables

See .env.example for the full reference.

Required:

Variable               Description
────────               ───────────
REDIS_URL              Redis connection string
S3_BUCKET              S3/R2/MinIO bucket name
S3_ACCESS_KEY_ID       S3 credentials
S3_SECRET_ACCESS_KEY   S3 credentials
TOKEN_SECRET           HMAC secret — min 32 chars
NEXTAUTH_SECRET        Auth encryption secret — min 32 chars

Upload limits (all env-configurable):

Variable                 Default       Description
────────                 ───────       ───────────
MAX_FILE_SIZE_MB         2048          Max upload size
MAX_DOWNLOAD_COUNT       100           Max selectable download limit
MAX_EXPIRY_DAYS          30            Max expiry duration
DEFAULT_EXPIRY_DAYS      7             Pre-selected expiry
DEFAULT_DOWNLOAD_COUNT   5             Pre-selected download limit
BLOCKED_EXTENSIONS       exe,bat,...   Comma-separated blocked extensions
AUTH_REQUIRED_ABOVE_MB   0             Require SSO login above this size (0 = disabled)
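A sketch of how these limits might be read with their documented defaults (assumed helpers, not the project's lib/config.ts verbatim):

```typescript
// Read a non-negative integer from the environment, falling back to the
// documented default when unset. Throws on malformed values.
function intEnv(
  name: string,
  fallback: number,
  env: Record<string, string | undefined> = process.env,
): number {
  const raw = env[name];
  if (raw === undefined || raw === "") return fallback;
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 0) throw new Error(`Invalid ${name}: ${raw}`);
  return n;
}

function uploadLimits(env: Record<string, string | undefined> = process.env) {
  return {
    maxFileSizeMb: intEnv("MAX_FILE_SIZE_MB", 2048, env),
    maxDownloadCount: intEnv("MAX_DOWNLOAD_COUNT", 100, env),
    maxExpiryDays: intEnv("MAX_EXPIRY_DAYS", 30, env),
    defaultExpiryDays: intEnv("DEFAULT_EXPIRY_DAYS", 7, env),
    defaultDownloadCount: intEnv("DEFAULT_DOWNLOAD_COUNT", 5, env),
    authRequiredAboveMb: intEnv("AUTH_REQUIRED_ABOVE_MB", 0, env), // 0 = disabled
  };
}
```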

Roadmap

✅ Phase 1 — Core (complete)

  • Upload zone with drag & drop
  • Client-side AES-GCM 256-bit encryption (Web Crypto API)
  • Direct S3 upload via presigned URLs (encrypted blob never passes through server)
  • Redis metadata store with TTL-based auto-expiry
  • Download counter (atomic, race-condition safe)
  • Password protection (SHA-256 client + bcrypt server)
  • One-time upload tokens (Lua atomic GET+DEL)
  • HMAC-signed owner tokens for self-service revocation
  • Honeypot / canary link system (file-based logging + optional webhook)
  • Share page with QR placeholder
  • Download page with in-browser decryption
  • Security headers (CSP, X-Frame-Options, Referrer-Policy, etc.)
  • Multi-stage Dockerfile (non-root user, health check)
  • docker compose with MinIO + Redis (auto bucket init)
  • Go CLI (upload / info / revoke / config)
  • SSO scaffolding (NextAuth v5, Google + GitHub + OIDC)
  • Health check endpoint

🔄 Phase 2 — Security Hardening

  • Magic bytes validation Verify real file signature on server after upload, not just extension. Block .exe disguised as .jpg. Use file-type npm package. Compare result against declared mimeHash.

  • Async virus scanning Add BullMQ queue + Redis. File stays pending until scan completes. Support ClamAV (self-hosted) and VirusTotal API (env-switched). Quarantine on detection, notify ABUSE_EMAIL, auto-revoke.

  • Abuse reporting Wire up the "Report" button on the download page (placeholder exists). Store reports in a Redis sorted set. Auto-takedown after AUTO_TAKEDOWN_THRESHOLD reports. Admin review queue endpoint at /api/v1/admin/reports.

  • DMCA / takedown form Public form at /dmca. Submit → email to ABUSE_EMAIL + Redis record.

  • QR code Replace SVG placeholder with qrcode.react in share page.

  • hCaptcha invisible Add on download page to throttle automated access. Env-switchable.

  • S3 lifecycle rule setup Document and automate a safety-net lifecycle rule that deletes objects older than MAX_EXPIRY_DAYS + 1 day (catches Redis eviction edge cases).

  • Admin dashboard Protected page at /admin (gated by ADMIN_SECRET). Shows: active file count, pending reports, scan queue status, canary hits.


🔜 Phase 3 — Auth, API Keys & CLI Polish

  • Enforce SSO for large files Check AUTH_REQUIRED_ABOVE_MB in /upload/init. Return 401 with {"error": "login_required"} if threshold exceeded and no session.

  • User file dashboard Authenticated users see their active files, download counts, remaining TTL. Backed by nullbox:user:{userId}:files ZSet (already implemented in Redis layer).

  • SAML / enterprise OIDC Uncomment and test OIDC provider in lib/auth.ts. Validate with Keycloak (docker compose service for testing).

  • API key management Dashboard UI to generate/revoke API keys. Keys stored in Redis as nullbox:apikey:{hash} → { userId, scopes, createdAt }. Scopes: upload, read, revoke, admin. Pass as Authorization: Bearer <key>.

  • CLI: API key support nullbox config apikey <key> already scaffolded. Wire up to server auth.

  • CLI: binary releases GitHub Actions matrix build for linux/darwin/windows × amd64/arm64. Publish to GitHub Releases. Add install script.

  • CLI: natural language expiry --expires "next friday" using github.com/olebedev/when.

  • OpenAPI spec Auto-generate from route types, expose at /api/v1/openapi.json. Add Swagger UI at /api/docs.

  • File size UX fix Download page shows encrypted blob size. Show estimated plaintext size (encrypted - 12 byte IV - 16 byte GCM tag ≈ plaintext).


💡 Phase 4 — AI Features (SSO-gated, opt-in)

All AI features require AI_ENABLED=true in env and an active SSO session. Users explicitly opt in via account settings. No AI runs on unauthenticated uploads.

  • Smart defaults Suggest expiry/limit based on file type detected client-side. PDF → 7d/5dl · Archive → 3d/1dl · Image → 30d/∞

  • Secure share assistant Optional text field: "Who are you sending this to?" Claude API call → risk assessment + recommended settings.

  • Auto-description Generate a safe, non-identifying share message to use in place of filename in link previews and emails. Prevents metadata leakage in social previews.

  • Upload anomaly detection Analyze upload patterns without touching encrypted content: file size distribution, upload velocity, time-of-day, account age. Flag accounts for review. Model runs server-side, SSO users only.

  • CLI: context-aware suggestions nullbox upload --context "sending to client" → AI suggests --downloads 1 --days 3 --password.


Contributing

  1. Fork the repo
  2. Create a feature branch: git checkout -b feat/your-feature
  3. Follow the conventions in AGENTS.md
  4. Type check: npm run type-check
  5. Lint: npm run lint
  6. Open a PR with a clear description of what and why

License

MIT


zero-knowledge · open source · self-hostable
