59 changes: 59 additions & 0 deletions .env.example
@@ -84,6 +84,65 @@ NOTION_CLI_CACHE_PAGE_TTL=60000
# Blocks are most dynamic, use shortest TTL
NOTION_CLI_CACHE_BLOCK_TTL=30000

# ============================================
# Performance Optimizations
# ============================================

# Enable request deduplication to prevent duplicate concurrent API calls
# Default: true
# Set to false to disable deduplication
NOTION_CLI_DEDUP_ENABLED=true

# Block deletion concurrency (when updating pages)
# Default: 5
# Higher values = faster but more API load
NOTION_CLI_DELETE_CONCURRENCY=5

# Child block fetching concurrency (when retrieving pages recursively)
# Default: 10
# Higher values = faster but more API load
NOTION_CLI_CHILDREN_CONCURRENCY=10

# Enable persistent disk cache
# Default: true
# Set to false to disable disk caching (memory cache only)
NOTION_CLI_DISK_CACHE_ENABLED=true

# Maximum disk cache size in bytes
# Default: 104857600 (100MB)
# Cache will automatically remove oldest entries when limit is reached
NOTION_CLI_DISK_CACHE_MAX_SIZE=104857600

# Disk cache sync interval in milliseconds
# Default: 5000 (5 seconds)
# How often to flush dirty cache entries to disk
NOTION_CLI_DISK_CACHE_SYNC_INTERVAL=5000

# Enable HTTP keep-alive for connection reuse
# Default: true
# Set to false to disable keep-alive
NOTION_CLI_HTTP_KEEP_ALIVE=true

# Keep-alive timeout in milliseconds
# Default: 60000 (60 seconds)
# How long to keep idle connections open
NOTION_CLI_HTTP_KEEP_ALIVE_MS=60000

# Maximum concurrent connections
# Default: 50
# Higher values allow more parallel requests
NOTION_CLI_HTTP_MAX_SOCKETS=50

# Maximum pooled idle connections
# Default: 10
# Connections kept open for reuse
NOTION_CLI_HTTP_MAX_FREE_SOCKETS=10

# Request timeout in milliseconds
# Default: 30000 (30 seconds)
# How long to wait for a response
NOTION_CLI_HTTP_TIMEOUT=30000

# ============================================
# Debug Configuration
# ============================================
127 changes: 127 additions & 0 deletions CHANGELOG.md
@@ -7,6 +7,133 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

## [5.9.0] - 2026-02-05

### Added
- **Request deduplication** - Prevents duplicate concurrent API calls for the same resource
- Automatic deduplication of in-flight requests using promise memoization
- Statistics tracking for hits/misses/pending requests
- Configurable via `NOTION_CLI_DEDUP_ENABLED` environment variable
- Integrated with `cachedFetch()` for seamless API call optimization
- Expected 30-50% reduction in duplicate API calls
- **Parallel operations** - Execute bulk operations concurrently for faster performance
- Block deletion in `updatePage()` now runs in parallel (configurable concurrency)
- Child block fetching in `retrievePageRecursive()` now runs in parallel
- Configurable via `NOTION_CLI_DELETE_CONCURRENCY` (default: 5) and `NOTION_CLI_CHILDREN_CONCURRENCY` (default: 10)
- Expected 60-80% faster bulk operations
- **Persistent disk cache** - Maintains cache across CLI invocations
- Cache entries stored in `~/.notion-cli/cache/` directory
- Automatic max size enforcement (default: 100MB)
- Atomic writes prevent corruption
- Configurable via `NOTION_CLI_DISK_CACHE_ENABLED` and `NOTION_CLI_DISK_CACHE_MAX_SIZE`
- Expected 40-60% improved cache hit rate
- **HTTP keep-alive and connection pooling** - Reduces connection overhead
- Reuses HTTPS connections across multiple requests
- Configurable connection pool size (default: 10 free sockets)
- Configurable max concurrent connections (default: 50 sockets)
- Keep-alive timeout configurable (default: 60 seconds)
- Automatic cleanup on command exit
- Expected 10-20% latency improvement
- **Response compression** - Reduces bandwidth usage
- Automatic gzip, deflate, and brotli compression support
- Accept-Encoding headers added to all API requests
- Server automatically compresses responses when supported
- Client automatically decompresses responses
- Expected 60-70% bandwidth reduction for JSON payloads
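The deduplication entry above describes promise memoization of in-flight requests with hit/miss statistics. A minimal sketch of that pattern, assuming nothing about the CLI's actual internals (the `dedupe`, `inflight`, and `stats` names are illustrative):

```typescript
// In-flight request deduplication via promise memoization (illustrative sketch).
// Concurrent callers asking for the same key share one underlying promise.
const inflight = new Map<string, Promise<unknown>>();
const stats = { hits: 0, misses: 0 };

function dedupe<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const pending = inflight.get(key);
  if (pending) {
    // A request for this key is already in flight: reuse its promise.
    stats.hits++;
    return pending as Promise<T>;
  }
  stats.misses++;
  // Remove the entry once settled so later calls trigger a fresh request.
  const p = fn().finally(() => inflight.delete(key));
  inflight.set(key, p);
  return p;
}
```

With this shape, two concurrent calls to `dedupe("page:abc", fetchPage)` issue a single API request and both resolve with its result, which is the behavior the 30-50% duplicate-call reduction figure assumes.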

### Performance
- Request deduplication reduces unnecessary API calls when multiple concurrent requests target the same resource
- Parallel execution of bulk operations significantly reduces total operation time
- Page updates with many blocks complete 60-80% faster
- Recursive page retrieval with many child blocks completes 60-80% faster
- Persistent disk cache maintains cache across CLI invocations
- Subsequent CLI runs benefit from cached data (40-60% improved hit rate)
- Cache survives process restarts and system reboots
- Automatic cleanup of expired entries
- HTTP keep-alive reduces connection overhead
- Connection reuse eliminates TLS handshake for subsequent requests
- 10-20% latency improvement for multi-request operations
- Configurable pool sizes for different workload patterns
- Response compression reduces bandwidth usage
- JSON responses compressed by 60-70% (typical)
- Faster data transfer, especially on slow connections
- Lower bandwidth costs and network usage
- Automatic compression/decompression handled by HTTP client
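The parallel-operation numbers above rest on running bulk work through a bounded-concurrency pool rather than strictly sequentially. A sketch of that pattern under the documented concurrency limits (the `mapWithConcurrency` helper is an assumption, not the CLI's actual API):

```typescript
// Bounded-concurrency map (illustrative sketch): run `fn` over `items`
// with at most `limit` promises in flight, preserving result order.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker repeatedly claims the next unclaimed index until none remain.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker),
  );
  return results;
}
```

Block deletion would then call something like `mapWithConcurrency(blockIds, deleteConcurrency, deleteBlock)`, so total time scales with the slowest batch rather than the sum of all requests.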

### Breaking Changes

**None** - All performance optimizations are backward compatible and can be independently disabled via environment variables.

### Technical Details

- **121 new tests** added across 5 test suites with comprehensive coverage
- Deduplication: 22 tests (94.73% coverage)
- Parallel Operations: 21 tests (timing benchmarks included)
- Disk Cache: 34 tests (83.59% coverage)
- HTTP Agent: 26 tests (78.94% coverage)
- Compression: 18 tests (header validation)
- **Zero new dependencies** - All optimizations use Node.js built-in features
- **Production-ready** - Comprehensive error handling with graceful degradation
- **Lifecycle management** - Proper initialization in `BaseCommand.init()` and cleanup in `BaseCommand.finally()`
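Consistent with the zero-dependency claim, a keep-alive connection pool with the documented defaults can be built entirely from the Node.js standard library. A sketch (the variable and cleanup-function names are illustrative):

```typescript
import * as https from "node:https";

// Shared HTTPS agent mirroring the documented defaults (illustrative sketch).
const keepAliveAgent = new https.Agent({
  keepAlive: true,       // NOTION_CLI_HTTP_KEEP_ALIVE
  keepAliveMsecs: 60_000, // NOTION_CLI_HTTP_KEEP_ALIVE_MS
  maxSockets: 50,         // NOTION_CLI_HTTP_MAX_SOCKETS
  maxFreeSockets: 10,     // NOTION_CLI_HTTP_MAX_FREE_SOCKETS
  timeout: 30_000,        // NOTION_CLI_HTTP_TIMEOUT
});

// Cleanup hook (e.g. from BaseCommand.finally()): closes idle pooled sockets
// so the process can exit promptly.
function destroyAgent(): void {
  keepAliveAgent.destroy();
}
```

Passing one shared agent to every request lets subsequent calls reuse an established TLS session instead of paying a fresh handshake, which is where the 10-20% latency figure comes from.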

### Configuration

All optimizations are configurable via environment variables. See `.env.example` for complete configuration guide.

**Request Deduplication:**
- `NOTION_CLI_DEDUP_ENABLED` (default: true)

**Parallel Operations:**
- `NOTION_CLI_DELETE_CONCURRENCY` (default: 5)
- `NOTION_CLI_CHILDREN_CONCURRENCY` (default: 10)

**Persistent Disk Cache:**
- `NOTION_CLI_DISK_CACHE_ENABLED` (default: true)
- `NOTION_CLI_DISK_CACHE_MAX_SIZE` (default: 104857600 / 100MB)
- `NOTION_CLI_DISK_CACHE_SYNC_INTERVAL` (default: 5000ms)

**HTTP Keep-Alive:**
- `NOTION_CLI_HTTP_KEEP_ALIVE` (default: true)
- `NOTION_CLI_HTTP_KEEP_ALIVE_MS` (default: 60000ms)
- `NOTION_CLI_HTTP_MAX_SOCKETS` (default: 50)
- `NOTION_CLI_HTTP_MAX_FREE_SOCKETS` (default: 10)
- `NOTION_CLI_HTTP_TIMEOUT` (default: 30000ms)

**Response Compression:**
- Always enabled (no configuration needed)
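One plausible way these variables could be resolved against their documented defaults, with unset or malformed values falling back silently (the `envNumber`/`envBool` helpers are hypothetical, not part of the CLI):

```typescript
// Hypothetical env-var resolution with documented fallbacks.
function envNumber(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined ? NaN : Number(raw);
  // Reject NaN/Infinity and missing values; otherwise use the parsed number.
  return Number.isFinite(parsed) ? parsed : fallback;
}

function envBool(name: string, fallback: boolean): boolean {
  const raw = process.env[name];
  // Matches the documented convention: only the literal "false" disables.
  return raw === undefined ? fallback : raw !== "false";
}

const config = {
  dedupEnabled: envBool("NOTION_CLI_DEDUP_ENABLED", true),
  deleteConcurrency: envNumber("NOTION_CLI_DELETE_CONCURRENCY", 5),
  childrenConcurrency: envNumber("NOTION_CLI_CHILDREN_CONCURRENCY", 10),
  diskCacheMaxSize: envNumber("NOTION_CLI_DISK_CACHE_MAX_SIZE", 104_857_600),
  httpTimeout: envNumber("NOTION_CLI_HTTP_TIMEOUT", 30_000),
};
```

Treating only the literal string `"false"` as a disable switch keeps the defaults in effect for empty or typo'd values, matching the "set to false to disable" wording in `.env.example`.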

### Migration Guide

**Upgrading from 5.8.0:**

1. **No code changes required** - All optimizations work automatically
2. **Default settings are optimal** for most use cases
3. **To customize performance**, create a `.env` file with desired settings
4. **To disable specific optimizations**, set corresponding `_ENABLED` flag to `false`
5. **For batch operations**, consider increasing concurrency limits
6. **For memory-constrained environments**, reduce cache sizes

Example `.env` for high-throughput batch processing:
```bash
NOTION_CLI_DELETE_CONCURRENCY=10
NOTION_CLI_CHILDREN_CONCURRENCY=20
NOTION_CLI_HTTP_MAX_SOCKETS=50
NOTION_CLI_DISK_CACHE_MAX_SIZE=104857600
```

### Performance Summary

**Overall improvement: 1.5-2x for batch operations and repeated data access**

Individual phase improvements:
- Request deduplication: 5-15% typical (30-50% best case with concurrent duplicates)
- Parallel operations: 60-70% typical (80% best case for large batches)
- Disk cache: 20-30% improvement across sessions (60% best case with heavy reuse)
- HTTP keep-alive: 5-10% typical (10-20% best case for multi-request operations)
- Response compression: bandwidth reduction varies (wall-clock gains depend on payload size and on whether the server already compresses responses)

See [README.md Performance Optimizations](./README.md#-performance-optimizations-v590) for detailed documentation.

## [5.8.0] - 2026-02-04

### Changed