A Node.js Express API with IP-based rate limiting and comprehensive audit logging.
This project implements a rate-limited API endpoint that tracks and restricts requests based on IP addresses. It includes real-time logging to files and periodic batch uploads to MongoDB for persistent storage and analytics.
- ✅ IP-based rate limiting (10 requests per 60 seconds)
- ✅ Real-time audit logging to file system
- ✅ Periodic batch upload of logs to MongoDB
- ✅ Express.js REST API
- ✅ Automated testing script included
- Node.js - Runtime environment
- Express.js (v5.2.1) - Web framework
- MongoDB (v7.0.0) - NoSQL database with official Node.js driver
- MongoDB Atlas - Cloud-hosted database service
- dotenv (v17.2.3) - Environment variable management
- File System (fs) - Node.js built-in module for local log buffering
- Nodemon - Development server with auto-restart
- Node.js (v14 or higher)
- MongoDB Atlas account (or local MongoDB instance)
- npm or yarn package manager
1. Clone the repository

   ```bash
   git clone https://github.com/ihbadhon/Pimjo_backend_task.git
   cd Pimjo_Assignment
   ```

2. Install dependencies

   ```bash
   npm install
   ```

3. Configure environment variables

   Create a `.env` file in the root directory with the following:

   ```env
   PORT=5000
   DB_USER=your_mongodb_username
   DB_PASS=your_mongodb_password
   ```

   For testing purposes you can use:

   ```env
   DB_USER=Pimjo_Database
   DB_PASS=IQ2Fyot5BNN00KPt
   ```

4. Start the server

   ```bash
   npm start
   ```

   The server will start on `http://localhost:5000`.

5. Test the rate limiting (optional)

   In a new terminal:

   ```bash
   node test-rate-limit.js
   ```
A simple action endpoint protected by rate limiting.

Request:

```bash
curl -X POST http://localhost:5000/api/action \
  -H "Content-Type: application/json" \
  -d '{"test": "data"}'
```

Success Response (200):

```json
{
  "message": "Operation completed successfully"
}
```

Rate Limit Exceeded Response (429):

```json
{
  "error": "Too many requests. Please try again later."
}
```

- Window: 60 seconds (60,000 milliseconds)
- Limit: 10 requests per window
- Tracking Method: IP address-based
- Implementation: In-memory Map storage
- Each incoming request is tracked by the client's IP address
- The system maintains a sliding window of timestamps for each IP
- When a request arrives:
- Old timestamps outside the 60-second window are removed
- The current timestamp is added to the list
- If the count exceeds 10 requests, the request is blocked
- All requests (allowed and blocked) are logged for audit purposes
```
Time: 0s  → Requests 1-10: ✅ Allowed (200 OK)
Time: 5s  → Request 11:    ❌ Blocked (429 Too Many Requests)
Time: 61s → Request 12:    ✅ Allowed (first requests expired from window)
```
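The steps above can be sketched as Express-style middleware. This is a minimal illustration; the name `maxReqCheck` and the audit hooks are assumptions, not necessarily the project's exact code in `middleware/MaxReqCheck.js`:

```javascript
// Sliding-window rate limiter sketch (illustrative, not the project's exact code).
const WINDOW_MS = 60_000;   // 60-second window
const MAX_REQUESTS = 10;    // allowed requests per window
const requests = new Map(); // ip -> array of request timestamps

function maxReqCheck(req, res, next) {
  const now = Date.now();
  // Drop timestamps that have fallen outside the window, then record this request.
  const timestamps = (requests.get(req.ip) || []).filter((t) => now - t < WINDOW_MS);
  timestamps.push(now);
  requests.set(req.ip, timestamps);

  if (timestamps.length > MAX_REQUESTS) {
    // Audit hook would log { ip, endpoint, timestamp, status: 'blocked' } here.
    return res.status(429).json({ error: 'Too many requests. Please try again later.' });
  }
  // Audit hook would log status: 'allowed' here.
  next();
}
```

Because blocked requests also record a timestamp, a client that keeps hammering the endpoint keeps its window full and stays blocked until it backs off.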
This project uses a two-tier storage strategy combining file-based buffering with MongoDB persistence:
- Purpose: Track request timestamps for rate limiting
- Why: Ultra-fast lookups (O(1)) for real-time rate limit checks
- Tradeoff: Data lost on server restart, but acceptable since rate limits reset anyway
- Purpose: Immediate audit log writes without blocking requests
- Location: `logs/audit.log`
- Why:
- Non-blocking I/O for fast request processing
- Resilient against database connection issues
- Acts as a buffer for batch uploads
- Tradeoff: Logs could be lost if server crashes before upload, mitigated by frequent uploads
- Purpose: Long-term storage and analytics
- Collection: `PimjoLogger.Logs`
- Upload Frequency: Every 60 seconds
- Why:
- Scalable for large audit log volumes
- Enables complex queries and analytics
- Cloud-based (Atlas) ensures high availability
- Document model fits JSON log structure perfectly
- Tradeoff: Network latency and cost, mitigated by batch uploads
| Requirement | Solution | Benefit |
|---|---|---|
| Fast rate limiting | In-memory Map | Sub-millisecond lookups |
| Request performance | File buffering | Non-blocking writes |
| Data persistence | MongoDB batch upload | Durable storage + analytics |
| Failure resilience | File backup system | Logs preserved during DB outages |
1. Single Server Deployment
   - Rate limiting is per-server instance
   - For distributed systems, would need Redis or similar shared storage

2. IP Address Reliability
   - Assumes `req.ip` accurately represents the client
   - May need the `trust proxy` setting if behind load balancers

3. Log Retention
   - No automatic log cleanup implemented
   - MongoDB collection grows indefinitely (consider TTL indexes in production)

4. Network Reliability
   - MongoDB Atlas connection assumed to be stable
   - Temporary outages handled gracefully with file backup
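For the log-retention point, a TTL index would let MongoDB expire old audit logs automatically. A hedged sketch (the helper name is hypothetical, and note that TTL indexes only act on BSON Date fields, so the ISO-string timestamps shown in this README would need to be stored as Dates first):

```javascript
// Hypothetical helper: create a TTL index so MongoDB deletes audit logs
// older than the given age. Requires `timestamp` to be a BSON Date.
const THIRTY_DAYS_SECONDS = 30 * 24 * 60 * 60;

async function ensureLogTtlIndex(collection, expireAfterSeconds = THIRTY_DAYS_SECONDS) {
  // `collection` is expected to expose createIndex() like the official driver.
  return collection.createIndex({ timestamp: 1 }, { expireAfterSeconds });
}
```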
1. Rate limit storage
   - ✅ Chosen: In-memory Map
   - ❌ Alternative: Redis or database storage
   - Reason: Simplicity and speed for MVP; rate limit data is temporary anyway

2. Log upload strategy
   - ✅ Chosen: File buffer + batch upload (60s interval)
   - ❌ Alternative: Direct database writes per request
   - Reason: Better performance; acceptable 60s delay for analytics

3. Rate limiting algorithm
   - ✅ Chosen: Sliding window
   - ❌ Alternative: Fixed time buckets
   - Reason: More accurate rate limiting; prevents bursts at window edges

4. Client identification
   - ✅ Chosen: IP-based rate limiting
   - ❌ Alternative: API keys or user authentication
   - Reason: Simpler implementation; sufficient for public endpoints

5. Database choice
   - ✅ Chosen: MongoDB
   - ❌ Alternative: PostgreSQL/MySQL
   - Reason: JSON log structure fits document model; no relational queries needed
Pimjo_Assignment/
├── index.js # Main application entry point
├── package.json # Dependencies and scripts
├── test-rate-limit.js # Automated testing script
├── config/
│ └── env.js # Environment configuration loader
├── logs/
│ └── audit.log # Temporary audit log buffer
├── middleware/
│ └── MaxReqCheck.js # Rate limiting middleware
├── routes/
│ └── Router.js # API route definitions
├── services/
│ └── service.js # Audit log writing service
└── Utils/
└── logger.js # MongoDB log upload utility
The included `test-rate-limit.js` script sends 15 requests to test the rate limiting:

```bash
node test-rate-limit.js
```

Expected Result:
- First 10 requests: `200 OK`
- Next 5 requests: `429 Too Many Requests`
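For reference, the script's behavior can be approximated like this. It is a sketch assuming Node 18+ (global `fetch`); the actual `test-rate-limit.js` may differ:

```javascript
// Send a burst of 15 POST requests and report each status code.
async function burst(url = 'http://localhost:5000/api/action', total = 15) {
  const statuses = [];
  for (let i = 1; i <= total; i++) {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ test: 'data' }),
    });
    statuses.push(res.status);
    console.log(`Request ${i}: ${res.status}`);
  }
  return statuses;
}

// burst(); // run against a live server started with `npm start`
```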
- Location: `logs/audit.log`
- Format: NDJSON (newline-delimited JSON)
- Rotation: Every 60 seconds (uploaded then deleted)

- Database: `PimjoLogger`
- Collection: `Logs`
- Fields:

```
{
  "ip": "::1",
  "endpoint": "/api/action",
  "timestamp": "2025-12-29T10:30:45.123Z",
  "status": "allowed" | "blocked"
}
```