A high-performance, production-ready Go microservice for handling orders. It consumes order data from Kafka, stores it in PostgreSQL with transactional integrity, caches it in-memory for fast retrieval, and serves it via a REST API and a modern Web UI.
- Event-Driven Architecture: Consumes orders from Kafka asynchronously.
- Robust Storage: Uses PostgreSQL with GORM.
- Transactional Integrity: Ensures atomicity when saving orders and items.
- Connection Retries: Resilient startup logic for database connections.
- High Performance:
  - In-Memory Caching: Implements `go-cache` with TTL and automatic cleanup to prevent memory leaks.
- Reliability:
  - Graceful Shutdown: Handles `SIGTERM`/`SIGINT` to ensure in-flight requests and database operations complete safely.
  - Input Validation: Uses `validator/v10` to ensure data integrity before processing.
- Quality Assurance:
- Unit & Integration Tests: Comprehensive test coverage.
- Linting: Strictly follows Go coding standards.
- User Interface: Modern, responsive Web UI to view order details.
- Language: Go 1.25+
- Database: PostgreSQL 15
- Message Broker: Apache Kafka + Zookeeper
- Libraries:
  - `gorilla/mux`: HTTP routing
  - `gorm`: ORM & database management
  - `sarama`: Kafka client
  - `go-cache`: In-memory caching
  - `validator`: Struct validation
  - `gofakeit`: Realistic data generation (for testing)
```
├── cmd/
│   ├── server/        # Main application entry point
│   └── producer/      # Data generator for Kafka
├── internal/
│   ├── cache/         # In-memory caching layer
│   ├── config/        # Configuration management
│   ├── handlers/      # HTTP handlers
│   ├── kafka/         # Kafka consumer logic
│   └── repository/    # Database access layer
├── migrations/        # SQL migration files
├── web/               # Static frontend assets
├── tests/             # Integration tests
├── docker-compose.yml # Infrastructure (DB, Kafka, Zookeeper)
└── Makefile           # Build and run commands
```
- Go 1.25+
- Docker & Docker Compose
- Make (optional, for convenience)
Start PostgreSQL, Kafka, and Zookeeper using Docker Compose:
```bash
docker-compose up -d
```

The service uses a default configuration suitable for local development. You can customize it via a `.env` file if needed (see `.env.example`).
Start the main API server and Kafka consumer:
```bash
make run
```

The server will start on port 8080.
Simulate incoming orders by running the producer script:
```bash
make producer
```

This sends random JSON order data to the Kafka topic.
Open your browser and navigate to `http://localhost:8080`.

Enter an Order ID (e.g., from the producer output) to view its details.
Run all unit and integration tests:
```bash
make test
```

Run linters to ensure code quality:

```bash
make lint
```

| Method | Endpoint | Description |
|---|---|---|
| GET | `/` | Serve Web UI |
| GET | `/order/{id}` | Get order JSON by ID |
- Fork the repository
- Create your feature branch (`git checkout -b feat/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feat/amazing-feature`)
- Open a Pull Request
Distributed under the MIT License.