architectures/event-driven-architecture/data-flow.md

---
description: Vibe coding guidelines for the asynchronous request and data flow lifecycle in an Event-Driven Architecture (EDA).
technology: Event-Driven Architecture
domain: Architecture
complexity: Architect
last_evolution: 2026-03-27
vibe_coding_ready: true
tags: [eda, data-flow, sequence-diagram, asynchronous, messaging, event-lifecycle]
topic: Event-Driven Data Flow
---

<div align="center">
# 📊 EDA Data Flow (Sequence Blueprint)
</div>

---

This document illustrates the execution lifecycle of a distributed, asynchronous event-driven system. It defines the path an initial synchronous request takes as it propagates across independent microservices via a message broker.

## Mental Model & Asynchronous Lifecycle

The architectural contract is simple:
- The **API Gateway** accepts the synchronous HTTP request from the user.
- It immediately validates the request, queues a Command/Event on the **Message Broker (Kafka/RabbitMQ)**, and returns `HTTP 202 Accepted`.
- Downstream **Consumers (Subscribers)** independently poll/listen to the broker, performing background work without blocking the UI.
- Finally, the UI relies on WebSocket, Server-Sent Events (SSE), or polling for real-time completion status.

> [!IMPORTANT]
> **Data Flow Constraint:** A microservice handling an event MUST NOT synchronously invoke another microservice. It must process the event, update its localized database, and optionally emit a subsequent domain event.
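The accept-and-acknowledge leg of this lifecycle can be sketched as below. `InMemoryBroker`, `CheckoutGateway`, and the DTO shape are illustrative stand-ins, not a prescribed API; a real gateway would sit behind a framework router and a Kafka producer.

```typescript
interface CartDto { cartId: string; items: string[] }

interface BrokerMessage { topic: string; key: string; payload: unknown }

// Stand-in for a real producer; publish() would call producer.send() in Kafka.
class InMemoryBroker {
  readonly published: BrokerMessage[] = [];
  async publish(msg: BrokerMessage): Promise<void> {
    this.published.push(msg);
  }
}

class CheckoutGateway {
  constructor(private readonly broker: InMemoryBroker) {}

  // POST /checkout: validate, enqueue, and acknowledge with 202
  // without waiting for any downstream service to finish.
  async postCheckout(cart: CartDto): Promise<{ status: number; body: { orderId: string } }> {
    if (cart.items.length === 0) {
      return { status: 400, body: { orderId: '' } };
    }
    const orderId = `order-${cart.cartId}`;
    await this.broker.publish({
      topic: 'checkout.initiated',
      key: orderId,
      payload: { orderId, items: cart.items },
    });
    return { status: 202, body: { orderId } }; // HTTP 202 Accepted
  }
}
```

Note that the gateway never touches the Order, Payment, or Notification services directly; its only dependencies are validation and the broker port.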

### Sequence Diagram: Distributed E-Commerce Checkout

```mermaid
sequenceDiagram
autonumber
actor Client as User (Frontend)
participant API as API Gateway (REST)
participant Broker as Message Broker (Kafka)
participant OrderSvc as Order Service
participant PaySvc as Payment Service
participant NotifySvc as Notification Service

Client->>API: POST /checkout (Cart DTO)
API-->>Broker: Publish [CheckoutInitiatedEvent]
API-->>Client: HTTP 202 Accepted (Order Pending)

Broker-->>OrderSvc: Consume [CheckoutInitiatedEvent]
OrderSvc->>OrderSvc: Create Pending Order (DB)
OrderSvc-->>Broker: Publish [OrderCreatedEvent]

Broker-->>PaySvc: Consume [OrderCreatedEvent]
PaySvc->>PaySvc: Process Stripe Payment

alt Payment Success
PaySvc-->>Broker: Publish [PaymentSucceededEvent]
Broker-->>OrderSvc: Consume [PaymentSucceededEvent]
OrderSvc->>OrderSvc: Update Order Status -> Confirmed
Broker-->>NotifySvc: Consume [PaymentSucceededEvent]
NotifySvc->>Client: Push Notification / Email (Success)
else Payment Failure
PaySvc-->>Broker: Publish [PaymentFailedEvent]
Broker-->>OrderSvc: Consume [PaymentFailedEvent]
OrderSvc->>OrderSvc: Update Order Status -> Failed
Broker-->>NotifySvc: Consume [PaymentFailedEvent]
NotifySvc->>Client: Push Notification / Email (Failure)
end
```

---

## The Outbox Pattern (Reliable Publishing)

To make the dual write safe (saving state in the local DB and publishing the event to Kafka, where either step can succeed while the other fails), EDA relies on the **Transactional Outbox Pattern**.

1. The service begins a local DB transaction.
2. The service saves business entity data (e.g., `orders` table).
3. The service inserts an event record in an `outbox` table in the SAME transaction.
4. The service commits the transaction.
5. A background process (e.g., Debezium, CDC) reads the `outbox` table and publishes the messages to Kafka, ensuring "at-least-once" delivery.
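Step 5 can be sketched as a polling relay (a simplified stand-in for Debezium-style CDC). The `OutboxRepo` and `Producer` interfaces below are assumptions made for illustration, not part of any library:

```typescript
interface OutboxRow { id: number; topic: string; key: string; payload: string }

interface OutboxRepo {
  fetchUnpublished(limit: number): Promise<OutboxRow[]>;
  markPublished(ids: number[]): Promise<void>;
}

interface Producer {
  send(topic: string, key: string, value: string): Promise<void>;
}

class OutboxRelay {
  constructor(private repo: OutboxRepo, private producer: Producer) {}

  // One polling tick: publish first, mark after. A crash between the two
  // steps re-sends the batch on the next tick, which is exactly the
  // "at-least-once" guarantee (consumers must dedupe on event ID).
  async tick(batchSize = 50): Promise<number> {
    const rows = await this.repo.fetchUnpublished(batchSize);
    for (const row of rows) {
      await this.producer.send(row.topic, row.key, row.payload);
    }
    await this.repo.markPublished(rows.map((r) => r.id));
    return rows.length;
  }
}
```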

---

<div align="center">
[Back to Main Blueprint](./readme.md) <br><br>
<b>Master the event lifecycle to prevent distributed monoliths! 🌊</b>
</div>

architectures/event-driven-architecture/folder-structure.md

---
description: Vibe coding guidelines for the folder structure and structural hierarchy of Event-Driven Architecture (EDA) projects.
technology: Event-Driven Architecture
domain: Architecture
complexity: Architect
last_evolution: 2026-03-27
vibe_coding_ready: true
tags: [eda, folder-structure, architecture-hierarchy, backend, microservices]
topic: Event-Driven Folder Structure
---

<div align="center">
# 📁 EDA Folder Structure (Hierarchy Blueprint)
</div>

---

This document outlines the optimal 2026-grade folder structure for an Event-Driven microservice (or bounded context). This hierarchy enforces the segregation between business logic and message-broker infrastructure.

## Folder Hierarchy (Mental Model)

A robust EDA microservice separates its core domain from its external adapters (Publishers and Subscribers). The top-level layout aligns closely with DDD or Clean Architecture, with event handlers acting as secondary entry points alongside HTTP controllers.

> [!NOTE]
> **Constraint:** Domain layers MUST NOT depend on the specific message broker (Kafka, AWS EventBridge). Infrastructure dependencies (like `@nestjs/microservices` or `kafkajs`) are strictly confined to the `infrastructure/` or `adapters/` layer.

### System Diagram: Layered Hierarchy

```mermaid
graph TD
Root[Microservice Root] --> Domain[core/domain]
Root --> App[core/application]
Root --> Infra[infrastructure]

Infra --> DB["database (Adapters)"]
Infra --> Msg["messaging (Broker Integrations)"]

Msg --> Pub["publishers (Producers)"]
Msg --> Sub["subscribers (Consumers)"]
Msg --> Config[kafka-config]

App --> Handlers[Command/Event Handlers]
Handlers -.-> Pub

%% Apply strict styling tokens
classDef default fill:#e1f5fe,stroke:#03a9f4,stroke-width:2px,color:#000;
classDef component fill:#e8f5e9,stroke:#4caf50,stroke-width:2px,color:#000;
classDef layout fill:#f3e5f5,stroke:#9c27b0,stroke-width:2px,color:#000;

class Root layout;
class Domain,App,Handlers component;
class Infra,DB,Msg,Pub,Sub,Config default;
```

---

## Detailed Directory Tree

```text
src/
├── 📁 core/ # Pure business logic (No infra dependencies)
│ ├── 📁 domain/ # Aggregates, Value Objects, Domain Events
│ │ ├── events/ # Internal domain event types (e.g., OrderCreated)
│ │ └── models/ # Business entities
│ └── 📁 application/ # Use cases orchestration
│ ├── commands/ # Sync logic executed before emitting events
│ └── handlers/ # Logic that responds to consumed events
├── 📁 infrastructure/ # Framework and Broker integrations
│ ├── 📁 messaging/ # The Event-Driven core
│ │ ├── 📁 config/ # Kafka client configuration, schemas
│ │ ├── 📁 publishers/ # Outbound adapters (Emit events to Broker)
│ │ │ └── OrderPublisher.ts # Implements IEventPublisher from Core
│ │ ├── 📁 subscribers/ # Inbound adapters (Listen to Broker queues)
│ │ │ └── PaymentConsumer.ts # Routes Kafka messages to Application handlers
│ │ └── 📁 schemas/ # AsyncAPI/Avro/Protobuf message schemas
│ └── 📁 database/ # DB adapters (Repositories, Outbox pattern)
└── main.ts # Application bootstrap (Starts HTTP + Consumers)
```

### Explanation of Key Directories

1. **`core/domain/events/`**: These are internal representations of an event. They are purely business-focused (e.g., `OrderPlacedDomainEvent`). They know nothing about Kafka serialization.
2. **`infrastructure/messaging/publishers/`**: This directory contains implementations of your output ports. It serializes the internal domain event into a payload (JSON/Avro) and publishes it to the external topic.
3. **`infrastructure/messaging/subscribers/`**: This directory acts exactly like HTTP Controllers. A consumer listens to a Kafka topic, deserializes the message, and hands it off to a `core/application/handlers/` class to perform the actual business logic.
4. **`infrastructure/messaging/schemas/`**: Strongly-typed schemas (like Protobuf or Avro) defining the contract for events passing through the broker.
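The dependency rule across these directories can be sketched as follows. `IEventPublisher`, `PlaceOrderUseCase`, and `KafkaOrderPublisher` are hypothetical names chosen to match the tree above, and the broker is faked in memory for illustration:

```typescript
// core/domain/events/ -- purely business-focused, knows nothing about Kafka
interface OrderPlacedDomainEvent {
  eventId: string;
  orderId: string;
  occurredOn: Date;
}

// core/application/ -- depends only on the output port, never on a broker SDK
interface IEventPublisher {
  publish(event: OrderPlacedDomainEvent): Promise<void>;
}

class PlaceOrderUseCase {
  constructor(private publisher: IEventPublisher) {}
  async execute(orderId: string): Promise<void> {
    // ...persist the order, then emit through the port
    await this.publisher.publish({ eventId: `evt-${orderId}`, orderId, occurredOn: new Date() });
  }
}

// infrastructure/messaging/publishers/ -- the ONLY layer that names a broker
class KafkaOrderPublisher implements IEventPublisher {
  readonly sent: string[] = []; // stand-in for a real kafkajs producer
  async publish(event: OrderPlacedDomainEvent): Promise<void> {
    this.sent.push(JSON.stringify({ topic: 'orders.placed', key: event.orderId }));
  }
}
```

Swapping Kafka for EventBridge then means writing a new class in `publishers/` while `core/` stays untouched.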

---

<div align="center">
[Back to Main Blueprint](./readme.md) <br><br>
<b>A clean directory tree prevents tightly-coupled broker dependencies! 📁</b>
</div>

architectures/event-driven-architecture/implementation-guide.md

---
description: Vibe coding implementation guidelines, strict rules, code patterns, and constraints for implementing Event-Driven Architecture (EDA) using 2026 standards.
technology: Event-Driven Architecture
domain: Architecture
complexity: Architect
last_evolution: 2026-03-27
vibe_coding_ready: true
tags: [eda, implementation-guide, kafka, microservices, typescript, nestjs, architecture-patterns]
topic: Event-Driven Implementation Guide
---

<div align="center">
# 🛠️ EDA Implementation Guide (Code Blueprint)
</div>

---

This blueprint details strict coding patterns and anti-patterns for implementing Event-Driven Architecture, ensuring "at-least-once" delivery, schema registry compliance, and robust idempotency.

> [!IMPORTANT]
> **Implementation Contract:** All code must adhere to 2026 modern backend standards (Node.js 24+, TypeScript 5.5+, strict types, decorators, or class-based dependency injection). Services must integrate safely with message brokers (Kafka) without tightly coupling business logic.

## Entity & Handler Relationships

```mermaid
classDiagram
class DomainEvent {
+String eventId
+String aggregateId
+Date occurredOn
+Object payload
}
class EventPublisher {
<<interface>>
+publish(DomainEvent) void
}
class KafkaAdapter {
-Producer producer
+publish(DomainEvent) void
}
class EventHandler {
+handle(DomainEvent) void
}

EventPublisher <|.. KafkaAdapter : implements
EventHandler ..> DomainEvent : handles
EventPublisher ..> DomainEvent : publishes
```

---

## 1. Idempotent Consumers (Crucial)

Because Kafka or RabbitMQ may deliver the same message more than once (e.g., during a consumer rebalance), handlers must be idempotent: processing the same `eventId` twice MUST NOT duplicate the business outcome (e.g., charging a credit card twice).

### ❌ Bad Practice
```typescript
class PaymentEventHandler {
  async handle(event: OrderCreatedEvent) {
    // ❌ Blindly processing the payment every time the event is received!
    // A duplicate Kafka message will charge the user again.
    await this.stripeService.charge(event.payload.amount);
    await this.db.payments.insert({ orderId: event.aggregateId, status: 'PAID' });
  }
}
```

### ✅ Best Practice
```typescript
class PaymentEventHandler {
  async handle(event: OrderCreatedEvent) {
    // ✅ 1. Check if we've already processed this specific event ID
    const alreadyProcessed = await this.db.processedEvents.exists(event.eventId);
    if (alreadyProcessed) {
      this.logger.warn(`Event ${event.eventId} already processed. Skipping.`);
      return;
    }

    // ✅ 2. Execute business logic and the dedup record in ONE transaction.
    // A UNIQUE constraint on processedEvents.id makes a concurrently delivered
    // duplicate fail the insert and roll the whole transaction back.
    await this.db.transaction(async (tx) => {
      await this.stripeService.charge(event.payload.amount);
      await tx.payments.insert({ orderId: event.aggregateId, status: 'PAID' });

      // ✅ 3. Record the event ID to prevent duplicate processing
      await tx.processedEvents.insert({ id: event.eventId, processedAt: new Date() });
    });
  }
}
```

---

## 2. The Transactional Outbox Pattern

To solve the "Dual-Write Problem" (reliably saving state to the DB and publishing to Kafka), we use an Outbox table. Without it, if the application crashes after saving to the DB but before publishing to Kafka, the message is permanently lost.

### ❌ Bad Practice
```typescript
class OrderService {
  async createOrder(data: CreateOrderDto) {
    // ❌ Dual-write problem!
    const order = await this.db.orders.insert(data); // 1. Save to DB

    // If the server crashes HERE, the event is never published,
    // and downstream services never know the order was created.

    await this.kafkaProducer.send('orders.created', order); // 2. Publish to Broker
  }
}
```

### ✅ Best Practice
```typescript
class OrderService {
  async createOrder(data: CreateOrderDto) {
    // ✅ The Outbox Pattern: save BOTH the business entity and the event
    // in the exact same ACID database transaction.
    await this.db.transaction(async (tx) => {
      const order = await tx.orders.insert(data);

      const outboxEvent = {
        aggregateType: 'Order',
        aggregateId: order.id,
        eventType: 'OrderCreated',
        payload: JSON.stringify(order),
        createdAt: new Date(),
      };

      await tx.outbox.insert(outboxEvent); // Writes only to a local DB table
    });

    // A separate background process (e.g., Debezium or a polling worker)
    // reads the 'outbox' table and safely publishes to Kafka.
  }
}
```

---

## 3. Strictly Typed Schemas (Schema Registry)

Microservices evolve independently. If a publisher changes the shape of a JSON event payload, all downstream subscribers will break. Always enforce a Schema Registry (Avro, Protobuf, JSON Schema) for all events.

### ✅ Best Practice (Avro Example)
```typescript
// 1. Define a strict Avro schema for the event
const orderCreatedSchema = {
  type: 'record',
  name: 'OrderCreated',
  fields: [
    { name: 'orderId', type: 'string' },
    { name: 'amount', type: 'double' },
    { name: 'customerId', type: 'string' },
  ],
  // Backward-compatibility rules are enforced by the Confluent Schema Registry
};

class OrderKafkaPublisher {
  async publish(event: DomainEvent) {
    // 2. The payload is validated and serialized against the Schema Registry
    // before it ever reaches the Kafka topic.
    const encodedPayload = await this.schemaRegistry.encode(
      'orders.created-value',
      event.payload
    );

    await this.producer.send({
      topic: 'orders.created',
      messages: [{ key: event.aggregateId, value: encodedPayload }]
    });
  }
}
```
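The subscriber side of the same contract might look like the sketch below. `SchemaRegistryClient` is a hand-rolled interface standing in for a real registry client, not a library API; the point is that decoding against the schema happens before any business logic runs:

```typescript
// Shape of the decoded OrderCreated payload, mirroring the Avro schema above.
interface OrderCreatedPayload { orderId: string; amount: number; customerId: string }

// Hypothetical port for a schema-registry-aware deserializer.
interface SchemaRegistryClient {
  decode(buffer: Buffer): Promise<OrderCreatedPayload>;
}

class OrderCreatedConsumer {
  constructor(
    private schemaRegistry: SchemaRegistryClient,
    private handler: (event: OrderCreatedPayload) => Promise<void>,
  ) {}

  async onMessage(raw: Buffer): Promise<void> {
    // Throws on schema mismatch (and would dead-letter in a full setup),
    // so malformed payloads never reach the application handler.
    const event = await this.schemaRegistry.decode(raw);
    await this.handler(event);
  }
}
```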

---

<div align="center">
[Back to Main Blueprint](./readme.md) <br><br>
<b>Master these implementation constraints to guarantee asynchronous consistency! 🛠️</b>
</div>