Event-Driven Architecture for Financial Systems

Financial systems are inherently event-driven. A payment happens. A balance changes. A fraud alert triggers. Myles Ndlovu has built event-driven financial platforms and found that this architecture pattern aligns naturally with how money actually moves.
Why Events for Finance
Traditional request-response architecture works like this:
- Client sends a payment request
- Server validates, debits, credits, sends notification, updates analytics, logs audit trail
- Server returns response
Everything happens synchronously. If the notification service is slow, the entire payment is slow. If the analytics service is down, the payment fails — even though analytics has nothing to do with whether the payment should succeed.
Event-driven architecture decouples these concerns:
- Client sends a payment request
- Server validates, debits, credits, publishes a “PaymentCompleted” event
- Server returns response immediately
- Notification service consumes the event and sends a notification
- Analytics service consumes the event and updates dashboards
- Audit service consumes the event and writes the audit log
Each consumer operates independently. The payment succeeds regardless of downstream service health.
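The decoupling above can be sketched with a minimal in-memory event bus. This is a toy stand-in for Kafka or SQS (the `EventBus` class and handler names are illustrative, not from any library); a real bus delivers events asynchronously and durably:

```typescript
// Minimal in-memory event bus: a toy stand-in for Kafka/SQS.
// Real buses deliver asynchronously and durably.
type Handler = (event: unknown) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(eventType: string, handler: Handler): void {
    const list = this.handlers.get(eventType) ?? [];
    list.push(handler);
    this.handlers.set(eventType, list);
  }

  publish(eventType: string, event: unknown): void {
    // The publisher doesn't know which consumers exist or whether
    // they're healthy; it just hands the event to the bus.
    for (const handler of this.handlers.get(eventType) ?? []) {
      handler(event);
    }
  }
}

const bus = new EventBus();
const sideEffects: string[] = [];

// Each downstream concern is an independent consumer.
bus.subscribe('PaymentCompleted', () => sideEffects.push('notification sent'));
bus.subscribe('PaymentCompleted', () => sideEffects.push('dashboard updated'));
bus.subscribe('PaymentCompleted', () => sideEffects.push('audit row written'));

bus.publish('PaymentCompleted', { transactionId: 'tx-1', amount: 100 });
console.log(sideEffects.length); // 3
```

The key property: adding a fourth consumer changes nothing about the publisher.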
Core Concepts
Events
An event is a record of something that happened. It’s immutable — you don’t update events, you create new ones.
```typescript
interface PaymentCompletedEvent {
  eventId: string;
  eventType: 'PaymentCompleted';
  timestamp: string;
  data: {
    transactionId: string;
    senderId: string;
    recipientId: string;
    amount: number;
    currency: string;
    reference: string;
  };
}
```

Event Bus
The infrastructure that routes events from producers to consumers. Options include:
- Apache Kafka: High-throughput, durable, ordered. Industry standard for financial systems.
- AWS SNS/SQS: Managed, simple, good for moderate scale.
- RabbitMQ: Flexible routing, good for complex workflows.
- Redis Streams: Lightweight, fast, good for simpler use cases.
For financial systems, Kafka is usually the right choice because of its durability guarantees and per-partition ordering.
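Kafka's ordering guarantee applies within a partition, so financial systems typically key each event by account or transaction id: events with the same key land on the same partition and are consumed in publication order. A minimal sketch of that keying logic (the hash and partition count here are illustrative; real Kafka clients apply this for you when you set a message key, using murmur2 by default):

```typescript
// Events with the same key always map to the same partition, so all
// events for one account are consumed in publication order.
// Illustrative hash; Kafka's default partitioner uses murmur2.
function partitionFor(key: string, partitionCount: number): number {
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) | 0; // 32-bit string hash
  }
  return Math.abs(hash) % partitionCount;
}

const p1 = partitionFor('account-42', 12);
const p2 = partitionFor('account-42', 12);
console.log(p1 === p2); // true: same key, same partition
```

Ordering across *different* accounts is not guaranteed, but for a ledger that's usually fine: what matters is that one account's events never arrive out of order.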
Consumers
Services that react to events. Each consumer:
- Subscribes to specific event types
- Processes events at its own pace
- Maintains its own state
- Can be scaled independently
Event Sourcing
Event sourcing takes the event-driven approach further: instead of storing current state, you store the sequence of events that produced that state.
An account’s history is a series of events:
```
AccountOpened(balance: 0)
FundsDeposited(amount: 1000)
PaymentSent(amount: 200)
PaymentReceived(amount: 50)
```

Current balance = replay all events: 0 + 1000 - 200 + 50 = 850
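Deriving the balance is just a fold over the event log. A minimal sketch, with an illustrative event union:

```typescript
type AccountEvent =
  | { type: 'AccountOpened'; balance: number }
  | { type: 'FundsDeposited'; amount: number }
  | { type: 'PaymentSent'; amount: number }
  | { type: 'PaymentReceived'; amount: number };

// Replaying events from the beginning reconstructs current state.
function replayBalance(events: AccountEvent[]): number {
  return events.reduce((balance, e) => {
    switch (e.type) {
      case 'AccountOpened': return e.balance;
      case 'FundsDeposited': return balance + e.amount;
      case 'PaymentSent': return balance - e.amount;
      case 'PaymentReceived': return balance + e.amount;
      default: return balance;
    }
  }, 0);
}

const history: AccountEvent[] = [
  { type: 'AccountOpened', balance: 0 },
  { type: 'FundsDeposited', amount: 1000 },
  { type: 'PaymentSent', amount: 200 },
  { type: 'PaymentReceived', amount: 50 },
];
console.log(replayBalance(history)); // 850
```

Point-in-time queries fall out for free: replay only the events up to the timestamp you care about.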
Benefits for Finance
- Complete audit trail: Every state change is recorded as an event. Regulators love this.
- Point-in-time queries: What was this account’s balance at 3pm yesterday? Replay events up to that timestamp.
- Debugging: Reproduce any bug by replaying the events that led to it.
- Compensation: Reversing a transaction is a new event, not a deletion. The history remains intact.
Trade-offs
- Storage: Storing every event uses more space than storing current state
- Complexity: Replaying events to derive state requires careful implementation
- Performance: Reading current state requires replaying events (mitigated with periodic snapshots)
- Schema evolution: Changing event formats while maintaining backward compatibility is tricky
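The performance trade-off is usually addressed with snapshots: persist the derived state every N events and replay only the tail since the last snapshot. A sketch under illustrative types (`Snapshot` and the delta-only tail events are simplifications, not a real schema):

```typescript
interface Snapshot {
  balance: number;
  lastEventIndex: number; // position in the event log this state reflects
}

// Replay only the events after the snapshot instead of the full history.
function balanceFromSnapshot(
  snapshot: Snapshot,
  tail: { delta: number }[], // events after snapshot.lastEventIndex
): number {
  return tail.reduce((b, e) => b + e.delta, snapshot.balance);
}

const snapshot: Snapshot = { balance: 800, lastEventIndex: 3 };
const tail = [{ delta: 50 }]; // PaymentReceived(amount: 50)
console.log(balanceFromSnapshot(snapshot, tail)); // 850
```

Snapshots are a cache, not a source of truth: if one is lost or a bug corrupts it, you can always rebuild it from the events.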
Practical Patterns
The Outbox Pattern
Problem: You need to update your database AND publish an event, atomically. If the database update succeeds but the event publish fails, your system is inconsistent.
Solution: Write the event to an “outbox” table in the same database transaction as the state change. A separate process reads the outbox and publishes events.
```sql
BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 'sender';
UPDATE accounts SET balance = balance + 100 WHERE id = 'recipient';
INSERT INTO outbox (event_type, payload) VALUES ('PaymentCompleted', '...');
COMMIT;
```

A background process polls the outbox and publishes events, then marks them as published.
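That background relay can be sketched against in-memory stand-ins for the outbox table and the bus (all names here are illustrative; in production the rows come from a SQL query and the publish call goes to your broker):

```typescript
interface OutboxRow {
  id: number;
  eventType: string;
  payload: string;
  published: boolean;
}

// In-memory stand-ins for the outbox table and the event bus.
const outbox: OutboxRow[] = [
  { id: 1, eventType: 'PaymentCompleted', payload: '{"transactionId":"tx-1"}', published: false },
];
const publishedEvents: string[] = [];

// One polling pass: publish unpublished rows, then mark them published.
// Marking AFTER publishing means a crash in between causes a re-publish
// on the next pass -- at-least-once delivery, which is exactly why
// consumers must be idempotent.
function relayOutbox(): void {
  for (const row of outbox) {
    if (row.published) continue;
    publishedEvents.push(row.eventType); // stand-in for bus.publish(...)
    row.published = true;                // stand-in for UPDATE outbox ...
  }
}

relayOutbox();
relayOutbox(); // second pass publishes nothing new
console.log(publishedEvents.length); // 1
```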
Idempotent Consumers
Events might be delivered more than once (at-least-once delivery). Consumers must handle duplicates:
```typescript
async function handlePaymentCompleted(event: PaymentCompletedEvent) {
  const processed = await db.isEventProcessed(event.eventId);
  if (processed) return; // Already handled
  await processEvent(event);
  await db.markEventProcessed(event.eventId);
}
```

Dead Letter Queues
When a consumer fails to process an event after multiple retries, move it to a dead letter queue for manual investigation. Don’t let a single bad event block all subsequent events.
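The retry-then-park behaviour can be sketched as follows (the handler, attempt count, and queue are illustrative; real consumers are async and the DLQ is a real topic or queue, not an array):

```typescript
const deadLetterQueue: { event: unknown; error: string }[] = [];

// Try a handler a few times; park the event in the DLQ instead of
// blocking the stream when it keeps failing. Kept synchronous here
// for brevity -- real handlers are async.
function processWithDlq(
  event: unknown,
  handler: (e: unknown) => void,
  maxAttempts = 3,
): void {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      handler(event);
      return; // success: done, no DLQ
    } catch (err) {
      if (attempt === maxAttempts) {
        deadLetterQueue.push({ event, error: String(err) });
      }
    }
  }
}

// A handler that always fails ends up in the DLQ after maxAttempts.
processWithDlq({ eventId: 'evt-1' }, () => {
  throw new Error('malformed payload');
});
console.log(deadLetterQueue.length); // 1
```

The important property is that the failing event is set aside and the consumer moves on to the next offset, so one poison message never stalls the whole partition.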
Monitoring Event-Driven Systems
Event-driven systems are harder to monitor than request-response systems because the flow is asynchronous and distributed.
Monitor:
- Consumer lag: How far behind is each consumer? If lag grows, the consumer can’t keep up.
- Event throughput: Events published per second, consumed per second.
- Processing errors: Failed event processing, dead letter queue depth.
- End-to-end latency: Time from event publication to all consumers completing.
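Consumer lag is just the gap between the newest offset in each partition and the consumer's committed offset; what matters for alerting is the trend, not the absolute number. A sketch (offsets here are made-up values; in practice they come from your broker's admin API):

```typescript
// Lag per partition: events published but not yet consumed.
function consumerLag(latestOffset: number, committedOffset: number): number {
  return Math.max(0, latestOffset - committedOffset);
}

// Total lag for a consumer group across all partitions of a topic.
function totalLag(partitions: { latest: number; committed: number }[]): number {
  return partitions.reduce(
    (sum, p) => sum + consumerLag(p.latest, p.committed),
    0,
  );
}

console.log(consumerLag(10_500, 10_420)); // 80
console.log(totalLag([
  { latest: 10_500, committed: 10_420 },
  { latest: 9_900, committed: 9_900 },
])); // 80
```

A steady lag of 80 under load is fine; a lag that grows every time you sample it means the consumer can't keep up.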
When Not to Use Events
Not everything needs to be event-driven:
- Simple CRUD operations: A settings page doesn’t need events
- Synchronous requirements: If the caller needs an immediate, consistent response, synchronous is simpler
- Small systems: If you have one service, events add complexity without benefit
Use events when you have multiple consumers that need to react to the same trigger, or when you need to decouple systems for reliability and scalability. For financial platforms handling real money, that’s almost always the case.
Myles Ndlovu builds algorithmic trading engines, crypto platforms, and payment infrastructure for emerging markets.