Let me tell you why I’m writing this. For years, I struggled with microservices that communicated through direct API calls. It felt like managing a tangled web of dependencies. When one service failed, others would fall like dominoes. Then I discovered a different approach. Today, I want to show you how to build systems that talk through events, not direct calls. This method creates services that are independent, resilient, and much easier to scale. Let’s build something real together. Stick with me, and by the end, you’ll have a working event-driven system you can adapt for your own projects.
Picture this. You run an online store. A customer places an order. What happens next? The system needs to check inventory, process payment, send confirmation emails, and update logistics. If all these services talk directly to each other, a single failure can break the entire chain. There’s a better way. In an event-driven system, the order service simply announces, “An order was created.” Other services listen for this announcement and act independently. No direct calls. No waiting. Just a simple, powerful pattern that changes everything.
So, what exactly is an event? Think of it as a permanent record of something that happened. “Order #1234 was placed at 2:30 PM.” This fact never changes. Services that create these records are publishers. Services that react to them are consumers. They connect through a message broker, which in our case will be Redis Streams. This setup means services don’t need to know about each other. They only need to know about the events.
Ready to get your hands dirty? Let’s start with the foundation. We’ll define our events using Pydantic. This gives us automatic validation and serialization.
# event_models.py
from pydantic import BaseModel, Field
from uuid import uuid4, UUID
from datetime import datetime


class OrderCreated(BaseModel):
    # default_factory gives every event its own ID and timestamp
    event_id: UUID = Field(default_factory=uuid4)
    order_id: UUID
    customer_id: int
    total_amount: float
    items: list
    timestamp: datetime = Field(default_factory=datetime.utcnow)
This simple model defines an order event. Notice that event_id and timestamp are filled in automatically through Field(default_factory=...); a plain default like event_id: UUID = uuid4() would be evaluated only once at import time and give every event the same ID. These fields are crucial for tracking and ordering events later.
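To see the defaults in action, here is a quick sketch, assuming the model above is saved as event_models.py: every instance gets its own event_id and creation time.

# quick_check.py - a minimal sketch showing the auto-generated fields
from uuid import uuid4
from event_models import OrderCreated

a = OrderCreated(order_id=uuid4(), customer_id=1, total_amount=9.99, items=["book"])
b = OrderCreated(order_id=uuid4(), customer_id=2, total_amount=0.0, items=[])

# Each event carries its own identity and timestamp
print(a.event_id != b.event_id)  # True
print(a.timestamp, b.timestamp)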
Now, we need a place to send these events. This is where Redis Streams comes in. It’s not just a cache; it’s a powerful, persistent log. Unlike traditional queues, streams keep a history. Consumers can read at their own pace and even replay past events. Let’s create a publisher service with FastAPI.
# publisher.py
import json
import os
from uuid import uuid4

import redis.asyncio as redis
from fastapi import FastAPI

from event_models import OrderCreated

app = FastAPI()
# Read the Redis URL from the environment so the same code works locally and in Docker
redis_client = redis.from_url(os.getenv("REDIS_URL", "redis://localhost:6379"))


@app.post("/order/")
async def create_order(order_data: dict):
    event = OrderCreated(order_id=uuid4(), **order_data)
    event_dict = event.dict()
    event_dict["timestamp"] = event_dict["timestamp"].isoformat()
    # Add the event to the stream; default=str handles the UUID fields json can't serialize
    await redis_client.xadd("orders_stream", {"data": json.dumps(event_dict, default=str)})
    return {"message": "Order event published", "event_id": str(event.event_id)}
Our FastAPI app takes order data, packages it into an event, and publishes it to a Redis stream called orders_stream. The service’s job ends there. It doesn’t call an inventory service. It doesn’t call a payment service. It just announces the news.
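Want to try it? Here is a quick sketch of exercising the endpoint, assuming the app is running locally (for example via uvicorn publisher:app --port 8000) and httpx is installed; the field values are just placeholders.

# try_it.py - a minimal sketch of posting a sample order
import httpx

order = {"customer_id": 42, "total_amount": 99.5, "items": ["keyboard", "mouse"]}
response = httpx.post("http://localhost:8000/order/", json=order)
print(response.json())  # {"message": "Order event published", "event_id": "..."}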
Here’s a question: what happens if the payment service is down when the order is placed? In a traditional system, the order fails. In our system, the event sits safely in the Redis stream, waiting. When the payment service comes back online, it picks up right where it left off. This is the resilience we gain.
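Don’t take my word for it. With every consumer stopped, the events are still sitting in orders_stream. Here is a quick sketch with redis-py that counts and replays whatever is there; it assumes the publisher above is what wrote the entries.

# inspect_stream.py - a minimal sketch: events persist even with no consumer running
import asyncio
import json

import redis.asyncio as redis


async def main():
    client = redis.from_url("redis://localhost:6379", decode_responses=True)
    print("events waiting:", await client.xlen("orders_stream"))
    # Replay the full history of the stream, oldest to newest
    for message_id, fields in await client.xrange("orders_stream", "-", "+"):
        print(message_id, json.loads(fields["data"])["order_id"])


asyncio.run(main())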
Now, we need consumers. Let’s build a payment processor that listens for those order events.
# payment_consumer.py
import asyncio
import json
import os

import redis.asyncio as redis

from event_models import OrderCreated


async def process_payments():
    redis_client = redis.from_url(
        os.getenv("REDIS_URL", "redis://localhost:6379"), decode_responses=True
    )
    last_id = "0"  # Start from the beginning of the stream
    while True:
        try:
            # Read new events from the stream, blocking for up to 5 seconds
            results = await redis_client.xread({"payments_stream": last_id}, count=1, block=5000)
            if results:
                stream_name, messages = results[0]
                message_id, message_data = messages[0]
                last_id = message_id
                event_data = json.loads(message_data["data"])
                event = OrderCreated(**event_data)
                # Simulate payment processing
                print(f"Processing payment for order {event.order_id}")
                await asyncio.sleep(1)  # Simulate work
                print(f"Payment completed for {event.order_id}")
        except Exception as e:
            print(f"Error in payment consumer: {e}")
            await asyncio.sleep(5)


if __name__ == "__main__":
    asyncio.run(process_payments())
This consumer runs in a loop, watching the payments_stream. The xread command with block=5000 makes it wait up to five seconds for new messages before trying again. When an event arrives, it processes the payment. The last_id variable is key: it remembers the last event read, so nothing is skipped while the process is running. It lives only in memory, though. If the service restarts, it begins again at "0" and re-reads the whole stream, which is fine for a demo but means events get reprocessed. For progress that survives restarts, Redis Streams offers consumer groups, which track the last delivered ID on the server side; there is a sketch of that just below. And do you see how the publisher and consumer are completely separate? They could be written in different languages, hosted on different continents.
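Here is a minimal consumer-group sketch. The group name "payments" and consumer name "worker-1" are just examples; the point is that Redis remembers which messages each group has been handed, and xack tells it a message is done.

# group_consumer.py - a minimal sketch using a consumer group for durable progress
import asyncio
import json

import redis.asyncio as redis


async def main():
    client = redis.from_url("redis://localhost:6379", decode_responses=True)
    # Create the group once; ignore the error if it already exists
    try:
        await client.xgroup_create("payments_stream", "payments", id="0", mkstream=True)
    except redis.ResponseError:
        pass
    while True:
        # ">" means "messages never delivered to this group before"
        results = await client.xreadgroup(
            "payments", "worker-1", {"payments_stream": ">"}, count=1, block=5000
        )
        for _stream, messages in results or []:
            for message_id, fields in messages:
                event = json.loads(fields["data"])
                print("processing", event["order_id"])
                # Acknowledge so Redis stops tracking this message as pending
                await client.xack("payments_stream", "payments", message_id)


asyncio.run(main())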
But how do events get from the orders_stream to the payments_stream? We might want to transform or route events. This is where a simple processor comes in, often called a router or a connector.
# stream_router.py
import asyncio
import json
import os

import redis.asyncio as redis


async def route_orders_to_payments():
    # decode_responses=True so field names come back as str, matching data["data"]
    redis_client = redis.from_url(os.getenv("REDIS_URL", "redis://localhost:6379"), decode_responses=True)
    last_id = "0"
    while True:
        results = await redis_client.xread({"orders_stream": last_id}, count=10)
        for stream, messages in results or []:
            for message_id, data in messages:
                event_data = json.loads(data["data"])
                # Route only if total amount > 0
                if event_data.get("total_amount", 0) > 0:
                    await redis_client.xadd("payments_stream", {"data": data["data"]})
                last_id = message_id
        await asyncio.sleep(0.1)


if __name__ == "__main__":
    asyncio.run(route_orders_to_payments())
This small service reads from the orders_stream and forwards relevant events to the payments_stream. You could add logic here to filter events, enrich them with new data, or split them into different streams. It’s a powerful pattern for managing complex workflows.
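As one example, a hypothetical enrichment step could tag high-value orders before forwarding them; the priority field and the 500 threshold below are made up purely for illustration.

# a minimal enrichment sketch (the "priority" field and 500 threshold are invented examples)
def enrich(event_data: dict) -> dict:
    event_data["priority"] = "high" if event_data.get("total_amount", 0) > 500 else "normal"
    return event_data

# Inside the router loop, you would forward the enriched payload instead:
#     enriched = enrich(event_data)
#     await redis_client.xadd("payments_stream", {"data": json.dumps(enriched)})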
Of course, we want to run all these services easily. Docker Compose is our best friend here.
# docker-compose.yml
version: '3.8'

services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  order_publisher:
    build: ./order_publisher
    ports:
      - "8000:8000"
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis

  payment_consumer:
    build: ./payment_consumer
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
This file defines our Redis broker, our FastAPI publisher, and our payment consumer. Notice the REDIS_URL environment variable: inside Compose, containers reach Redis by its service name (redis), not localhost, which is why the Python services read their connection URL from the environment. With one command, docker-compose up, our entire event-driven ecosystem starts. Each service runs in its own container, isolated yet connected through Redis.
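The build: entries assume each service directory contains a Dockerfile. As a rough sketch for the publisher (the base image, file layout, and dependency list here are assumptions; adjust them to your project), it might look like this:

# order_publisher/Dockerfile - a minimal sketch; adapt to your own layout
FROM python:3.12-slim
WORKDIR /app
RUN pip install --no-cache-dir fastapi uvicorn "redis>=4.2" pydantic
COPY event_models.py publisher.py ./
CMD ["uvicorn", "publisher:app", "--host", "0.0.0.0", "--port", "8000"]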
Now, think about a real application. What if a payment fails? We need a way to handle errors. A common pattern is the dead-letter queue. When a consumer fails to process a message multiple times, it moves that message to a separate stream for manual inspection.
# With error handling - this slots into the consumer where the payment is processed;
# process_payment and PaymentGatewayError stand in for your real payment logic
max_retries = 3
retry_count = 0
while retry_count < max_retries:
    try:
        process_payment(event)
        break  # Success, exit retry loop
    except PaymentGatewayError:
        retry_count += 1
        await asyncio.sleep(2 ** retry_count)  # Exponential backoff

if retry_count == max_retries:
    # Give up and park the raw event for manual inspection
    await redis_client.xadd(
        "payments_dead_letter",
        {"data": message_data["data"], "error": "Max retries exceeded"},
    )
This code tries to process a payment three times. If it keeps failing, it moves the event to a payments_dead_letter stream. An operator can later check this queue to see what went wrong. This prevents one bad event from blocking all others.
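Inspecting the dead-letter stream is just another read. A quick sketch, assuming the entries were written by the handler above:

# inspect_dead_letters.py - a minimal sketch for reviewing failed events
import asyncio
import json

import redis.asyncio as redis


async def main():
    client = redis.from_url("redis://localhost:6379", decode_responses=True)
    for message_id, fields in await client.xrange("payments_dead_letter", "-", "+"):
        event = json.loads(fields["data"])
        print(message_id, event["order_id"], "-", fields["error"])


asyncio.run(main())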
Building systems this way felt like a revelation to me. The loose coupling means teams can work independently. The scalability comes from adding more consumer instances. The resilience is built-in because events persist. It’s not just a technical choice; it’s a way to organize teams and workflows around business events.
The journey from a tangled web of API calls to a clean flow of events is incredibly rewarding. Start small. Model one key business event in your domain. Build a publisher and one consumer. See how it feels. You might just find it changes how you think about building software.
If this guide helped you connect the dots, please share it with a colleague who’s wrestling with microservice complexity. Have you tried an event-driven approach before? What was your biggest challenge? Let me know in the comments—I read every one and learn from your experiences.