Django Multi-Layer Caching with Redis and Memcached for Faster Performance
Learn Django multi-layer caching with Redis, Memcached, and query optimization to cut load times, reduce DB strain, and scale faster.
I remember the exact moment I realized my Django application needed a serious caching overhaul. We had just deployed a product listing page that pulled data from three different database tables, and under moderate traffic, response times jumped from 200 milliseconds to nearly four seconds. The database server was gasping. I knew we needed more than a single caching layer — we needed a strategy that intercepted requests at multiple levels, from the database all the way to the user’s browser. That’s when I started building a multi-layer caching system with Redis, Memcached, and careful database query caching.
Have you ever waited for a page to load for more than two seconds? That’s the moment most users leave. Caching is not just about speed; it’s about survival.
Let me walk you through the approach I used. The idea is simple: every layer in your application stack can cache something. The browser caches static assets. Django can cache entire pages or fragments. Redis can store computed results. Memcached can serve as a fast distributed cache for session data. And the ORM can cache query results internally. When you combine these, you build a resilient system that rarely touches the database.
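To make the layering concrete, here is a minimal pure-Python sketch of the lookup order. One dict stands in for process-local memory, a second dict stands in for Redis or Memcached, and a function stands in for the database; the names are illustrative, not Django APIs:

```python
# Stand-in cache layers: fast local memory, then a shared store (Redis in production)
local_cache = {}
shared_cache = {}

def layered_get(key, compute):
    # Layer 1: in-process memory
    if key in local_cache:
        return local_cache[key]
    # Layer 2: shared cache (Redis/Memcached in a real deployment)
    if key in shared_cache:
        local_cache[key] = shared_cache[key]
        return local_cache[key]
    # Last resort: the "database"
    value = compute()
    shared_cache[key] = value
    local_cache[key] = value
    return value

calls = []
def fake_db_query():
    calls.append(1)
    return ['product-a', 'product-b']

layered_get('featured', fake_db_query)
layered_get('featured', fake_db_query)  # second call is served from cache
```

Each layer that answers a request is one less trip to the layer below it; the database only sees the very first miss.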
I start at the lowest level: database query caching. Django’s ORM has built-in mechanisms like select_related and prefetch_related; they aren’t caches exactly, but they cut down the number of queries. For repeated identical queries, though, I use the low-level cache API. Here’s a short example:
from django.core.cache import cache

from .models import Product

def get_featured_products():
    products = cache.get('featured_products')
    if products is None:
        # list() forces evaluation so the results, not the lazy query, are cached
        products = list(Product.objects.filter(featured=True).select_related('category'))
        cache.set('featured_products', products, 600)  # 10 minutes
    return products
That’s the basic pattern. But in a multi-layer setup, you don’t stop there. I also use Redis as the default cache backend. Redis is fast, supports complex data structures, and can handle high concurrency. I configure it in Django’s settings like this:
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'IGNORE_EXCEPTIONS': True,
        },
        'TIMEOUT': 300,
    },
    'memcached': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
        'TIMEOUT': 600,
    },
}
Now I have two cache backends. Why two? Because they serve different purposes. Redis is my primary cache for expensive queries and session data. Memcached is a secondary distributed cache for less critical data that changes often, like user activity counters. If Redis goes down, Memcached can take over some load. And both are faster than hitting the database.
Why would you need two in-memory caches? Think of it like having two chefs in a kitchen. One handles the main course, the other prepares appetizers. If one chef gets sick, you still get something to eat.
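In Django you reach the second backend through django.core.cache.caches['memcached']. The failover idea itself can be sketched in plain Python; this FallbackCache class is a hypothetical illustration, not a production client (on the Redis side, django-redis's IGNORE_EXCEPTIONS setting gives you similar resilience for free):

```python
class DictStore(dict):
    """Minimal stand-in for a healthy cache backend."""
    def set(self, key, value):
        self[key] = value

class DownStore:
    """Stand-in for a backend whose server is unreachable."""
    def get(self, key):
        raise ConnectionError('redis down')
    def set(self, key, value):
        raise ConnectionError('redis down')

class FallbackCache:
    """Try the primary store first; fall back to the secondary on failure."""
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def get(self, key, default=None):
        try:
            value = self.primary.get(key)
        except ConnectionError:
            value = None  # primary (Redis) is down; keep serving
        if value is None:
            value = self.secondary.get(key)  # secondary (Memcached)
        return default if value is None else value

    def set(self, key, value):
        try:
            self.primary.set(key, value)
        except ConnectionError:
            pass  # tolerate a dead primary
        self.secondary.set(key, value)

fallback = FallbackCache(DownStore(), DictStore())
fallback.set('activity_count', 42)  # lands in the secondary despite the outage
```

Reads survive the primary outage because every miss or error falls through to the next store before touching the database.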
I also implement per-view caching for pages that change rarely. For example, a homepage that shows static content. I use the @cache_page decorator with a timeout:
from django.shortcuts import render
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)  # cache the full rendered response for 15 minutes
def homepage(request):
    return render(request, 'home.html', {'featured': get_featured_products()})
But whole-page caching has a limitation: the same cached response is served to every visitor of that URL. That’s fine for anonymous traffic, but authenticated users need personalization. So I use template fragment caching for the parts of the page that are shared:
{% load cache %}
{% cache 500 'sidebar_categories' %}
  <ul>
    {% for category in categories %}
      <li>{{ category.name }}</li>
    {% endfor %}
  </ul>
{% endcache %}
This caches the sidebar for 500 seconds, shared across all users. Because fragments are cached independently, updating one part of the page doesn’t invalidate everything else, and the rest of the template stays fully dynamic.
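When a fragment does depend on the user, the {% cache %} tag accepts extra vary-on arguments after the fragment name, so each user gets an independent copy. A sketch, assuming request.user is available in the template context:

```django
{% load cache %}
{% cache 300 user_greeting request.user.username %}
  <p>Welcome back, {{ request.user.first_name }}!</p>
{% endcache %}
```

Each distinct username yields its own cache entry, so personalization and fragment caching coexist.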
Now let’s talk about invalidation. The hardest part of caching is knowing when to clear old data. I use a combination of TTL (time-to-live) and signals. For example, when a Product is updated, I delete the cached featured products:
from django.core.cache import cache
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import Product

@receiver(post_save, sender=Product)
def clear_product_cache(sender, instance, **kwargs):
    # drop both the shared list and this product's detail entry
    cache.delete('featured_products')
    cache.delete(f'product_detail_{instance.id}')
But signal-based invalidation can get messy. I prefer versioned caching. I store a version number in the cache, and increment it when data changes. Then all cache keys include that version:
def get_cache_version():
    return cache.get('product_version', 1)

def get_product_detail(product_id):
    version = get_cache_version()
    key = f'product_detail_{product_id}_v{version}'
    product = cache.get(key)
    if product is None:
        product = Product.objects.get(id=product_id)
        cache.set(key, product, 3600)
    return product

def invalidate_product_cache():
    try:
        cache.incr('product_version')
    except ValueError:
        # the version key was missing or expired; restart above the default
        cache.set('product_version', 2)
That way I don’t need to delete individual keys. I just bump the version and all old keys become stale. It’s like restarting a clock.
Have you ever had a ghost product appear on your site because the cache wasn’t cleared? That’s a nightmare. Versioned caching eliminates that.
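The effect of a version bump is easy to see in plain Python, with a dict standing in for Redis (the names here are illustrative):

```python
store = {'product_version': 1}

def product_key(product_id):
    # every cache key embeds the current version number
    return f"product_detail_{product_id}_v{store['product_version']}"

old_key = product_key(42)             # 'product_detail_42_v1'
store[old_key] = 'cached product 42'

store['product_version'] += 1         # one write invalidates the whole key family

# the stale entry still sits in the store, but no reader ever asks for it again
```

Redis evicts the orphaned v1 keys once their TTLs expire, so the bump costs one write instead of a scan-and-delete.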
Another technique I use is cache stampede prevention. When a popular cache key expires, many concurrent requests might try to recompute it at the same time, overwhelming the database. I use a locking mechanism with Redis. Here’s a simple implementation:
import time

def get_cached_with_lock(key, func, timeout=300):
    data = cache.get(key)
    if data is not None:
        return data
    # try to acquire a short-lived lock; add() only succeeds if the key is absent
    lock_key = f'{key}_lock'
    if cache.add(lock_key, 1, timeout=10):
        try:
            data = func()
            cache.set(key, data, timeout)
        finally:
            cache.delete(lock_key)
        return data
    # another process is computing; wait briefly and retry
    time.sleep(0.1)
    return get_cached_with_lock(key, func, timeout)
This ensures only one request fetches the data while others wait briefly. It’s like having a bouncer at a club entrance.
For session storage, I use Redis because it’s fast and can persist to disk. In settings:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'
This stores sessions in Redis, so user login state is distributed across all app servers. Memcached can also store sessions, but I prefer Redis for its reliability and built-in expiration.
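If losing every session on a cache flush or restart is unacceptable, Django also ships a write-through variant that backs the cache with the database:

```python
# settings.py: sessions are written to the database and read through the cache
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'
SESSION_CACHE_ALIAS = 'default'
```

Reads stay fast because they hit Redis first; writes cost one extra database query.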
Now, about database query caching. A QuerySet evaluates lazily, and once evaluated it caches its results internally; that’s why iterating the same QuerySet a second time doesn’t hit the database again. But that internal cache lives only as long as the QuerySet object, which usually means a single request. Django doesn’t ship a cross-request QuerySet cache, so for that I wrap the low-level cache API around the query myself. I often cache the result of a prefetched query:
import hashlib

def search_products(query):
    # hash the query string so the cache key is safe and fixed-length
    key = f'search_{hashlib.md5(query.encode()).hexdigest()}'
    results = cache.get(key)
    if results is None:
        # list() forces evaluation; empty result sets are cached too
        results = list(Product.objects.filter(name__icontains=query).prefetch_related('images'))
        cache.set(key, results, 3600)
    return results
The list() forces evaluation and the entire result list is cached. Be careful with large datasets, though.
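One way to stay careful: cache only the fields the page actually needs (with the ORM, that means .values('id', 'name') instead of full model instances). A plain-Python illustration of how much that trims the pickled payload the cache has to store:

```python
import pickle

# fake rows with a heavy field the listing page never displays
rows = [{'id': i, 'name': f'product-{i}', 'description': 'x' * 10_000}
        for i in range(50)]
full_size = len(pickle.dumps(rows))

# keep only what the template renders
slim = [{'id': r['id'], 'name': r['name']} for r in rows]
slim_size = len(pickle.dumps(slim))
```

Smaller entries mean less network traffic to Redis per hit and far less memory pressure on the cache itself.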
I also use django-debug-toolbar to monitor query counts. Before caching, my home page made 47 queries. After implementing multi-layer caching, it made just 2. The difference in user experience was night and day.
So here’s the takeaway: start with the simplest cache – query caching for repeated calls, then add Redis for short-lived data, then Memcached for distributed needs, then page caching for static content, and finally fragment caching for dynamic parts. Combine with versioning and locks. Test with tools like Locust.
If you found this helpful, like and share it with your colleagues. And leave a comment – what caching strategy has saved your production app from crumbling? I’d love to hear about your biggest wins and failures. Let’s keep the conversation going.