How Strawberry and DataLoader Supercharge GraphQL APIs in Python

Discover how Strawberry and DataLoader simplify GraphQL in Python with efficient data fetching and clean, scalable code.

I’ve been building APIs for years, and I kept hitting the same wall. I’d create a REST endpoint that worked perfectly for one client app, only to watch another team need a slightly different data shape. We’d end up with versioned endpoints, over-fetching, or multiple network calls. It felt inefficient. Then I started working with GraphQL, and something clicked. The ability for a client to ask for exactly what it needed in a single request was transformative. But in Python, the experience often felt heavy, with lots of boilerplate code.

That’s why I got excited about Strawberry. It felt different—clean, modern, and built for how we write Python today. So, I dug in. I spent weeks reading documentation, experimenting, and seeing how it could solve real problems. I want to share that with you. Not as a dry lecture, but as a practical guide to building something that’s both powerful and pleasant to work with.

Think about a typical blog platform. You have users, posts, and comments. In a naive setup, fetching a list of posts with their author information triggers a separate database query for each author. Ten posts? That’s eleven queries: one for the posts, then one more for each of the ten authors. This is the classic “N+1” problem, and it kills performance. So, how do we stop it without making a mess of our code?
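
To make that concrete, here’s a rough sketch of the naive pattern, assuming an asyncpg-style connection that exposes db.fetch and db.fetchrow (the same style used in the examples later on):

async def naive_posts_with_authors(db):
    # One query for the posts...
    post_records = await db.fetch("SELECT * FROM posts LIMIT 10")

    posts_with_authors = []
    for record in post_records:
        # ...then one more query per post for its author: 10 posts, 10 extra round trips
        author = await db.fetchrow(
            "SELECT id, username, email FROM users WHERE id = $1",
            record["author_id"],
        )
        posts_with_authors.append((record, author))
    return posts_with_authors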

The answer is a pattern called DataLoader. It’s a batching and caching mechanism. Imagine a smart assistant for your database. Instead of you running to the kitchen for a spoon, then a fork, then a knife, you tell your assistant, “I need all the cutlery.” They make one trip. DataLoader works the same way. It collects all the individual requests for, say, user IDs within a single execution tick, batches them into one query, and then fans out the results.

Let’s build this. First, we define our types. With Strawberry, we use Python dataclasses with type hints. It feels natural.

import strawberry
from datetime import datetime

@strawberry.type
class User:
    id: int
    username: str
    email: str

@strawberry.type
class Post:
    id: int
    title: str
    content: str
    author: User
    created_at: datetime

See how the Post type has an author field of type User? This is where the relationship lives. Now, we need a way to resolve that author field efficiently. We create a DataLoader.

from aiodataloader import DataLoader

async def batch_get_users(db, keys):
    # `keys` is a list of user IDs: [1, 5, 8, ...]
    query = "SELECT id, username, email FROM users WHERE id = ANY($1)"
    records = await db.fetch(query, keys)
    # Return User objects so GraphQL can resolve their fields by attribute,
    # in the same order as the keys (None for any id that wasn't found)
    user_map = {
        r["id"]: User(id=r["id"], username=r["username"], email=r["email"])
        for r in records
    }
    return [user_map.get(key) for key in keys]

class UserLoader(DataLoader):
    def __init__(self, db):
        super().__init__(batch_load_fn=lambda keys: batch_get_users(db, keys))
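Before wiring this into the schema, a quick standalone sketch (the demo coroutine and the user IDs are made up for illustration) shows the two behaviours we get for free:

import asyncio

async def demo(db):
    loader = UserLoader(db)
    # Both load() calls are collected during the same event-loop tick...
    alice, bob = await asyncio.gather(loader.load(1), loader.load(2))
    # ...so batch_get_users runs once, with keys == [1, 2].
    # A repeated load() for a key already seen is served from the loader's
    # per-instance cache, with no second trip to the database.
    alice_again = await loader.load(1)
    return alice, bob, alice_again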

The magic is in the batch_load_fn. The DataLoader automatically gathers all the load(id) calls, passes the pile of IDs to our function, and distributes the results. In our resolver, it becomes beautifully simple.

from strawberry.types import Info

@strawberry.type
class Query:
    @strawberry.field
    async def posts(self, info: Info, limit: int = 10) -> list[Post]:
        db = info.context["db"]
        # Fetch the post data in a single query
        post_records = await db.fetch("SELECT * FROM posts LIMIT $1", limit)

        posts = []
        for record in post_records:
            # This queues up the user fetch on the DataLoader; it doesn't execute yet
            author_promise = info.context["user_loader"].load(record["author_id"])

            posts.append(Post(
                id=record["id"],
                title=record["title"],
                content=record["content"],
                author=author_promise,  # an awaitable; the GraphQL engine awaits it
                created_at=record["created_at"],
            ))
        # The queued user loads are batched into a single query when the
        # GraphQL engine resolves the author fields
        return posts

We pass the promise of a user (the DataLoader’s future result) directly into the Post object. The GraphQL engine underneath Strawberry knows how to await it: by the time it resolves the author field on each post, the DataLoader has already batched all those requests together. One query for all users, not ten.
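
If you’d rather keep the static types honest (Post.author is annotated as User, yet we hand it an awaitable), an alternative is to resolve the author lazily with a field resolver on Post. This is a sketch of that variant, using strawberry.Private to keep the raw author_id off the public schema; the batching behaviour is identical:

@strawberry.type
class Post:
    id: int
    title: str
    content: str
    author_id: strawberry.Private[int]  # stored on the object, hidden from the schema
    created_at: datetime

    @strawberry.field
    async def author(self, info: Info) -> User:
        # Every post asks the per-request loader for its author;
        # the loader batches all of these into one users query
        return await info.context["user_loader"].load(self.author_id)

With this shape, the posts resolver passes author_id=record["author_id"] instead of a promise, and each Post fetches its own author through the shared loader.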

But what about controlling who sees what? Not every field should be visible to every user. Strawberry has a great system for this: permission classes. You can mark a field as requiring special permissions.

from typing import Any

import strawberry
from strawberry.permission import BasePermission
from strawberry.types import Info

class IsAuthenticated(BasePermission):
    message = "User is not authenticated."

    def has_permission(self, source: Any, info: Info, **kwargs) -> bool:
        return info.context.get("current_user") is not None

@strawberry.type
class User:
    id: int
    username: str
    email: str = strawberry.field(permission_classes=[IsAuthenticated])

In this example, anyone can query for a user’s id and username, but to see the email field, you must be authenticated. The permission is checked at the field level, giving you fine-grained control. Have you considered how you’d structure permissions in your own app?
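
One possible answer, as a sketch: layer permission classes so a field demands both authentication and a role. The IsStaff class, the is_staff attribute, and the AdminQuery type here are assumptions for illustration, not anything Strawberry ships:

class IsStaff(BasePermission):
    message = "Staff access required."

    def has_permission(self, source: Any, info: Info, **kwargs) -> bool:
        user = info.context.get("current_user")
        # `is_staff` is a hypothetical flag on your own user model
        return user is not None and getattr(user, "is_staff", False)

@strawberry.type
class AdminQuery:
    @strawberry.field(permission_classes=[IsAuthenticated, IsStaff])
    async def all_users(self, info: Info) -> list[User]:
        db = info.context["db"]
        records = await db.fetch("SELECT id, username, email FROM users")
        return [User(id=r["id"], username=r["username"], email=r["email"]) for r in records]

In a real app you would hang all_users off your root Query type; it is split out here only to keep the sketch short.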

Let’s put it all together in a FastAPI application. The integration is smooth.

from fastapi import Depends, FastAPI
import strawberry
from strawberry.fastapi import GraphQLRouter

# These imports assume your own project layout
from app.database import get_db_session   # yields a per-request connection
from app.dataloaders import UserLoader, PostLoader
from app.schema import Query               # the Query type defined above

async def get_context(db_session=Depends(get_db_session)):
    # context_getter parameters are resolved through FastAPI's dependency injection
    return {
        "db": db_session,
        "user_loader": UserLoader(db_session),
        "post_loader": PostLoader(db_session),
    }

schema = strawberry.Schema(query=Query)  # add mutation=Mutation once you define one
graphql_app = GraphQLRouter(schema, context_getter=get_context)

app = FastAPI()
app.include_router(graphql_app, prefix="/graphql")

The context ensures our DataLoader instances are created fresh for each request, which is crucial for caching correctness. This setup gives you a robust, high-performance GraphQL endpoint. You get clear, type-safe schema definitions, efficient data fetching that avoids common pitfalls, and the tools to build secure, scalable APIs.
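
If you want to watch the batching happen end to end without standing up a server, Strawberry’s schema.execute lets you run a query directly, for example from a test. The smoke_test helper and the injected db connection below are illustrative assumptions:

async def smoke_test(db):
    # Build the context by hand, the same shape get_context produces per request;
    # add whichever loaders your resolvers need
    context = {"db": db, "user_loader": UserLoader(db)}

    query = """
        query {
            posts(limit: 10) {
                title
                author { username }
            }
        }
    """
    result = await schema.execute(query, context_value=context)
    assert result.errors is None
    # However many posts come back, their authors arrive via one batched query
    print(result.data["posts"])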

The shift from thinking in endpoints to thinking in a connected graph of data is powerful. It changes how frontend and backend teams collaborate. They can discuss the data needs precisely, without backend developers having to predict every possible use case upfront. It’s a more collaborative model.

I find this approach liberating. It lets me focus on modeling my business domain cleanly in Python and providing a flexible, efficient data layer. The combination of Strawberry’s clarity and the DataLoader’s smart optimization is hard to beat. It turns a complex performance problem into a manageable pattern.

Give it a try on your next project. Start with a simple type and a query. Add a DataLoader for your first relationship. You might be surprised by how quickly it comes together and how much cleaner your data-fetching logic becomes. I’d love to hear about your experience. Did this help you see GraphQL in Python differently? What was the first problem you solved with it? Share your thoughts in the comments below, and if you found this walk-through useful, please pass it along to another developer who might be wrestling with these same API challenges.

