
Fast FastAPI boilerplate

Yet another template to speed up your FastAPI development.

[Logo: blue rocket with the FastAPI logo as its window and the word FAST written on it]

Python FastAPI Pydantic PostgreSQL Redis Docker

0. About

FastAPI boilerplate creates an extendable async API using FastAPI, Pydantic V2, SQLAlchemy 2.0 and PostgreSQL:

  • FastAPI: modern Python web framework for building APIs
  • Pydantic V2: the most widely used data validation library for Python, rewritten in Rust (5x-50x faster)
  • SQLAlchemy 2.0: Python SQL toolkit and Object Relational Mapper
  • PostgreSQL: The World's Most Advanced Open Source Relational Database
  • Redis: Open source, in-memory data store used by millions as a cache, message broker and more.
  • ARQ: job queues and RPC in Python with asyncio and redis.
  • Docker Compose: with a single command, create and start all the services from your configuration.

1. Features

  • โšก๏ธ Fully async
  • ๐Ÿš€ Pydantic V2 and SQLAlchemy 2.0
  • ๐Ÿ” User authentication with JWT
  • ๐Ÿฌ Easy redis caching
  • ๐Ÿ‘œ Easy client-side caching
  • ๐Ÿšฆ ARQ integration for task queue
  • โš™๏ธ Efficient querying (only queries what's needed)
  • โŽ˜ Out of the box pagination support
  • ๐Ÿ›‘ Rate Limiter dependency
  • ๐Ÿ‘ฎ FastAPI docs behind authentication and hidden based on the environment
  • ๐Ÿฆพ Easily extendable
  • ๐Ÿคธโ€โ™‚๏ธ Flexible
  • ๐Ÿšš Easy running with docker compose

2. Contents

  0. About
  1. Features
  2. Contents
  3. Prerequisites
    1. Environment Variables (.env)
    2. Docker Compose
    3. From Scratch
  4. Usage
    1. Docker Compose
    2. From Scratch
      1. Packages
      2. Running PostgreSQL With Docker
      3. Running Redis With Docker
      4. Running the API
    3. Creating the first superuser
    4. Database Migrations
  5. Extending
    1. Project Structure
    2. Database Model
    3. SQLAlchemy Models
    4. Pydantic Schemas
    5. Alembic Migrations
    6. CRUD
    7. Routes
      1. Paginated Responses
    8. Caching
    9. More Advanced Caching
    10. ARQ Job Queues
    11. Rate Limiting
    12. Running
  6. Running in Production
  7. Testing
  8. Contributing
  9. References
  10. License
  11. Contact

3. Prerequisites

Start by using the template, naming the new repository whatever you want.

[Screenshot: clicking the "Use this template" button, then the "Create a new repository" option]

Then clone your created repository (I'm using the base repository URL in the example):

git clone https://github.com/igormagalhaesr/FastAPI-boilerplate

3.1 Environment Variables (.env)

Then create a .env file (the docker compose services expect it at ./src/.env):

touch .env

Inside of .env, create the following app settings variables:

# ------------- app settings ------------- 
APP_NAME="Your app name here"
APP_DESCRIPTION="Your app description here"
APP_VERSION="0.1"
CONTACT_NAME="Your name"
CONTACT_EMAIL="Your email"
LICENSE_NAME="The license you picked"

For the database (if you don't have a database yet, see 4.2.2 Running PostgreSQL With Docker below), create:

# ------------- database -------------
POSTGRES_USER="your_postgres_user"
POSTGRES_PASSWORD="your_password"
POSTGRES_SERVER="your_server" # default localhost
POSTGRES_PORT=5432 
POSTGRES_DB="your_db"

For crypt: Start by running

openssl rand -hex 32

And then create in .env:

# ------------- crypt -------------
SECRET_KEY= # result of openssl rand -hex 32
ALGORITHM= # pick an algorithm, default HS256
ACCESS_TOKEN_EXPIRE_MINUTES= # minutes until token expires, default 30

Then for the first admin user:

# ------------- admin -------------
ADMIN_NAME="your_name"
ADMIN_EMAIL="your_email"
ADMIN_USERNAME="your_username"
ADMIN_PASSWORD="your_password"

For redis caching:

# ------------- redis -------------
REDIS_CACHE_HOST="your_host" # default "localhost", if using docker compose you should use "redis"
REDIS_CACHE_PORT=6379

And for client-side caching:

# ------------- client-side cache -------------
CLIENT_CACHE_MAX_AGE=60 # default=60 (seconds)

For ARQ Job Queues:

# ------------- redis queue -------------
REDIS_QUEUE_HOST="your_host" # default "localhost", if using docker compose you should use "redis"
REDIS_QUEUE_PORT=6379

Warning

You may use the same redis instance for both caching and queueing while developing, but in production the recommendation is to use two separate containers.

To create the first tier:

# ------------- first tier -------------
TIER_NAME="free"

For the rate limiter:

# ------------- redis rate limit -------------
REDIS_RATE_LIMIT_HOST="localhost"   # default="localhost"
REDIS_RATE_LIMIT_PORT=6379          # default=6379


# ------------- default rate limit settings -------------
DEFAULT_RATE_LIMIT_LIMIT=10         # default=10
DEFAULT_RATE_LIMIT_PERIOD=3600      # default=3600

For tests (optional to run):

# ------------- test -------------
TEST_NAME="Tester User"
TEST_EMAIL="[email protected]"
TEST_USERNAME="testeruser"
TEST_PASSWORD="Str1ng$t"

And finally, the environment:

# ------------- environment -------------
ENVIRONMENT="local"

ENVIRONMENT can be one of local, staging or production; it defaults to local and changes the behavior of the API docs endpoints:

  • local: /docs, /redoc and /openapi.json available
  • staging: /docs, /redoc and /openapi.json available for superusers
  • production: /docs, /redoc and /openapi.json not available
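Under the hood, this kind of switching is typically done by pointing FastAPI's docs URLs somewhere conditional. A minimal sketch of the idea (not the boilerplate's exact code), assuming the ENVIRONMENT value is available at startup:

# minimal sketch, not the boilerplate's exact code
from fastapi import FastAPI

ENVIRONMENT = "local"  # in the boilerplate this comes from the settings object

docs_enabled = ENVIRONMENT != "production"
app = FastAPI(
    docs_url="/docs" if docs_enabled else None,
    redoc_url="/redoc" if docs_enabled else None,
    openapi_url="/openapi.json" if docs_enabled else None,
)
# the staging behavior (docs visible to superusers only) additionally
# requires serving the docs from a custom, auth-guarded route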

3.2 Docker Compose (preferred)

To run with docker compose, ensure you have docker and docker compose installed. Then, while in the base project directory (FastAPI-boilerplate here), run:

docker compose up

You should have a web container, a postgres container, a worker container and a redis container running.
Then head to http://127.0.0.1:8000/docs.

3.3 From Scratch

Install poetry:

pip install poetry

4. Usage

4.1 Docker Compose

If you used docker compose, your setup is done. You just need to ensure that when you run (while in the base folder):

docker compose up

You get the following outputs (in addition to many other outputs):

fastapi-boilerplate-worker-1  | ... redis_version=x.x.x mem_usage=999K clients_connected=1 db_keys=0
...
fastapi-boilerplate-db-1      | ... [1] LOG:  database system is ready to accept connections
...
fastapi-boilerplate-web-1     | INFO:     Application startup complete.

So you may skip to 5. Extending.

4.2 From Scratch

4.2.1. Packages

In the src directory, run the following to install the required packages:

poetry install

Ensure it runs without any problems.

4.2.2. Running PostgreSQL With Docker

Note

If you already have a PostgreSQL running, you may skip this step.

Install docker if you don't have it yet, then run:

docker pull postgres

And pick the port, name, user and password, replacing the fields:

docker run -d \
    -p {PORT}:{PORT} \
    --name {NAME} \
    -e POSTGRES_PASSWORD={PASSWORD} \
    -e POSTGRES_USER={USER} \
    postgres

Such as:

docker run -d \
    -p 5432:5432 \
    --name postgres \
    -e POSTGRES_PASSWORD=1234 \
    -e POSTGRES_USER=postgres \
    postgres
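You can then check that the container accepts connections (using the example name and user above):

docker exec -it postgres psql -U postgres -c "SELECT 1;"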

4.2.3. Running Redis With Docker

Note

If you already have a redis running, you may skip this step.

Install docker if you don't have it yet, then run:

docker pull redis:alpine

And pick the name and port, replacing the fields:

docker run -d \
  --name {NAME} \
  -p {PORT}:{PORT} \
  redis:alpine

Such as:

docker run -d \
  --name redis \
  -p 6379:6379 \
  redis:alpine
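You can then check that redis is responding (using the example name above):

docker exec -it redis redis-cli ping
# expected output: PONG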

4.2.4. Running the API

While in the src folder, run the following to start the application with the uvicorn server:

poetry run uvicorn app.main:app --reload

Tip

The --reload flag enables auto-reload once you change (and save) something in the project.

4.3 Creating the first superuser

4.3.1 Docker Compose

Warning

Make sure the database and tables are created before running create_superuser (the database should be running, and the API should have run at least once).

If you are using docker compose, you should uncomment this part of the docker-compose.yml:

  # #-------- uncomment to create first superuser --------
  # create_superuser:
  #   build: 
  #     context: .
  #     dockerfile: Dockerfile
  #   env_file:
  #     - ./src/.env
  #   depends_on:
  #     - db
  #   command: python -m src.scripts.create_first_superuser
  #   volumes:
  #     - ./src:/code/src

So that you get:

  #-------- uncomment to create first superuser --------
  create_superuser:
    build: 
      context: .
      dockerfile: Dockerfile
    env_file:
      - ./src/.env
    depends_on:
      - db
    command: python -m src.scripts.create_first_superuser
    volumes:
      - ./src:/code/src

While in the base project folder, run the following to start the services:

docker-compose up -d

It will automatically run the create_superuser script as well, but if you ever want to rerun it:

docker-compose run --rm create_superuser

To stop the create_superuser service:

docker-compose stop create_superuser

4.3.2 From Scratch

While in the src folder, run (after you started the application at least once to create the tables):

poetry run python -m scripts.create_first_superuser

4.3.3 Creating the first tier

Warning

Make sure the database and tables are created before running create_tier (the database should be running, and the API should have run at least once).

Creating the first tier is similar: just replace the create_superuser service with create_tier (for docker compose), or the create_first_superuser script with create_first_tier (from scratch). If using docker compose, do not forget to uncomment the create_tier service in docker-compose.yml.

4.4 Database Migrations

While in the src folder, generate the Alembic migration:

poetry run alembic revision --autogenerate

And to apply the migration:

poetry run alembic upgrade head

Note

If you do not have poetry installed, you may still run these commands directly after running pip install alembic.
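You may also name the revision with alembic's standard -m flag, which makes the migration history easier to read:

poetry run alembic revision --autogenerate -m "create entity table"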

5. Extending

5.1 Project Structure

First, you may want to take a look at the project structure and understand what each file is doing.

.                                      # FastAPI-boilerplate folder. Rename it to suit your project name
├── Dockerfile                         # Dockerfile for building the application container.
├── LICENSE.md                         # License file for the project.
├── README.md                          # Project README providing information and instructions.
├── docker-compose.yml                 # Docker Compose file for defining multi-container applications.
│
└── src                                # Source code directory.
    ├── __init__.py                    # Initialization file for the src package.
    ├── alembic.ini                    # Configuration file for Alembic (database migration tool).
    ├── poetry.lock                    # Poetry lock file specifying exact versions of dependencies.
    ├── pyproject.toml                 # Configuration file for Poetry, lists project dependencies.
    │
    ├── app                            # Main application directory.
    │   ├── __init__.py                # Initialization file for the app package.
    │   ├── main.py                    # Entry point that imports and creates the FastAPI application instance.
    │   ├── worker.py                  # Worker script for handling background tasks.
    │   │
    │   ├── api                        # Folder containing API-related logic.
    │   │   ├── __init__.py
    │   │   ├── dependencies.py        # Defines dependencies that can be reused across the API endpoints.
    │   │   ├── exceptions.py          # Contains custom exceptions for the API.
    │   │   ├── paginated.py           # Utilities for paginated responses in APIs.
    │   │   │
    │   │   └── v1                     # Version 1 of the API.
    │   │       ├── __init__.py
    │   │       ├── login.py           # API routes related to user login.
    │   │       ├── logout.py          # API routes related to user logout (token blacklist).
    │   │       ├── posts.py           # API routes related to posts.
    │   │       ├── rate_limits.py     # API routes for rate limiting features.
    │   │       ├── tasks.py           # API routes related to background tasks.
    │   │       ├── tiers.py           # API routes for handling different user tiers.
    │   │       └── users.py           # API routes related to user management.
    │   │
    │   ├── core                       # Core utilities and configurations for the application.
    │   │   ├── __init__.py
    │   │   ├── cache.py               # Utilities related to caching.
    │   │   ├── config.py              # Application configuration settings.
    │   │   ├── database.py            # Database connectivity and session management.
    │   │   ├── exceptions.py          # Contains core custom exceptions for the application.
    │   │   ├── logger.py              # Logging utilities.
    │   │   ├── models.py              # Base models for the application.
    │   │   ├── queue.py               # Utilities related to task queues.
    │   │   ├── rate_limit.py          # Rate limiting utilities and configurations.
    │   │   ├── security.py            # Security utilities like password hashing and token generation.
    │   │   └── setup.py               # Defines the settings and the FastAPI application instance.
    │   │
    │   ├── crud                       # CRUD operations for the application.
    │   │   ├── __init__.py
    │   │   ├── crud_base.py           # Base CRUD operations class that can be extended by other CRUD modules.
    │   │   ├── crud_posts.py          # CRUD operations for posts.
    │   │   ├── crud_rate_limit.py     # CRUD operations for rate limiting configurations.
    │   │   ├── crud_tier.py           # CRUD operations for user tiers.
    │   │   ├── crud_token_blacklist.py # CRUD operations for the token blacklist.
    │   │   ├── crud_users.py          # CRUD operations for users.
    │   │   └── helper.py              # Helper functions for CRUD operations.
    │   │
    │   ├── models                     # ORM models for the application.
    │   │   ├── __init__.py
    │   │   ├── post.py                # ORM model for posts.
    │   │   ├── rate_limit.py          # ORM model for rate limiting configurations.
    │   │   ├── tier.py                # ORM model for user tiers.
    │   │   ├── token_blacklist.py     # ORM model for the token blacklist.
    │   │   └── user.py                # ORM model for users.
    │   │
    │   ├── schemas                    # Pydantic schemas for data validation.
    │   │   ├── __init__.py
    │   │   ├── job.py                 # Schemas related to background jobs.
    │   │   ├── post.py                # Schemas related to posts.
    │   │   ├── rate_limit.py          # Schemas for rate limiting configurations.
    │   │   ├── tier.py                # Schemas for user tiers.
    │   │   ├── token_blacklist.py     # Schemas for the token blacklist.
    │   │   └── user.py                # Schemas related to users.
    │   │
    │   └── logs                       # Directory for log files.
    │       └── app.log                # Application log file.
    │
    ├── migrations                     # Directory for Alembic migrations.
    │   ├── README                     # General info and guidelines for migrations.
    │   ├── env.py                     # Environment configurations for Alembic.
    │   ├── script.py.mako             # Template script for migration generation.
    │   │
    │   └── versions                   # Folder containing individual migration scripts.
    │       └── README.MD
    │
    ├── scripts                        # Utility scripts for the project.
    │   ├── __init__.py
    │   ├── create_first_superuser.py  # Script to create the first superuser in the application.
    │   └── create_first_tier.py       # Script to create the first user tier in the application.
    │
    └── tests                          # Directory containing all the tests.
        ├── __init__.py                # Initialization file for the tests package.
        ├── conftest.py                # Configuration and fixtures for pytest.
        ├── helper.py                  # Helper functions for writing tests.
        └── test_user.py               # Tests related to the user model and endpoints.

5.2 Database Model

Create the new entities and relationships and add them to the model:

[diagram: database model]

5.2.1 Token Blacklist

Note that this table is used to blacklist JWT tokens (it's how you log a user out):

[diagram: token blacklist table]

5.3 SQLAlchemy Models

Inside app/models, create a new entity.py for each new entity (replacing entity with the name) and define the attributes according to SQLAlchemy 2.0 standards:

Warning

Note that since it inherits from Base, the new model is mapped as a Python dataclass, so optional attributes (arguments with a default value) should be defined after required attributes (see the example after the code block below).

from sqlalchemy import String, DateTime
from sqlalchemy.orm import Mapped, mapped_column, relationship

from app.core.database import Base

class Entity(Base):
  __tablename__ = "entity"

  id: Mapped[int] = mapped_column(
    "id", autoincrement=True, nullable=False, unique=True, primary_key=True, init=False
  )
  name: Mapped[str] = mapped_column(String(30))
  ...
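For example, if Entity also had an optional description, it would have to come after the required name (a sketch, not part of the boilerplate):

from sqlalchemy import String
from sqlalchemy.orm import Mapped, mapped_column

from app.core.database import Base

class Entity(Base):
  __tablename__ = "entity"

  id: Mapped[int] = mapped_column(
    "id", autoincrement=True, nullable=False, unique=True, primary_key=True, init=False
  )
  name: Mapped[str] = mapped_column(String(30))  # required: no default
  description: Mapped[str | None] = mapped_column(String(255), default=None)  # optional: defined last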

5.4 Pydantic Schemas

Inside app/schemas, create a new entity.py for each new entity (replacing entity with the name) and create the schemas according to Pydantic V2 standards:

from datetime import datetime
from typing import Annotated

from pydantic import BaseModel, ConfigDict, EmailStr, Field, HttpUrl

class EntityBase(BaseModel):
  name: Annotated[
    str,
    Field(min_length=2, max_length=30, examples=["Entity Name"])
  ]
  ...

class Entity(EntityBase):
  ...

class EntityRead(EntityBase):
  ...

class EntityCreate(EntityBase):
  ...

class EntityCreateInternal(EntityCreate):
  ...

class EntityUpdate(BaseModel):
  ...

class EntityUpdateInternal(BaseModel):
  ...

class EntityDelete(BaseModel):
    model_config = ConfigDict(extra='forbid')

    is_deleted: bool
    deleted_at: datetime

5.5 Alembic Migrations

Then, while in the src folder, run Alembic migrations:

poetry run alembic revision --autogenerate

And to apply the migration

poetry run alembic upgrade head

5.6 CRUD

Inside app/crud, create a new crud_entities.py inheriting from CRUDBase for each new entity:

from app.crud.crud_base import CRUDBase
from app.models.entity import Entity
from app.schemas.entity import EntityCreateInternal, EntityUpdate, EntityUpdateInternal, EntityDelete

CRUDEntity = CRUDBase[Entity, EntityCreateInternal, EntityUpdate, EntityUpdateInternal, EntityDelete]
crud_entity = CRUDEntity(Entity)

So, for users:

# crud_users.py
from app.crud.crud_base import CRUDBase
from app.models.user import User
from app.schemas.user import UserCreateInternal, UserUpdate, UserUpdateInternal, UserDelete

CRUDUser = CRUDBase[User, UserCreateInternal, UserUpdate, UserUpdateInternal, UserDelete]
crud_users = CRUDUser(User)

When actually using the crud in an endpoint, to get data you just pass the database session and the attributes as kwargs:

# Here I'm getting the first user with email == user.email (email is unique in this case)
user = await crud_users.get(db=db, email=user.email)

To get a list of objects with the attributes, you should use the get_multi:

# Here I'm getting at most 100 users with the name 'User Userson', skipping the first 3
users = await crud_users.get_multi(
  db=db,
  offset=3,
  limit=100,
  name="User Userson"
)

Warning

Note that get_multi returns a python dict with the following structure:

{
  "data": [
    {
      "id": 4,
      "name": "User Userson",
      "username": "userson4",
      "email": "[email protected]",
      "profile_image_url": "https://profileimageurl.com"
    },
    {
      "id": 5,
      "name": "User Userson",
      "username": "userson5",
      "email": "[email protected]",
      "profile_image_url": "https://profileimageurl.com"
    }
  ],
  "total_count": 2,
  "has_more": false,
  "page": 1,
  "items_per_page": 10
}

To create, you pass a CreateSchemaType object with the attributes, such as a UserCreate pydantic schema:

from app.schemas.user import UserCreate

# Creating the object
user_internal = UserCreate(
  name="user",
  username="myusername",
  email="[email protected]"
)

# Passing the object to be created
await crud_users.create(db=db, object=user_internal)

To just check if there is at least one row that matches a certain set of attributes, you should use exists:

# This queries only the email field
# It returns True if there's at least one match, False if there is none
await crud_users.exists(db=db, email="user@example.com")

You can also get the count of a certain object with the specified filter:

# Here I'm getting the count of users with the name 'User Userson'
user_count = await crud_users.count(
  db=db,
  name="User Userson"
)

To update, you pass an object (which may be a pydantic schema or just a regular dict) and the kwargs; the rows that match your kwargs will be updated with the object's values.

# Here I'm updating the user with username == "myusername".
# I'll change their name to "Updated Name"
await crud_users.update(db=db, object={"name": "Updated Name"}, username="myusername")

To delete we have two options:

  • db_delete: actually deletes the row from the database
  • delete:
    • adds "is_deleted": True and deleted_at: datetime.utcnow() if the model inherits from PersistentDeletion (performs a soft delete), but keeps the object in the database.
    • actually deletes the row from the database if the model does not inherit from PersistentDeletion
# Here I'll just change is_deleted to True (soft delete)
await crud_users.delete(db=db, username="myusername")

# Here I actually delete it from the database
await crud_users.db_delete(db=db, username="myusername")

More Efficient Selecting

For the get and get_multi methods we have the option to define a schema_to_select attribute, which is what actually makes the queries more efficient. When you pass a pydantic schema (preferred) or a list of the names of the attributes in schema_to_select to the get or get_multi methods, only the attributes in the schema will be selected.

from app.schemas.user import UserRead

# Here it's selecting all of the user's data
await crud_users.get(db=db, username="myusername")

# Now it's only selecting the data that is in UserRead.
# Since that's my response_model, it's all I need
await crud_users.get(db=db, username="myusername", schema_to_select=UserRead)

5.7 Routes

Inside app/api/v1, create a new entities.py file and create the desired routes

from typing import Annotated

import fastapi
from fastapi import Depends, Request
from sqlalchemy.ext.asyncio import AsyncSession

from app.core.database import async_get_db
from app.crud.crud_entities import crud_entity
from app.schemas.entity import EntityRead
...

router = fastapi.APIRouter(tags=["entities"])

@router.get("/entities/{id}", response_model=EntityRead)
async def read_entity(
  request: Request,
  id: int,
  db: Annotated[AsyncSession, Depends(async_get_db)]
):
  entity = await crud_entity.get(db=db, id=id)

  return entity

...

Then in app/api/v1/__init__.py add the router such as:

from fastapi import APIRouter
from app.api.v1.entities import router as entities_router
...

router = APIRouter(prefix="/v1") # this should be there already
...
router.include_router(entities_router)

5.7.1 Paginated Responses

With the get_multi method we get a python dict with full support for pagination:

{
  "data": [
    {
      "id": 4,
      "name": "User Userson",
      "username": "userson4",
      "email": "[email protected]",
      "profile_image_url": "https://profileimageurl.com"
    },
    {
      "id": 5,
      "name": "User Userson",
      "username": "userson5",
      "email": "[email protected]",
      "profile_image_url": "https://profileimageurl.com"
    }
  ],
  "total_count": 2,
  "has_more": false,
  "page": 1,
  "items_per_page": 10
} 

And in the endpoint, we can import from app/api/paginated the following functions and Pydantic Schema:

from app.api.paginated import (
  PaginatedListResponse, # What you'll use as a response_model to validate
  paginated_response,    # Creates a paginated response based on the parameters
  compute_offset         # Calculates the offset for pagination ((page - 1) * items_per_page)
)
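For example, page 3 with 10 items per page skips the first 20 rows:

compute_offset(3, 10)  # (3 - 1) * 10 = 20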

Then let's create the endpoint:

import fastapi

from app.schemas.entity import EntityRead
...

@router.get("/entities", response_model=PaginatedListResponse[EntityRead])
async def read_entities(
    request: Request, 
    db: Annotated[AsyncSession, Depends(async_get_db)],
    page: int = 1,
    items_per_page: int = 10
):
    entities_data = await crud_entity.get_multi(
        db=db,
        offset=compute_offset(page, items_per_page),
        limit=items_per_page,
        schema_to_select=EntityRead,
        is_deleted=False
    )
    
    return paginated_response(
        crud_data=entities_data, 
        page=page,
        items_per_page=items_per_page
    )

5.8 Caching

The cache decorator allows you to cache the results of FastAPI endpoint functions, enhancing response times and reducing the load on your application by storing and retrieving data in a cache.

Caching the response of an endpoint is really simple: just apply the cache decorator to the endpoint function.

Warning

Note that you should always declare request as a parameter of your endpoint function if you plan to use the cache decorator.

...
from app.core.cache import cache

@app.get("/sample/{my_id}")
@cache(
    key_prefix="sample_data",
    expiration=3600,
    resource_id_name="my_id"
)
async def sample_endpoint(request: Request, my_id: int):
    # Endpoint logic here
    return {"data": "my_data"}

The way it works is:

  • the data is saved in redis with the following cache key: sample_data:{my_id}
  • then the time to expire is set to 3600 seconds (which is also the default)

Another option is not passing the resource_id_name, but passing the resource_id_type (default int):

...
from app.core.cache import cache

@app.get("/sample/{my_id}")
@cache(
    key_prefix="sample_data",
    resource_id_type=int
)
async def sample_endpoint(request: Request, my_id: int):
    # Endpoint logic here
    return {"data": "my_data"}

In this case, what will happen is:

  • the resource_id will be inferred from the keyword arguments (my_id in this case)
  • the data is saved in redis with the following cache key: sample_data:{my_id}
  • then the time to expire is set to 3600 seconds (that's the default)

Passing resource_id_name is usually preferred.
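If you want to confirm entries are being cached, you can list the matching keys directly in redis (assuming the redis container from the examples above; KEYS is fine for a quick look in development):

docker exec -it redis redis-cli KEYS "sample_data:*"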

5.9 More Advanced Caching

The behaviour of the cache decorator changes based on the request method of your endpoint: it caches the result of GET endpoints, and it invalidates the cache with the same key_prefix and id when applied to other methods (PATCH, DELETE).

Invalidating Extra Keys

If you also want to invalidate cache with a different key, you can use the decorator with the to_invalidate_extra variable.

In the following example, I want to invalidate the cache for a certain post, since I'm deleting it, but I also want to invalidate the cache for the list of that user's posts, so it will not be out of sync.

# The cache here will be saved as "{username}_posts:{username}":
@router.get("/{username}/posts", response_model=List[PostRead])
@cache(key_prefix="{username}_posts", resource_id_name="username")
async def read_posts(
    request: Request,
    username: str, 
    db: Annotated[AsyncSession, Depends(async_get_db)]
):
    ...

...

# Invalidating cache for the former endpoint by just passing the key_prefix and id as a dictionary:
@router.delete("/{username}/post/{id}")
@cache(
    "{username}_post_cache", 
    resource_id_name="id", 
    to_invalidate_extra={"{username}_posts": "{username}"} # also invalidate "{username}_posts:{username}" cache
)
async def erase_post(
    request: Request, 
    username: str,
    id: int,
    current_user: Annotated[UserRead, Depends(get_current_user)],
    db: Annotated[AsyncSession, Depends(async_get_db)]
):
    ...

# And now I'll also invalidate when I update the post:
@router.patch("/{username}/post/{id}", response_model=PostRead)
@cache(
    "{username}_post_cache", 
    resource_id_name="id", 
    to_invalidate_extra={"{username}_posts": "{username}"} 
)
async def patch_post(
    request: Request,
    username: str,
    id: int,
    values: PostUpdate,
    current_user: Annotated[UserRead, Depends(get_current_user)],
    db: Annotated[AsyncSession, Depends(async_get_db)]
):
    ...

Warning

Note that adding to_invalidate_extra will not work for GET requests.

Invalidate Extra By Pattern

Let's assume we have an endpoint with a paginated response, such as:

@router.get("/{username}/posts", response_model=PaginatedListResponse[PostRead])
@cache(
    key_prefix="{username}_posts:page_{page}:items_per_page:{items_per_page}", 
    resource_id_name="username",
    expiration=60
)
async def read_posts(
    request: Request,
    username: str,
    db: Annotated[AsyncSession, Depends(async_get_db)],
    page: int = 1,
    items_per_page: int = 10
):
    db_user = await crud_users.get(db=db, schema_to_select=UserRead, username=username, is_deleted=False)
    if not db_user:
        raise HTTPException(status_code=404, detail="User not found")

    posts_data = await crud_posts.get_multi(
        db=db,
        offset=compute_offset(page, items_per_page),
        limit=items_per_page,
        schema_to_select=PostRead,
        created_by_user_id=db_user["id"],
        is_deleted=False
    )

    return paginated_response(
        crud_data=posts_data, 
        page=page, 
        items_per_page=items_per_page
    )

Just passing to_invalidate_extra will not work to invalidate this cache, since the key will change based on the page and items_per_page values. To overcome this we may use the pattern_to_invalidate_extra parameter:

@router.patch("/{username}/post/{id}")
@cache(
    "{username}_post_cache", 
    resource_id_name="id", 
    pattern_to_invalidate_extra=["{username}_posts:*"]
)
async def patch_post(
    request: Request,
    username: str,
    id: int,
    values: PostUpdate,
    current_user: Annotated[UserRead, Depends(get_current_user)],
    db: Annotated[AsyncSession, Depends(async_get_db)]
):
...

Now it will invalidate all caches with a key that matches the pattern "{username}_posts:*", which will work for the paginated responses.

Caution

Using pattern_to_invalidate_extra can be resource-intensive on large datasets. Use it judiciously and consider the potential impact on Redis performance. Be cautious with patterns that could match a large number of keys, as deleting many keys simultaneously may impact the performance of the Redis server.
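Before relying on a broad pattern, you can preview which keys it would match using redis-cli's --scan option, which iterates with SCAN instead of the blocking KEYS (here with myusername as an example username):

docker exec -it redis redis-cli --scan --pattern "myusername_posts:*"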

Client-side Caching

For client-side caching, all you have to do is let the Settings class defined in app/core/config.py inherit from the ClientSideCacheSettings class. You can set the CLIENT_CACHE_MAX_AGE value in .env, it defaults to 60 (seconds).

5.10 ARQ Job Queues

Create the background task in app/worker.py:

import asyncio
...

# -------- background tasks --------
async def sample_background_task(ctx, name: str) -> str:
    await asyncio.sleep(5)
    return f"Task {name} is complete!"

Then add the function to the WorkerSettings class functions variable:

# -------- class --------
...
class WorkerSettings:
    functions = [sample_background_task]
    ...

Add the task to be enqueued in a POST endpoint and get the info in a GET:

...
@router.post("/task", response_model=Job, status_code=201)
async def create_task(message: str):
    job = await queue.pool.enqueue_job("sample_background_task", message)
    return {"id": job.job_id}


@router.get("/task/{task_id}")
async def get_task(task_id: str):
    job = ArqJob(task_id, queue.pool)
    return await job.info()

And finally, run the worker in parallel with your FastAPI application.

If you are using docker compose, the worker is already running. If you are doing it from scratch, run while in the src folder:

poetry run arq app.worker.WorkerSettings
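Once the API and the worker are both up, enqueueing a task and polling its status looks like this (assuming the router is mounted under api/v1/tasks, as in the rate limit examples below):

curl -X POST 'http://127.0.0.1:8000/api/v1/tasks/task?message=test'
# -> {"id": "<job_id>"}

curl 'http://127.0.0.1:8000/api/v1/tasks/task/<job_id>'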

5.11 Rate Limiting

To limit how many times a user can make a request in a certain interval of time (very useful to create subscription plans or just to protect your API against DDoS), you may just use the rate_limiter dependency:

from fastapi import Depends

from app.api.dependencies import rate_limiter
from app.core import queue
from app.schemas.job import Job

@router.post("/task", response_model=Job, status_code=201, dependencies=[Depends(rate_limiter)])
async def create_task(message: str):
    job = await queue.pool.enqueue_job("sample_background_task", message)
    return {"id": job.job_id}

By default, if no token is passed in the header (that is, the user is not authenticated), the user will be limited by their IP address, with the default limit (how many times the user can make this request every period) and period (time in seconds) defined in .env.

Even though this is useful, the real power comes from creating tiers (categories of users) and standard rate_limits (limits and periods defined for specific paths, that is, endpoints) for these tiers.

All of the tier and rate_limit models, schemas, and endpoints are already created in the respective folders (and usable only by superusers). You may use the create_tier script to create the first tier (it uses the .env variable TIER_NAME, which is all you need to create a tier) or just use the api:

Here I'll create a free tier:

(passing name = free in the API request body)

And a pro tier:

(passing name = pro in the API request body)

Then I'll associate a rate_limit for the path api/v1/tasks/task for each of them:

Warning

Do not forget to add api/v1/... or any other prefix to the beginning of your path. For the structure of the boilerplate, api/v1/<rest_of_the_path>.

1 request every hour (3600 seconds) for the free tier:

(passing path=api/v1/tasks/task, limit=1, period=3600, name=api_v1_tasks:1:3600 to the free tier rate limit)

10 requests every hour for the pro tier:

(passing path=api/v1/tasks/task, limit=10, period=3600, name=api_v1_tasks:10:3600 to the pro tier rate limit)

Now let's read all the tiers available (GET api/v1/tiers):

{
  "data": [
    {
      "name": "free",
      "id": 1,
      "created_at": "2023-11-11T05:57:25.420360"
    },
    {
      "name": "pro",
      "id": 2,
      "created_at": "2023-11-12T00:40:00.759847"
    }
  ],
  "total_count": 2,
  "has_more": false,
  "page": 1,
  "items_per_page": 10
}

And read the rate_limits for the pro tier to ensure it's working (GET api/v1/tier/pro/rate_limits):

{
  "data": [
    {
      "path": "api_v1_tasks_task",
      "limit": 10,
      "period": 3600,
      "id": 1,
      "tier_id": 2,
      "name": "api_v1_tasks:10:3600"
    }
  ],
  "total_count": 1,
  "has_more": false,
  "page": 1,
  "items_per_page": 10
}

Now, whenever an authenticated user makes a POST request to api/v1/tasks/task, they'll use the quota defined by their tier. You may check this by getting a token from the api/v1/login endpoint, then passing it in the request header:

curl -X POST 'http://127.0.0.1:8000/api/v1/tasks/task?message=test' \
-H 'Authorization: Bearer <your-token-here>'

Tip

Since the rate_limiter dependency uses the get_optional_user dependency instead of get_current_user, it will not require authentication to be used, but will behave accordingly if the user is authenticated (and the token is passed in the header). If you want to require authentication, also use the get_current_user dependency.

To change a user's tier, you may just use the PATCH api/v1/user/{username}/tier endpoint. Note that for flexibility (since this is a boilerplate), it's not necessary to provide a tier_id to create a user, but you probably should set every user to a certain tier (let's say free) once they are created.

Warning

If a user does not have a tier, or their tier does not have a defined rate limit for the path, and a token is still passed with the request, the default limit and period will be used; this will be logged in app/logs.

5.12 Running

If you are using docker compose, just running the following command should ensure everything is working:

docker compose up

If you are doing it from scratch, ensure your postgres and your redis are running, then, while in the src folder, run the following to start the application with the uvicorn server:

poetry run uvicorn app.main:app --reload

And for the worker:

poetry run arq app.worker.WorkerSettings

6. Running in Production

In production you should probably run using gunicorn workers:

command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000

Here it's running with 4 workers, but you should tune that number based on how many cores your machine has.
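A common starting point from the gunicorn docs is (2 x number of cores) + 1 workers; on Linux you can compute that inline:

gunicorn app.main:app -w $((2 * $(nproc) + 1)) -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000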

To do this if you are using docker compose, just swap which command is commented out in docker-compose.yml. This part:

# -------- replace with comment to run with gunicorn --------
command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
# command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000

Should be changed to:

# -------- replace with comment to run with uvicorn --------
# command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000

Caution

Do not forget to set the ENVIRONMENT in .env to production unless you want the API docs to be public.

More on running it in production later.

7. Testing

For tests, ensure you have in .env:

# ------------- test -------------
TEST_NAME="Tester User"
TEST_EMAIL="[email protected]"
TEST_USERNAME="testeruser"
TEST_PASSWORD="Str1ng$t"

While in the tests folder, create your test file with the name test_{entity}.py, replacing entity with what you're testing:

touch test_items.py

Finally, create your tests (you may want to copy the structure in test_user.py).

Now, to run:

7.1 Docker Compose

First you need to uncomment the following part in the docker-compose.yml file:

  # #-------- uncomment to run tests --------
  # pytest:
  #   build: 
  #     context: .
  #     dockerfile: Dockerfile 
  #   env_file:
  #     - ./src/.env 
  #   depends_on:
  #     - db
  #     - create_superuser
  #     - redis
  #   command: python -m pytest
  #   volumes:
  #     - ./src:/code/src

You'll get:

  #-------- uncomment to run tests --------
  pytest:
    build: 
      context: .
      dockerfile: Dockerfile 
    env_file:
      - ./src/.env 
    depends_on:
      - db
      - create_superuser
      - redis
    command: python -m pytest
    volumes:
      - ./src:/code/src

Start the Docker Compose services:

docker-compose up -d

It will automatically run the tests, but if you want to run them again later:

docker-compose run --rm pytest

7.2 From Scratch

While in the src folder, run:

poetry run python -m pytest
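pytest's usual selection flags work here as well; for example, to run a single file with verbose output:

poetry run python -m pytest tests/test_items.py -v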

8. Contributing

Contributions are appreciated, even if just reporting bugs, documenting stuff or answering questions. To contribute with a feature:

  1. Fork it (https://github.com/igormagalhaesr/FastAPI-boilerplate)
  2. Create your feature branch (git checkout -b feature/fooBar)
  3. Test your changes while in the src folder (poetry run python -m pytest)
  4. Commit your changes (git commit -am 'Add some fooBar')
  5. Push to the branch (git push origin feature/fooBar)
  6. Create a new Pull Request

9. References

This project was inspired by a few other projects; it's based on them, with things changed to the way I like (and Pydantic and SQLAlchemy updated).

10. License

MIT

11. Contact

Igor Magalhaes – @igormagalhaesr – [email protected] – github.com/igorbenav
