Django 3 Web Development Cookbook - Fourth Edition

By: Aidas Bendoraitis, Jake Kronika

Overview of this book

Django is a web framework for perfectionists with deadlines, designed to help you build manageable medium and large web projects in a short time span. This fourth edition of the Django Web Development Cookbook is updated with Django 3's latest features to guide you effectively through the development process. This Django book starts by helping you create a virtual environment and project structure for building Python web apps. You'll learn how to build models, views, forms, and templates for your web apps and then integrate JavaScript in your Django apps to add more features. As you advance, you'll create responsive multilingual websites, ready to be shared on social networks. The book will take you through uploading and processing images, rendering data in HTML5, PDF, and Excel, using and creating APIs, and navigating different data types in Django. You'll become well-versed in security best practices and caching techniques to enhance your website's security and speed. This edition not only helps you work with the PostgreSQL database but also the MySQL database. You'll also discover advanced recipes for using Django with Docker and Ansible in development, staging, and production environments. By the end of this book, you will have become proficient in using Django's powerful features and will be equipped to create robust websites.

Working with Docker containers for Django, Gunicorn, Nginx, and PostgreSQL

Django projects depend not only on Python requirements, but also on many system requirements, such as a web server, database, server cache, and mail server. When developing a Django project, you need to ensure that all environments and all developers have the same requirements installed. One way to keep those dependencies in sync is to use Docker. With Docker, you can have different versions of the database, web, or other servers required individually for each project.

Docker is a system for creating configured, customized virtual machines called containers. It allows us to duplicate the setup of any production environment precisely. Docker containers are created from so-called Docker images. Images consist of layers (or instructions) on how to build the container. There can be an image for PostgreSQL, an image for Redis, an image for Memcached, and a custom image for your Django project, and all those images can be combined into accompanying containers with Docker Compose.
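
To get a feel for the difference between an image and a container before diving into the boilerplate, you can try Docker directly from the command line. The following commands are only an illustration and are not part of the recipe:

$ docker pull nginx:latest                             # download an image
$ docker run --detach --name nginx_test nginx:latest   # start a container from that image
$ docker ps                                            # list running containers
$ docker rm --force nginx_test                         # stop and remove the test container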

In this recipe, we will use a project boilerplate to set up a Django project with a PostgreSQL database, served by Nginx and Gunicorn, and manage all of them with Docker Compose.

Getting ready

First, you will need to install the Docker Engine, following the instructions at https://www.docker.com/get-started. This usually includes the Compose tool, which makes it possible to manage systems that require multiple containers, ideal for a fully isolated Django project. If it is needed separately, installation details for Compose are available at https://docs.docker.com/compose/install/.
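
Before proceeding, it is worth verifying that both tools are available on your machine, for example:

$ docker --version
$ docker-compose --version
$ docker run --rm hello-world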

How to do it...

Let's explore the Django and Docker boilerplate:

  1. Download the code from https://github.com/archatas/django_docker to your computer to the ~/projects/django_docker directory, for example.
If you choose another directory, for example, myproject_docker, then you will have to do a global search and replace django_docker with myproject_docker.
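As a sketch, the download and the optional renaming could look like this (the sed invocation assumes GNU sed; on macOS, use sed -i '' instead):

$ git clone https://github.com/archatas/django_docker.git ~/projects/django_docker
$ cd ~/projects/django_docker
$ # only if you picked a different directory name, for example myproject_docker:
$ grep -rl django_docker . | xargs sed -i 's/django_docker/myproject_docker/g'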
  2. Open the docker-compose.yml file. There are three containers that need to be created: nginx, gunicorn, and db. Don't worry if it looks complicated; we'll describe it in detail later:
# docker-compose.yml
version: "3.7"

services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/home/myproject/static
      - media_volume:/home/myproject/media
    depends_on:
      - gunicorn

  gunicorn:
    build:
      context: .
      args:
        PIP_REQUIREMENTS: "${PIP_REQUIREMENTS}"
    command: bash -c "/home/myproject/env/bin/gunicorn --workers 3 --bind 0.0.0.0:8000 myproject.wsgi:application"
    depends_on:
      - db
    volumes:
      - static_volume:/home/myproject/static
      - media_volume:/home/myproject/media
    expose:
      - "8000"
    environment:
      DJANGO_SETTINGS_MODULE: "${DJANGO_SETTINGS_MODULE}"
      DJANGO_SECRET_KEY: "${DJANGO_SECRET_KEY}"
      DATABASE_NAME: "${DATABASE_NAME}"
      DATABASE_USER: "${DATABASE_USER}"
      DATABASE_PASSWORD: "${DATABASE_PASSWORD}"
      EMAIL_HOST: "${EMAIL_HOST}"
      EMAIL_PORT: "${EMAIL_PORT}"
      EMAIL_HOST_USER: "${EMAIL_HOST_USER}"
      EMAIL_HOST_PASSWORD: "${EMAIL_HOST_PASSWORD}"

  db:
    image: postgres:latest
    restart: always
    environment:
      POSTGRES_DB: "${DATABASE_NAME}"
      POSTGRES_USER: "${DATABASE_USER}"
      POSTGRES_PASSWORD: "${DATABASE_PASSWORD}"
    ports:
      - 5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  postgres_data:
  static_volume:
  media_volume:
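
Before building anything, you can ask Compose to render the resolved configuration, which is a quick way to catch indentation mistakes and to see how the environment variables are substituted (the variable values here are just examples):

$ DJANGO_SETTINGS_MODULE=myproject.settings.dev \
  PIP_REQUIREMENTS=dev.txt \
  docker-compose config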

  3. Open and read through the Dockerfile file. These are the layers (or instructions) that are needed to create the gunicorn container:
# Dockerfile
# pull official base image
FROM python:3.8

# accept arguments
ARG PIP_REQUIREMENTS=production.txt

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install dependencies
RUN pip install --upgrade pip setuptools

# create user for the Django project
RUN useradd -ms /bin/bash myproject

# set current user
USER myproject

# set work directory
WORKDIR /home/myproject

# create and activate virtual environment
RUN python3 -m venv env

# copy and install pip requirements
COPY --chown=myproject ./src/myproject/requirements /home/myproject/requirements/
RUN ./env/bin/pip3 install -r /home/myproject/requirements/${PIP_REQUIREMENTS}

# copy Django project files
COPY --chown=myproject ./src/myproject /home/myproject/
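
If you ever need to debug this image in isolation, the same Dockerfile can also be built directly with the Docker CLI; the image tag used here is only an example:

$ docker build --build-arg PIP_REQUIREMENTS=dev.txt --tag myproject_gunicorn .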
  4. Copy the build_dev_example.sh script to build_dev.sh and edit its content. These are environment variables to pass to the docker-compose script:
# build_dev.sh
#!/usr/bin/env bash
DJANGO_SETTINGS_MODULE=myproject.settings.dev \
DJANGO_SECRET_KEY="change-this-to-50-characters-long-random-string" \
DATABASE_NAME=myproject \
DATABASE_USER=myproject \
DATABASE_PASSWORD="change-this-too" \
PIP_REQUIREMENTS=dev.txt \
docker-compose up --detach --build
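
One way to generate a sufficiently long random string for DJANGO_SECRET_KEY is with Python's standard library (any other generator works just as well):

$ python3 -c "import secrets; print(secrets.token_urlsafe(50))"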
  5. In a command-line tool, add execution permissions to build_dev.sh and run it to build the containers:
$ chmod +x build_dev.sh
$ ./build_dev.sh
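
Once the script finishes, you can check that all three containers are up and running:

$ docker-compose ps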
  6. If you now go to http://0.0.0.0/en/, you should see a Hello, World! page there.
    When navigating to http://0.0.0.0/en/admin/, you should see the following:
OperationalError at /en/admin/
FATAL: role "myproject" does not exist

This means that you have to create the database user and the database in the Docker container.

  7. Let's open a shell in the db container and create the database user, its password, and the database itself:
$ docker exec -it django_docker_db_1 bash
/# su - postgres
/$ createuser --createdb --pwprompt myproject
/$ createdb --username myproject myproject

When asked, enter the same password for the database as in the build_dev.sh script.

Press [Ctrl + D] twice to log out of the PostgreSQL user and Docker container.
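
If you want to double-check that the role and the database were created, you can list them from the host; this assumes the default postgres superuser of the official image:

$ docker exec -it django_docker_db_1 psql --username postgres --command "\du"
$ docker exec -it django_docker_db_1 psql --username postgres --command "\l"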

If you now go to http://0.0.0.0/en/admin/, you should see the following:

ProgrammingError at /en/admin/
relation "django_session" does not exist
LINE 1: ...ession_data", "django_session"."expire_date" FROM "django_se...

This means that you have to run migrations to create the database schema.

  8. Open a shell in the gunicorn container and run the necessary Django management commands:
$ docker exec -it django_docker_gunicorn_1 bash
$ source env/bin/activate
(env)$ python manage.py migrate
(env)$ python manage.py collectstatic
(env)$ python manage.py createsuperuser

Answer all the questions that are asked by the management commands.

Press [Ctrl + D] twice to log out of the Docker container.

If you now navigate to http://0.0.0.0/en/admin/, you should see the Django administration, where you can log in with the superuser credentials that you have just created.
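
If you prefer not to open an interactive shell, the non-interactive management commands from this step can also be run directly from the host, for example:

$ docker exec django_docker_gunicorn_1 \
  /home/myproject/env/bin/python manage.py migrate --noinput
$ docker exec django_docker_gunicorn_1 \
  /home/myproject/env/bin/python manage.py collectstatic --noinput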

  9. Create analogous scripts, build_test.sh, build_staging.sh, and build_production.sh, where only the environment variables differ.
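
For example, a build_staging.sh script might look like the following, with placeholder values that you should adjust for your staging environment:

# build_staging.sh
#!/usr/bin/env bash
DJANGO_SETTINGS_MODULE=myproject.settings.staging \
DJANGO_SECRET_KEY="another-50-characters-long-random-string" \
DATABASE_NAME=myproject \
DATABASE_USER=myproject \
DATABASE_PASSWORD="a-different-secret-password" \
PIP_REQUIREMENTS=staging.txt \
docker-compose up --detach --build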

How it works...

The structure of the code in the boilerplate is similar to the one in a virtual environment. The project source files are in the src directory. We have the git-hooks directory for the pre-commit hook, which is used to track the last modification date, and the config directory for the configurations of the services used in the containers:

django_docker
├── config/
│   └── nginx/
│       └── conf.d/
│           └── myproject.conf
├── git-hooks/
│   ├── install_hooks.sh
│   └── pre-commit
├── src/
│   └── myproject/
│       ├── locale/
│       ├── media/
│       ├── myproject/
│       │   ├── apps/
│       │   │   └── __init__.py
│       │   ├── settings/
│       │   │   ├── __init__.py
│       │   │   ├── _base.py
│       │   │   ├── dev.py
│       │   │   ├── last-update.txt
│       │   │   ├── production.py
│       │   │   ├── staging.py
│       │   │   └── test.py
│       │   ├── site_static/
│       │   │   └── site/
│       │   │       ├── css/
│       │   │       ├── img/
│       │   │       ├── js/
│       │   │       └── scss/
│       │   ├── templates/
│       │   │   ├── base.html
│       │   │   └── index.html
│       │   ├── __init__.py
│       │   ├── urls.py
│       │   └── wsgi.py
│       ├── requirements/
│       │   ├── _base.txt
│       │   ├── dev.txt
│       │   ├── production.txt
│       │   ├── staging.txt
│       │   └── test.txt
│       ├── static/
│       └── manage.py
├── Dockerfile
├── LICENSE
├── README.md
├── build_dev.sh
├── build_dev_example.sh
└── docker-compose.yml

The main Docker-related configurations are in docker-compose.yml and the Dockerfile. Docker Compose is a wrapper around Docker's command-line API. The build_dev.sh script builds and runs the Django project under the Gunicorn WSGI HTTP server at port 8000, Nginx at port 80 (serving static and media files and proxying other requests to Gunicorn), and the PostgreSQL database at port 5432.

In the docker-compose.yml file, the creation of three Docker containers is requested:

  • nginx for the Nginx web server
  • gunicorn for the Django project with the Gunicorn web server
  • db for the PostgreSQL database

The nginx and db containers will be created from the official images located at https://hub.docker.com. They have specific configuration parameters, such as the ports they are running on, environment variables, dependencies on other containers, and volumes.

Docker volumes are specific directories that stay untouched when you rebuild the Docker containers. Volumes need to be defined for the database data files, media, static, and the like.
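
You can list and inspect these volumes with the Docker CLI; note that Compose prefixes the volume names with the project name, so they may differ on your machine:

$ docker volume ls
$ docker volume inspect django_docker_postgres_data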

The gunicorn container will be built from the instructions in the Dockerfile, referenced by the build context in the docker-compose.yml file. Let's examine each layer (or instruction) there:

  • The gunicorn container will be based on the python:3.8 image
  • It will take PIP_REQUIREMENTS as an argument from the docker-compose.yml file
  • It will set environment variables for the container
  • It will upgrade pip and setuptools
  • It will create a system user named myproject for the Django project
  • It will set myproject as the current user
  • It will set the home directory of the myproject user as the current working directory
  • It will create a virtual environment there
  • It will copy pip requirements from the base computer to the Docker container
  • It will install the pip requirements for the current environment defined by the PIP_REQUIREMENTS variable
  • It will copy the source of the entire Django project

The content of config/nginx/conf.d/myproject.conf is mounted at /etc/nginx/conf.d/ in the nginx container through a volume defined in docker-compose.yml. This is the configuration of the Nginx web server, telling it to listen on port 80 (the default HTTP port) and forward requests to the Gunicorn server on port 8000, except for requests asking for static or media content:

#/etc/nginx/conf.d/myproject.conf
upstream myproject {
    server django_docker_gunicorn_1:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://myproject;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    rewrite "/static/\d+/(.*)" /static/$1 last;

    location /static/ {
        alias /home/myproject/static/;
    }

    location /media/ {
        alias /home/myproject/media/;
    }
}
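
The rewrite rule strips a purely numeric path segment (for example, a timestamp used for cache busting) from static URLs before they are resolved against the collected static files. You can verify this behavior from the host with curl, adjusting the path to a file that actually exists in your static directory (the timestamp below is made up):

$ curl -I http://0.0.0.0/static/20200606152515/site/css/style.css
$ curl -I http://0.0.0.0/static/site/css/style.css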

You can learn more about Nginx and Gunicorn configurations in the Deploying on Nginx and Gunicorn for the staging environment and Deploying on Nginx and Gunicorn for the production environment recipes in Chapter 12, Deployment.

There's more...

You can destroy Docker containers with the docker-compose down command and rebuild them with your build script:

$ docker-compose down
$ ./build_dev.sh

If something is not working as expected, you can inspect the logs with the docker-compose logs command:

$ docker-compose logs nginx
$ docker-compose logs gunicorn
$ docker-compose logs db

To open a shell in any of the containers, use one of the following:

$ docker exec -it django_docker_gunicorn_1 bash
$ docker exec -it django_docker_nginx_1 bash
$ docker exec -it django_docker_db_1 bash

You can copy files and directories to and from volumes on Docker containers using the docker cp command:

$ docker cp ~/avatar.png django_docker_gunicorn_1:/home/myproject/media/
$ docker cp django_docker_gunicorn_1:/home/myproject/media ~/Desktop/

If you want to get a better understanding of Docker and Docker Compose, check out the official documentation at https://docs.docker.com/, and specifically https://docs.docker.com/compose/.

See also

  • The Creating a project file structure recipe
  • The Deploying on Apache with mod_wsgi for the staging environment recipe in Chapter 12, Deployment
  • The Deploying on Apache with mod_wsgi for the production environment recipe in Chapter 12, Deployment
  • The Deploying on Nginx and Gunicorn for the staging environment recipe in Chapter 12, Deployment
  • The Deploying on Nginx and Gunicorn for the production environment recipe in Chapter 12, Deployment