Modern DevOps with Django by Jacob Cook (Advanced tutorial)

tree

pvergain@uc026:/mnt/y/projects_id3/P5N001/XLOGCA135_tutorial_docker/tutorial_docker/tutoriels/modern_devops$ tree
.
├── modern-devops-django-sample
│   ├── docker-compose.ci.yml
│   ├── docker-compose.prod.yml
│   ├── docker-compose.staging.yml
│   ├── docker-compose.test.yml
│   ├── docker-compose.yml
│   ├── Dockerfile
│   ├── LICENSE
│   ├── manage.py
│   ├── modern_devops
│   │   ├── __init__.py
│   │   ├── settings.py
│   │   ├── urls.py
│   │   └── wsgi.py
│   ├── myapp
│   │   ├── admin.py
│   │   ├── apps.py
│   │   ├── __init__.py
│   │   ├── migrations
│   │   │   └── __init__.py
│   │   ├── models.py
│   │   ├── tests.py
│   │   └── views.py
│   ├── README.md
│   ├── requirements.txt
│   └── uwsgi.ini
└── modern_devops.rst

Dockerfile Jacob Cook

FROM python:3-alpine3.6

ENV PYTHONUNBUFFERED=1

# build toolchain plus the system libraries needed for image handling,
# PostgreSQL connections and the uWSGI application server
RUN apk add --no-cache linux-headers bash gcc \
    musl-dev libjpeg-turbo-dev libpng libpq \
    postgresql-dev uwsgi uwsgi-python3 git \
    zlib-dev libmagic

WORKDIR /site
COPY ./ /site
RUN pip install -U -r /site/requirements.txt

# apply any pending migrations, then serve the app with uWSGI
CMD python manage.py migrate && uwsgi --ini=/site/uwsgi.ini

First things first: our Dockerfile. This is the configuration that takes a base image (in our case, Python 3 on a slim Alpine Linux 3.6 base) and installs everything our application needs to run, including our Python dependencies.

It also sets a default command: the one that will be executed each time our container starts up in production.

We want it to check for any pending migrations, run them, then start up our uWSGI server to make our application available to the Internet. It’s safe to do this because if a migration fails after our automatic deployment to staging, we can recover and make the necessary changes before we tag a release and deploy to production.

This Dockerfile builds an image with the dependencies needed for things like image uploads as well as connections to a PostgreSQL database.
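The uwsgi.ini referenced by that CMD is listed in the tree above but not reproduced here. A minimal configuration along these lines would work with the uwsgi-python3 plugin installed by the Dockerfile (the port, worker count and other values are assumptions, not taken from the repository):

[uwsgi]
; hypothetical minimal config for this layout
chdir = /site
module = modern_devops.wsgi:application
plugins = python3
master = true
processes = 4
http-socket = :8000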

docker-compose.yml Jacob Cook

We can now build our application with docker build -t myapp . and run it with docker run -it myapp. In practice, though, our development environment is going to use Docker Compose.

The Docker Compose configuration below is sufficient for our development environment, and will serve as a base for our configurations in staging and production, which can include things like Celery workers and monitoring services.

version: '3'

services:
  app:
    build: ./
    command: bash -c "python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000"
    volumes:
      - ./:/site:rw
    depends_on:
      - postgresql
      - redis
    environment:
      DJANGO_SETTINGS_MODULE: modern_devops.settings.dev
    ports:
      - "8000:8000"

  postgresql:
    restart: always
    image: postgres:10-alpine
    volumes:
      - ./.dbdata:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp
      POSTGRES_DB: myapp

  redis:
    restart: always
    image: redis:latest

This is a pretty basic configuration - all we are doing is setting a startup command for our app (similar to the CMD in our Dockerfile, except this time we run Django’s built-in dev server instead) and initializing PostgreSQL and Redis containers that will be started alongside it.

It’s important to note the volumes line in our app service: it binds the current directory of source code on our host machine to the installation folder inside the container.

That way we can make changes to the code locally and still use the automatic reloading feature of the Django dev server.

At this point, all we need to do is docker-compose up, and our Django application will be listening on port 8000, just as if we were running it from a virtualenv locally. This configuration is perfectly suitable for development environments: all anyone needs to do to get the exact same environment as you is to clone the Git repository and run docker-compose up!
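Concretely, a new teammate’s first run might look like this (the repository URL is a placeholder), and one-off management commands can be run the same way in a throwaway container:

# clone and start the full dev stack (placeholder URL)
git clone https://gitlab.com/pathto/myapp.git
cd myapp
docker-compose up

# run a one-off management command against the same stack
docker-compose run --rm app python manage.py createsuperuser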

Testing and Production

For testing your application, whether on your local machine or via GitLab CI, I’ve found it helpful to create a clone of this docker-compose.yml configuration and customize the command directive to run whatever starts your test suite instead. In my case, I use the Python coverage library, so I have a second file called docker-compose.test.yml, which mirrors the first except that the settings module and database credentials point at a test configuration and the command directive has been changed to:

command: bash -c "coverage run --source='.' manage.py test myapp && coverage report"

docker-compose.test.yml

version: '3'

services:
  app:
    build: ./
    command: bash -c "coverage run --source='.' manage.py test myapp && coverage report"
    volumes:
      - ./:/site:rw
    depends_on:
      - postgresql
      - redis
    environment:
      DJANGO_SETTINGS_MODULE: modern_devops.settings.test

  postgresql:
    restart: always
    image: postgres:10-alpine
    environment:
      POSTGRES_USER: myapp_test
      POSTGRES_PASSWORD: myapp_test
      POSTGRES_DB: myapp_test

  redis:
    restart: always
    image: redis:latest

Then, I run my test suite locally with:

docker-compose -p test -f docker-compose.test.yml up
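In CI it also helps to propagate the test container’s exit code to the runner, so that a failing suite fails the pipeline; docker-compose supports this directly (assuming the service is named app, as above):

# --exit-code-from implies --abort-on-container-exit
docker-compose -p test -f docker-compose.test.yml up --exit-code-from app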

docker-compose.staging.yml

version: '3'

services:
  app:
    image: registry.gitlab.com/pathto/myapp:staging
    environment:
      DJANGO_SETTINGS_MODULE: modern_devops.settings.staging
    volumes:
      - /var/data/myapp/staging/settings.py:/site/modern_devops/settings/staging.py:ro
    depends_on:
      - postgresql
      - redis
    networks:
      - default
      - public

  postgresql:
    image: postgres:10-alpine
    volumes:
      - /var/data/realtime/myapp/staging/db:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_USER: myapp_staging
      POSTGRES_PASSWORD: myapp_staging
      POSTGRES_DB: myapp_staging

  redis:
    image: redis:latest

networks:
  public:
    external: true
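On the staging host, a deploy with this file might look like the following sketch; the external public network declared at the bottom has to exist before the first up:

# one-time setup: create the external network the file references
docker network create public

# fetch the latest staging image and (re)start the stack
docker-compose -p staging -f docker-compose.staging.yml pull app
docker-compose -p staging -f docker-compose.staging.yml up -d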

docker-compose.prod.yml

For production and staging environments, I do the same thing - duplicate the file with the few changes needed for that particular environment. In the case of production, I don’t want to provide a build path; I want to tell Docker to pull my application image from the container registry each time it starts up.

To do so, remove the build directive and add an image one like so:

image: registry.gitlab.com/pathto/myapp:prod

The full file then looks like this:

version: '3'

services:
  app:
    image: registry.gitlab.com/pathto/myapp:prod
    environment:
      DJANGO_SETTINGS_MODULE: modern_devops.settings.prod
    volumes:
      - /var/data/myapp/prod/settings.py:/site/modern_devops/settings/prod.py:ro
    depends_on:
      - postgresql
      - redis
    networks:
      - default
      - public

  postgresql:
    image: postgres:10-alpine
    volumes:
      - /var/data/realtime/myapp/prod/db:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_USER: myapp_prod
      POSTGRES_PASSWORD: myapp_prod
      POSTGRES_DB: myapp_prod

  redis:
    image: redis:latest

networks:
  public:
    external: true
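However the prod tag is built and pushed (the registry path suggests a GitLab CI job; the first two commands below are just a manual stand-in for it), the rollout itself might look like:

# build and push the production image (stand-in for the CI pipeline)
docker build -t registry.gitlab.com/pathto/myapp:prod .
docker push registry.gitlab.com/pathto/myapp:prod

# on the production host: pull the new image and restart the stack
docker-compose -p prod -f docker-compose.prod.yml pull app
docker-compose -p prod -f docker-compose.prod.yml up -d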