Django + SQLite + Docker in Local Production

Shahar Gino
5 min read · Nov 12, 2020

In this article, we’ll first give a brief introduction to the problem in the title, and then address it with a technical step-by-step guide.

Readers who are interested in non-SQLite combinations (with Django+Docker) may also find this article useful, since the choice of database does not play a dramatic role in this context.

Introduction

Django is a high-level Python web framework that enables rapid development of secure and maintainable websites. Built by experienced developers, Django takes care of much of the hassle of web development, so you can focus on writing your app without needing to reinvent the wheel. It is free and open source, has a thriving and active community, great documentation, and many options for free and paid-for support.

Django applies an SQLite database configuration by default, which is an easy choice for a fast ramp-up. SQLite does have several advantages, yet it also suffers from several limitations, and thereby many users eventually switch to a more robust database setup, e.g. MySQL, PostgreSQL, Oracle, etc. However, in some cases those limitations are tolerable or irrelevant for the specific usage, so SQLite remains a proper choice to stick with.

A Django-based production setup requires a stable workflow, which is nevertheless flexible in terms of different machines, different OSs, etc. Perhaps the most sensitive stage in production is the initial setup on an unknown new device. Several “surprises” might pop up during that phase, due to the various differences between the development machine and the new device, and also due to conflicts between unfrozen libraries (meaning, a few libraries might have been updated in the interim, and could thereby reveal conflicts among themselves).

To that end, most Python developers apply a virtual-environment technique, e.g. virtualenv, conda, anaconda, etc. A virtual-environment is a Python tool for dependency management and project isolation. It allows Python site-packages (3rd-party libraries) to be installed locally, in an isolated directory for a particular project, as opposed to being installed globally (i.e. as part of a system-wide Python). In other words, a virtual-environment sets up a fresh Python installation in an isolated path, where dependencies particular to your project can be installed without polluting the global Python installation, which also keeps versioning under control.
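
For example, a minimal flow with Python’s built-in venv module might look as follows (the folder name venv is arbitrary):

% python3 -m venv venv                    # create an isolated environment in ./venv
% source venv/bin/activate                # activate it (Windows: venv\Scripts\activate)
(venv) % pip install django               # installs into ./venv only
(venv) % pip freeze > requirements.txt    # freeze exact versions for later reuse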

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package. A Docker container encapsulates an entire OS and provides isolation at the OS level, whereas a virtual-environment only encapsulates Python dependencies: it enables switching between Python versions and dependencies, but you remain dependent on your host OS. With Docker, not only can one have an isolated Python installation, but also other OS-specific dependencies installed, like ffmpeg, LAPACK, ARPACK, etc. A new developer can spin up a container from a Docker image that has all the dependencies (Python and OS) already installed.

Docker enables portability from one machine to another, whereas with a virtual-environment you would have to redo the installation on the new machine. Docker isolates all of the additional system dependencies as well, providing a completely isolated environment. It is a containerization platform which packages your application together with all its dependencies, in the form of containers, to make sure that the application works seamlessly in any environment, be it development, test or production. It helps ensure that the Dev and Production environments are exactly the same.
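
For instance, anyone with Docker installed can spin up a throw-away container from the official python:3.8 image and get a working Python, regardless of the host OS:

% docker run --rm -it python:3.8 python --version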

Technical Step-by-Step guide

Prerequisites

Install Docker and Docker Compose, and optionally sign up for a DockerHub account.

Notes:

  • For OSX and Windows, it’s recommended to install Docker Desktop, which already includes Docker Compose
  • For Linux, it’s recommended to follow the guides below:
  1. https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
  2. https://www.digitalocean.com/community/tutorials/how-to-install-docker-compose-on-ubuntu-18-04
  • DockerHub is optional, although recommended

Docker Login

Once the prerequisites are complete, log in to Docker:

% docker login --username username

Source Machine (export)

  1. Prepare a working Django+SQLite environment, which you would like to export. It’s recommended to prepare it within a virtual-environment, as shown above.
  2. CD to your Django project home folder
  3. Add a new file, termed Dockerfile, with the following content:
# Start from the official Python 3.8 base image
FROM python:3.8
# Flush Python output directly to the terminal (no buffering)
ENV PYTHONUNBUFFERED 1
# Create the application folder and make it the working directory
RUN mkdir /code
WORKDIR /code
# Install the Python dependencies first, to leverage Docker layer caching
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy the rest of the project into the image
COPY . /code/

In the above example, we build upon the python:3.8 base image, but it’s of course possible to pick a different version (or resource).
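
Note that the Dockerfile expects a requirements.txt file at the project home folder. It can be generated from within the virtual-environment (pip freeze > requirements.txt, as shown earlier); a minimal one might look like this, with the pinned version being just illustrative:

% cat requirements.txt
Django==3.1.3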

4. Add a new file, termed docker-compose.yml, with the following content:

web:
  build: .
  environment:
    MYENV: EXAMPLE
  volumes:
    - .:/code
web_migrate:
  extends:
    service: web
  command: python manage.py migrate
web_run:
  extends:
    service: web
  command: python manage.py runserver 0.0.0.0:8000
  ports:
    - "8000:8000"

In the above example, we define three services, for building, migrating and server-launching, respectively.

The building service (web) sets a dummy environment variable (just for this example), and mounts the host folder at the current working directory, i.e. the Django project home folder, into the container at /code.

The migration (web_migrate) and the server-launching (web_run) services extend the building service (web) and apply their designated commands. In addition, the server-launching service also applies a ports mapping.
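
Before building, it’s worth validating the compose file; docker-compose prints the resolved configuration, or an error if the file is malformed:

% docker-compose config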

5. Add ‘0.0.0.0’ to ALLOWED_HOSTS in your Django settings.py file, i.e. ALLOWED_HOSTS = ['0.0.0.0']

6. Build Docker Images and Containers:

% docker-compose up -d

At that point, the local server is initiated and accessible at http://0.0.0.0:8000
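
As a quick sanity check, assuming curl is available on the host:

% curl -I http://0.0.0.0:8000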

You may observe the running containers with the Common Docker Commands, which are provided at the end of this article.

7. Export the Docker images:

Without DockerHub:

% docker save $(docker images --format '{{.Repository}}:{{.Tag}}') -o myfilename.tar
% gzip myfilename.tar

Of course, myfilename.tar is just a place-holder, so you may pick a more informative output filename.
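
Note that the command above saves all the Docker images which exist on the machine. To export only the project images, you may name them explicitly instead (the mytag_* names below are place-holders for the image names that docker-compose generated):

% docker save mytag_web:latest mytag_web_migrate:latest mytag_web_run:latest -o myfilename.tar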

With DockerHub:

Create a new (private) repository at DockerHub, e.g. marktwain/myrepo

Tag and push each image:

% docker tag mytag_web:latest marktwain/myrepo:web
% docker push marktwain/myrepo:web
% docker tag mytag_web_migrate:latest marktwain/myrepo:web_migrate
% docker push marktwain/myrepo:web_migrate
% docker tag mytag_web_run:latest marktwain/myrepo:web_run
% docker push marktwain/myrepo:web_run

Once again, mytag and marktwain/myrepo are just place-holders, and you may pick a more informative naming.

Target Machine (import)

  1. Clone the Django project, e.g. from git:
% git clone <remote_url> --recurse-submodules

The --recurse-submodules flag is not mandatory, but is a good habit.

2. CD to the local repository, which you’ve just cloned

3. Import Docker Images:

Without DockerHub:

Extract and Load the Images from the tarball file:

% gunzip myfilename.tar.gz
% docker load -i myfilename.tar
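
You may then verify that the images were indeed loaded:

% docker images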

With DockerHub:

Pull all images from the DockerHub repository:

% docker pull --all-tags marktwain/myrepo

4. Run the Docker Images:

% docker-compose run web_migrate
% docker-compose run -d --service-ports web_run

At that point, the local server is initiated and accessible at http://0.0.0.0:8000
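
If the page is not reachable, the container logs usually reveal the reason (use docker ps to locate the container id first):

% docker ps
% docker logs -f <container_id>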

You may observe the running containers with the Common Docker Commands, which are provided below.

Common Docker Commands

View Docker Containers

% docker ps -a 

View Docker Images

% docker images

Run a Container (from Image)

% docker-compose run <service_name> 

Get inside a Docker Container

% docker exec -it <container_id> bash

Clean Containers and Images

% docker rm $(docker ps -qa --no-trunc --filter status=exited)
% docker rmi $(docker images -q) -f

Stop and Resume the Local Server

% docker stop <container_name>
% docker start <container_name>
