Dockerizing a Python web application
Understanding essential Docker concepts like Docker networks, named volumes and compose files by building a Python web application.

Agenda
We will understand how to deploy a Python web application using Docker.
Our application has the following tech stack:
- Flask
- MySQL
- Redis
Flask is the web framework. The web application uses a MySQL database to persist data and Redis as a caching layer. This mimics a real-world application.
We will understand the following Docker concepts:
- Docker networks
- Docker volumes
- Communication between Docker containers
- Docker compose
This post assumes familiarity with basic Docker concepts like images, containers and Dockerfile.
Basic setup
Let's start writing our web application.
We will keep our application in a directory called flask-application. Create this directory.
$ mkdir flask-application
$ cd flask-application
Our web application is very minimal and is contained in a single module named hello.py. It has a single route which responds with 'Hello, World!'.
Create this module.
$ touch hello.py
Add the following content to hello.py.
# hello.py
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello, World!'
We haven't connected to a database yet; we will do that shortly.
As a best practice, we should create a requirements.txt file and list all the Python package dependencies in it.
$ touch requirements.txt
Add the following contents to requirements.txt.
Flask
Let's create the Dockerfile from which we will build the required Docker image.
$ touch Dockerfile
Add the following contents to it.
FROM python:3
WORKDIR /project
COPY . .
RUN pip install -r requirements.txt
CMD ["flask", "run", "--host=0.0.0.0"]
The Dockerfile is very simple. We used several Docker instructions in it, like FROM, WORKDIR etc.
- We used a base image. Our base image is python:3.
- We set the working directory to /project. This directory could be named anything; you could keep it as, say, /source.
- We need our source code to be present in the container, hence we used the COPY instruction to copy our source code into the Docker container's filesystem.
- We used RUN to install the dependencies listed in requirements.txt, and CMD to start the Flask development server when a container is run from this image.
Let's build an image for this Dockerfile.
$ docker build -t flask-application .
This builds an image named flask-application.
Let's run a container using this image.
$ docker run --env FLASK_APP=hello.py -p 5000:5000 --name web-application flask-application
flask run relies on an environment variable called FLASK_APP to decide the entry point for the Flask application, hence we used --env so that the appropriate environment variable is set on the container.
We have mapped port 5000 of the container to the host's port 5000 so that we can access the application from the host machine using localhost:5000.
We have also set a name for the container so that it's easy to refer to it. If we don't provide a name, Docker generates a random name for the container. We will need to refer to this container when we want to stop or remove it, hence the explicit name helps.
Navigate to http://localhost:5000 on your host machine. You should be greeted with 'Hello, World!'.
This confirms that the Docker setup for the web application is complete.
Docker volume
Say we want to persist the number of requests made to this application. Ideally we would use a caching layer for it, but for demonstration purposes let's use a MySQL database.
One alternative is to add database support in the existing Dockerfile, i.e. in the image flask-application.
However, Docker best practices suggest that a container should run a single service and that there should be separate containers for separate services. Hence MySQL should run in a separate container.
We will use the mysql Docker image to run a MySQL container.
We also need the ability to connect to the database container from the web application container. Two Docker containers can easily communicate with each other if they are on the same Docker network.
Hence, let's create a Docker network named application.
$ docker network create application
Let's start a MySQL container on this network.
$ docker run --env MYSQL_ROOT_PASSWORD=secret --network application --name database mysql
This starts a Docker container running the MySQL service. The mysql Docker image requires the environment variable MYSQL_ROOT_PASSWORD to be set when starting the container.
We have also named the MySQL container database.
We need to create a database to which our application can connect. For this we need shell access to the MySQL container.
The following command gets us shell access to the container.
$ docker exec -it database /bin/sh
Once you have shell access to the container, use the mysql command to get access to the MySQL shell.
# mysql -u root -h localhost -p
As we provided secret as MYSQL_ROOT_PASSWORD while starting the container, enter the same password. You should now have MySQL terminal access.
Issue the following commands on the MySQL terminal to create a database, create a table and add a row to the table.
mysql> create database frodo;
Query OK, 1 row affected (0.02 sec)
mysql> use frodo;
Database changed
mysql> create table hits(id integer, num_hits integer);
Query OK, 0 rows affected (0.06 sec)
mysql> insert into hits values(1, 0);
Query OK, 1 row affected (0.02 sec)
Connecting to the database
Let's change the application code to persist the number of hits and return the number of hits in the response. We will use a Python library called peewee, which allows connecting to the MySQL database and issuing ORM commands. We rely on peewee because we want to avoid writing raw SQL queries.
# hello.py
from flask import Flask
from peewee import MySQLDatabase, IntegerField, Model

# 'database' is the name of the MySQL container on the shared Docker network
db = MySQLDatabase('frodo', user='root', password='secret', host='database', port=3306)


class Hits(Model):
    num_hits = IntegerField()

    class Meta:
        database = db


app = Flask(__name__)


@app.route('/')
def hello_world():
    hit_instance = Hits.select().get()
    hit_instance.num_hits += 1
    hit_instance.save()
    return 'Hits: {}'.format(hit_instance.num_hits)
Code explanation
We created an instance of peewee.MySQLDatabase. This represents a connection to the database. Notice how we used database as the host argument instead of an IP address. Since the web application and the database are on the same Docker network, one container can refer to the other by its container name.
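Peewee will open this connection automatically when the first query runs, which is what this demo relies on. In a web application it is also common to manage the connection explicitly per request. The following is a minimal sketch of that pattern using Flask's request hooks, assuming the db and app objects defined in hello.py above; it is not required for this tutorial.
# Optional sketch: explicit per-request connection handling with peewee and Flask.
# Assumes the db and app objects defined in hello.py above.
@app.before_request
def open_db_connection():
    # Open a connection before handling the request; reuse it if one is already open.
    db.connect(reuse_if_open=True)


@app.teardown_request
def close_db_connection(exc):
    # Close the connection after the request, even if the view raised an error.
    if not db.is_closed():
        db.close()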
Peewee is an ORM and hence needs a corresponding Python class for each database table. Since our table name is hits, we created the Python class Hits, which peewee maps to that table by default.
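As an aside, peewee can also create the table and seed the counter row for us, instead of the manual SQL we ran in the MySQL shell. The following is a rough one-off sketch, assuming the db and Hits definitions from hello.py and that the frodo database itself already exists; you could run it once from any Python environment that has peewee and pymysql installed and can reach the database container.
# One-off setup sketch: let peewee create the table and seed the single counter row.
# Assumes the frodo database already exists and hello.py is importable.
from hello import db, Hits

db.connect(reuse_if_open=True)
db.create_tables([Hits])            # issues CREATE TABLE IF NOT EXISTS for the hits table
if not Hits.select().exists():      # seed the counter only once
    Hits.create(num_hits=0)
db.close()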
We also need to add peewee and pymysql to the requirements.txt file, along with cryptography, which pymysql needs for MySQL 8's default authentication method. The requirements.txt file should look like the following:
Flask
pymysql
peewee
cryptography
Since we changed the application code, we would have to rebuild the image.
Stop and remove the existing application container.
$ docker stop web-application
$ docker rm web-application
Let's rebuild the image.
$ docker build -t flask-application .
We will run the web application using the earlier command, with one addition: the --network application flag.
This is required because the web application and the database need to be on the same Docker network for the web application to be able to connect to the database.
$ docker run --env FLASK_APP=hello.py -p 5000:5000 --network application --name web-application flask-application
Navigate to http://localhost:5000 on your host machine. You should be greeted with Hits: 1. Refresh the page multiple times; every time you refresh, the hit count should increase by 1.
There is a problem with this approach though. Once the MySQL container is removed, all the database changes are lost, and we would have to recreate the database and table on any new container we set up.
This is not ideal. Say a newer version of the database image is released which fixes a security vulnerability; in that case you would prefer to use the newer Docker image. But since the data is internal to the container, removing the container loses the data.
To see it in action, stop and remove the existing mysql container.
$ docker stop database
$ docker rm database
Start another mysql container.
$ docker run --env MYSQL_ROOT_PASSWORD=secret --network application --name database mysql
Refresh the page http://localhost:5000. You will get an Internal Server Error. Check the logs on the web application container's terminal. The error will be:
peewee.OperationalError: (1049, "Unknown database 'frodo'")
This happened because the data was stored inside the container, and since we removed the container its data was lost too. The new container doesn't have a database named frodo yet.
The correct approach is to use a named Docker volume. Docker volumes provide the ability to store container data on the host machine rather than inside the container, so the data can exist outside the lifecycle of the container.
The physical location of this data is managed by Docker, which provides a standardised and simple interface while encapsulating the implementation details.
Let's remove the database container and follow the correct approach of assigning a Docker volume to it. This ensures that the data remains intact even if we remove the container, and we can then attach the same volume to any new container we create.
$ docker stop database
$ docker rm database
First we need to create a Docker volume.
$ docker volume create database-volume
MySQL stores its data at /var/lib/mysql. Hence we need to tell Docker that the data at /var/lib/mysql inside the container should be stored in the Docker volume database-volume.
This is done with the -v or --volume flag while starting the container.
Let's see it in action:
$ docker run --env MYSQL_ROOT_PASSWORD=secret --network application --name database --volume database-volume:/var/lib/mysql mysql
Let's issue the SQL statements again to ensure the database and the table are created.
$ docker exec -it database /bin/sh
# mysql -u root -p
mysql> create database frodo;
Query OK, 1 row affected (0.02 sec)
mysql> use frodo;
Database changed
mysql> create table hits(id integer, num_hits integer);
Query OK, 0 rows affected (0.06 sec)
mysql> insert into hits values(1, 0);
Query OK, 1 row affected (0.02 sec)
Refresh http://localhost:5000. You should see the number of hits increasing with every refresh.
Let's remove the database container and create a new one to verify that the Docker volume works as expected and that the number of hits persists even after the container is deleted.
$ docker stop database
$ docker rm database
Start another mysql container.
$ docker run --env MYSQL_ROOT_PASSWORD=secret --network application --name database --volume database-volume:/var/lib/mysql mysql
Refresh the page. You will notice that the number of hits stays intact and continues incrementing from the last value.
This confirms how Docker volumes can be used to persist data outside of the container lifecycle.
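If you are curious where the data in database-volume actually lives on the host, docker volume inspect database-volume will show you. The same information is available programmatically; below is a small sketch using the Docker SDK for Python, which is an extra dependency (pip install docker) and not part of this tutorial's stack.
# Sketch: look up where Docker stores a named volume's data on the host.
# Assumes the Docker SDK for Python (pip install docker) and a local Docker daemon.
import docker

client = docker.from_env()
volume = client.volumes.get('database-volume')
print(volume.attrs['Mountpoint'])   # e.g. /var/lib/docker/volumes/database-volume/_data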
Docker compose
With the current approach, we would have to share both docker commands, i.e. the command for the web application and the command for the database, with other developers for them to run the application in a Docker setup.
Consider a scenario where more services are involved, say Redis, RabbitMQ etc. This approach would very soon become unmanageable.
Ideally there should be a single command to get this entire infrastructure up. That's where Docker Compose comes into the picture.
Docker Compose is driven by a single file, similar to the Dockerfile. It is written in YAML.
Let's see the docker-compose.yml for our current setup.
version: "3.7"
services:
  web-application:
    build: .
    environment:
      FLASK_APP: hello.py
    ports:
      - "5000:5000"
  database:
    image: mysql
    volumes:
      - database-volume:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
volumes:
  database-volume:
    external: true
The Docker Compose file has the following sections:
- version: the Compose file format version. We are using 3.7.
- services: this lists all the services of the application. Each service has its own configuration.
Since we currently need two services in our application, i.e. the web service and the database service, we created the corresponding services in the compose file. The configuration needed for each service is provided as key-value pairs under that service.
For example, on the web container we want to expose port 5000 of the container on port 5000 of the host, hence the ports key on the service web-application.
Also, for the web container we first need an image to be built, hence the build key on web-application. Since we don't need to build an image for our database service, no build key has been used with database; image has been used instead.
We plan to persist data in a volume and not in the container, hence the volumes key on the service database. Since we already have data in a Docker volume and don't want a new volume to be generated for the database container, we have marked the volume database-volume as external.
You will also notice that no Docker network had to be created or used in the compose file. Compose creates a default network for the project and runs all the services on it, so we didn't have to explicitly create and assign a network to the services.
Before we start containers using Docker compose, let's stop and remove the existing containers.
$ docker stop web-application
$ docker rm web-application
$ docker stop database
$ docker rm database
Let's issue the command docker-compose up to get the entire infrastructure running.
$ docker-compose up
The output will contain several lines like:
Creating network "flask-application_default" with the default driver
Building web-application
WARNING: Image for service web-application was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating flask-application_web-application_1 ... done
Creating flask-application_database_1 ... done
As the output suggests, a Docker network was created. It is named default with the current directory's name prepended to it. As my directory is flask-application, the network name is flask-application_default.
As no image exists with the service name web-application, an image with this name is also built. The directory name is prepended here as well, so the actual image name is flask-application_web-application.
And then two containers named flask-application_web-application_1 and flask-application_database_1 were created.
You should be able to access http://localhost:5000 and see the hit count going up on every refresh.
We can stop and bring down the entire infrastructure with docker-compose down.
$ docker-compose down
This stops as well as removes the running containers. However, it leaves the image intact, so the image flask-application_web-application should still be available. Docker does this to avoid recreating the image on the next run of docker-compose up.
Override container and image name
Docker Compose provides the ability to control the names of containers. We want the container name to be web-application rather than flask-application_web-application_1. This needs a key called container_name on the service web-application.
This change makes docker-compose.yml look like:
version: "3.7"
services:
  web-application:
    build: .
    environment:
      FLASK_APP: hello.py
    ports:
      - "5000:5000"
    container_name: web-application
  database:
    image: mysql
    volumes:
      - database-volume:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
volumes:
  database-volume:
    external: true
With this change, running docker-compose up should create a container named web-application rather than flask-application_web-application_1.
Compose also gives the ability to control the image name. We can name the image flask-application rather than the default flask-application_web-application by using the image key on the service. This makes the service web-application look like:
web-application:
  build: .
  environment:
    FLASK_APP: hello.py
  ports:
    - "5000:5000"
  image: flask-application
  container_name: web-application
Try docker-compose up again and you will notice that the existing image flask-application_web-application isn't used; instead a new image called flask-application is built, and the container for the service web-application is created using this new image.
Environment variables in docker compose
We do not need to hard-code everything in docker compose file. It can also be provided values from environment variables.
Assume we want to have two deployments of this application on the same server, one for a staging environment and another for user acceptance testing, i.e uat, environment.
Say, we bind port 5000 of the host to container's port 5000 for staging environment. Hence, we cannot bind the host's port 5000 to container port 5000 for uat environment, and will need to use another port.
This is a situation where environment variables can help.
Create a file called docker-compose.staging.env in the current directory with the following contents:
WEB_APPLICATION_PORT_MAPPING=5000:5000
Change the ports key of docker-compose.yml to contain ${WEB_APPLICATION_PORT_MAPPING}. The contents of docker-compose.yml should look like:
version: "3.7"
services:
  web-application:
    build: .
    environment:
      FLASK_APP: hello.py
    ports:
      - ${WEB_APPLICATION_PORT_MAPPING}
    image: flask-application
    container_name: web-application
  database:
    image: mysql
    volumes:
      - database-volume:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
volumes:
  database-volume:
    external: true
Thus we direct Docker Compose to use the environment variable WEB_APPLICATION_PORT_MAPPING to decide the port mapping. We also need to use this env file while running docker-compose up, which is done by passing --env-file.
$ docker-compose --env-file docker-compose.staging.env up
Verify that things work as expected on http://localhost:5000.
Let's mimic having a uat environment. Create another directory called flask-application-uat containing a copy of the code from the flask-application directory.
$ cd ..
$ cp -r flask-application flask-application-uat
$ cd flask-application-uat
Create another file called docker-compose.uat.env with the following contents:
WEB_APPLICATION_PORT_MAPPING=5001:5000
So we will use the host machine's port 5001 for running uat.
Let's do a docker-compose up for this environment.
$ docker-compose --env-file docker-compose.uat.env up
This leads to the following error:
Creating flask-application-uat_database_1 ... done
ERROR: for web-application Cannot create container for service web-application: Conflict. The container name "/web-application" is already in use by container "b10d6b3b92fd351a6c0cbf2d458319d308b3ef4bc8bc8366aa45124129b45927". You have to remove (or rename) that container to be able to reuse that name.
ERROR: Encountered errors while bringing up the project
Docker compose inheritance
The previous command failed because we are trying to name the container web-application for the uat environment too, while the staging environment has already taken that container name. Hence we need to set a different container name for the uat environment.
We want every other piece of the Docker Compose file to remain as it is and just want to change the container_name. The answer to that is inheritance.
Let's create a file called docker-compose.override.yml with the following contents:
version: "3.7"
services:
  web-application:
    container_name: web-application-uat
So we have overridden container_name to web-application-uat in this override file. When we run docker-compose up, Compose reads this file in addition to docker-compose.yml, merges the contents of both, and hence treats the container name as web-application-uat. You can inspect the merged configuration with docker-compose --env-file docker-compose.uat.env config.
Let's run the command again.
$ docker-compose --env-file docker-compose.uat.env up
And with this, the uat environment should be up too. This demonstrates how an environment file and a compose override file can come in handy in customising the Docker setup for different environments.
Stay tuned for more posts on Docker!
The Essential Plug
UrbanPiper is a growing team and we are looking for smart engineers to work with us on problems in the sphere of data, scale and high-throughput integrations.
If you are interested to be a part of our team, feel free to write in to us at [email protected]