Using Docker Compose for Development Environment Dependencies

Have you ever configured a local PostgreSQL database for a development project? How about for multiple projects at the same time? Needing particular versions of databases and other helper services for different projects is a common scenario, and one that Docker containers solve quite elegantly. Let's look at how to do this for a simple Flask application.

This is the third article in a series about building a real-life Flask web application. By now, we have our project structure and a basic hello-world app. We'd like it to do something useful, as originally planned.

We'll want to have sessions and data in a database - for this, we need places to store the stuff. Let's go with Redis and PostgreSQL as our choices of temporary storage and relational db. We will focus on the needs of a development environment for now; this setup is not meant for a production setting.

So, Why Docker?

I prefer containerized services that are used by a single project, because it feels cleaner. If you work on several projects, each with their own set of dependencies that need to be present in a particular version, having containers can be simpler than maintaining several versions of a database on your machine.

Another big upside of using Docker is that it makes it relatively simple for a new person to get a dev environment running, without putting much work into figuring out how to install and configure services on their OS.

Especially if you're not using a developer-friendly OS. A word of caution though: it can be a pain on OSX. I ran into trouble the last few times I gave it a go.

Running Multiple Containers - Docker Compose

Docker Compose is a great tool for starting multiple containers, specifying things like custom port forwarding to the local machine, volumes to store data, and environment variables to pass along. It's the spiritual successor of fig, an earlier tool used for the same purpose, with almost the same syntax.

As it's using Docker, there are plenty of cool container images which are more than suitable for providing various containerized services. As this is the dev environment, we don't need to worry much about reliability or maintenance tasks. If something changes, we can just start them from scratch.

Setting Up

We start by installing docker-compose. It's a Python module, and can be installed system-wide. Here's how it looks on Ubuntu, given that you have python-pip in place:

$ sudo pip install docker-compose

I prefer not to install this kind of tool into a particular virtualenv.

To make docker-compose do something useful, you’ll need to have Docker installed on your system. Also, your current user should be able to execute docker commands, otherwise you’ll need sudo powers for any container operation to take place.
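
If your user can't run docker commands yet, adding it to the docker group is the usual fix on Ubuntu (you'll need to log out and back in for it to take effect), and a quick hello-world run verifies the setup:

$ sudo usermod -aG docker $USER
$ docker run hello-world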

The Configuration

When executed, docker-compose looks for a docker-compose.yml file in the current working directory.
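
If the file lives somewhere else or has a different name, you can point docker-compose at it explicitly with the -f flag:

$ docker-compose -f path/to/docker-compose.yml up -d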

Here is the content of the docker-compose.yml file we’re going to use for this project:

version: '2'

volumes:
  # for persistence between restarts
  postgres_data: {}

services:
  db:
    image: postgres:9.6.3
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dbpw
    ports: # make db accessible locally
      - "127.0.0.1:5432:5432"
  redis:
    image: redis:3.2.9
    ports: # make redis accessible locally
      - "127.0.0.1:6379:6379"

The idea is to use Docker containers for development dependencies, and have a local, non-containerized development server access them. Let's go through the configuration step by step.

The very first line refers to the version of the docker-compose configuration format we're using. There's a version 3 by now, but I chose to stick with 2. There are two sections: volumes, defining the volumes we want to create for storing data, and services, describing the configuration and images from which containers will be created.

About Volumes

Docker volumes can be mounted into containers and are persistent between container rebuilds/reruns.

Abstracting the details away, they provide a place to save data which is not wiped as easily. An alternative would be to mount container directories to local folders on the host.

We're defining a volume called "postgres_data" and later telling the postgres container to use it as the /var/lib/postgresql/data directory. That's where this particular image saves the database data by default.
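
You can inspect the created volume with the regular Docker tooling. Note that docker-compose prefixes the volume name with the project name, which defaults to the name of the directory containing the docker-compose.yml file (myproject here is just a stand-in):

$ docker volume ls
$ docker volume inspect myproject_postgres_data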

To delete a volume and the data contained within, you just need to tell docker-compose to stop and remove the containers, taking the volumes with them:

$ docker-compose down -v

The -v stands for ‘volumes’.

Services

We're interested in PostgreSQL and Redis, each in a very particular version (the part after the colon is the image tag).

We are using images which are available on Docker Hub. They are properly maintained (if official) and come in many versions and forms. That's it! The 'db' and 'redis' names are chosen at will, and can be used for maintenance commands or as hostnames to establish connections between containers in a single docker-compose stack.
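
The service names come in handy right away. For example, you can open a psql shell inside the running database container by referring to it as db:

$ docker-compose exec db psql -U postgres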

Let's look at the individual configuration blocks of those services in detail.

Environment Variables

In this case, we're providing env vars right in the config, which is completely acceptable for a development setup. It's only the database credentials, which match the ones being set in the environment variables for the app.

If we wanted to keep something secret, or to use variables in the template to save typing, we'd use a .env file in the same folder as the docker-compose.yml. It would be read automatically. Writing lines like

    environment:
      POSTGRES_USER: ${POSTGRES_USER}

would be possible. The dollar notation is not only for the environment blocks. This makes docker-compose.yml a template-able file, and lets you pass variables on without leaking secrets.
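
A minimal .env file for the snippet above could look like this (the values are just illustrative):

POSTGRES_USER=postgres
POSTGRES_PASSWORD=dbpw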

Ports

In the case of Redis, we're making the 6379 port (right side) accessible as 127.0.0.1:6379 on the host machine. So only locally, on the port you'd expect. If we had multiple Redis services for different projects which needed to run separately and at the same time, we'd choose non-conflicting ports for each.
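
For a second project, the mapping might look like this - the container-side port stays 6379, while 6380 is just an arbitrary free port on the host:

    ports:
      - "127.0.0.1:6380:6379"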

Take a look at the docs for more explanations and options.

Running it

From scratch, we’d just need to tell docker-compose to up our stuff:

$ docker-compose up -d

It will pull and start the containers in the background. If we're interested in the output, we'd run it without the -d argument. That would block the terminal tab, but also give direct access to the live output.

If I'm unsure whether something will run, I usually execute the containers without -d for the first time, take a look at the output, ctrl + c them and re-run with -d.
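
Either way, you can check on the current state of the stack at any time:

$ docker-compose ps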

A better way might be to run it with -d and attach to the output with tailing enabled:

$ docker-compose logs -f

To stop the containers, you'd simply run

$ docker-compose down

If you also want to remove the volume data, add the -v parameter and you're good.

In Conclusion

Using docker-compose is pretty convenient if you need temporary, reproducible services for your development environments. I really prefer it to using OS-provided packages in most cases. Running multiple pre-packaged apps which are sure to be configured correctly, can talk to each other and are accessible from the host is everything you need to get started.

Data can be made persistent enough for dev purposes (surviving container and host restarts), so you don't need to recreate your environment more often than you want to.
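
To tie this back to the app: here's a quick sanity check showing that a local, non-containerized Python process can reach both services. This is a minimal sketch, assuming the psycopg2 and redis packages are installed in your virtualenv:

import psycopg2
import redis

# Talk to the containerized PostgreSQL instance,
# using the credentials from docker-compose.yml
conn = psycopg2.connect(
    host="127.0.0.1",
    port=5432,
    user="postgres",
    password="dbpw",
    dbname="postgres",
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])  # should mention PostgreSQL 9.6.3
conn.close()

# Talk to the containerized Redis instance
r = redis.StrictRedis(host="127.0.0.1", port=6379)
r.set("hello", "world")
print(r.get("hello"))  # b'world'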

Next up, we're going to iterate on the Python Flask app and make it do something useful - talk to the Spotify API, interact with a database and handle sessions using our Dockerized dev setup.
