Victor is a full stack software engineer who loves travelling and building things. He most recently created Ewolo, a cross-platform workout logger.
Dockerized PostgreSQL with local data storage on Ubuntu

Docker has given developers the ability to easily switch between project setups. In this tutorial, we will create a Docker PostgreSQL container and use a local folder to store its data. The tutorial is written for Ubuntu, but the commands can easily be adapted for other operating systems.

Pre-requisites

This article assumes that you have the following installed:

  • Docker
  • the psql command-line client (used later to connect to the database)

Creating a Docker container from a pre-built PostgreSQL image is fairly straightforward. The tricky bit comes into play when we want to use a separate location to store the actual database data. Note that by default, PostgreSQL stores its data under /var/lib/postgresql/data on an Ubuntu installation.
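Once a container is running (as we do below), this can be verified by asking PostgreSQL itself; the container name here matches the one used later in this tutorial:

```shell
# Container name as used later in this tutorial
CONTAINER=local-postgres9.6.7

# Ask the server itself where it keeps its data;
# this should print /var/lib/postgresql/data
docker exec -it "$CONTAINER" psql -U postgres -c 'SHOW data_directory;'
```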

We thus have two options, as provided by Docker.

Use a local docker volume

This is the recommended "Docker" way, where we first create a named volume:


docker volume create postgres-data  

We then start our docker container and mount the created volume via the following command:


docker run --name local-postgres9.6.7 -p 5432:5432 -e POSTGRES_PASSWORD=ppp --mount source=postgres-data,destination=/var/lib/postgresql/data postgres:9.6.7-alpine

Note that in the above case the volume is most likely created under /var/lib/docker/volumes/postgres-data/ and the actual db contents would be found under /var/lib/docker/volumes/postgres-data/_data. This information can be verified by running the following commands and paying close attention to the Mounts section:


docker volume inspect postgres-data
docker inspect local-postgres9.6.7
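If you just want the host path backing the volume, the --format flag of docker volume inspect can extract the mount point directly:

```shell
# Print only the host path backing the named volume
docker volume inspect postgres-data --format '{{ .Mountpoint }}'
# Typically /var/lib/docker/volumes/postgres-data/_data
```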
Using a local folder mount

The other option is to simply mount a local folder into the desired location in the container. This can be achieved via the following command:


docker run --name local-postgres9.6.7 -p 5432:5432 -e POSTGRES_PASSWORD=usesomethingbetter --mount type=bind,source=/home/victor/npq/docker/postgres-data,target=/var/lib/postgresql/data postgres:9.6.7-alpine

Note that after mounting the local folder, its ownership is changed to a different user, and you will need to run sudo chown -R user:group /home/user/docker/postgres-data to be able to access it again. The ownership changes every time the container is started.
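A sketch of that clean-up, using the folder from the command above (substitute your own path; the chown target here is simply the current user and group):

```shell
# Folder mounted into the container in the command above
DATA_DIR=/home/victor/npq/docker/postgres-data

# Check the current ownership of the mounted folder
ls -ld "$DATA_DIR"

# Restore ownership to the current user and group
sudo chown -R "$(id -un):$(id -gn)" "$DATA_DIR"
```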

Connecting to the docker instance

In the above commands we created a container with the following attributes:

  • named it local-postgres9.6.7
  • exposed port 5432
  • set the POSTGRES_PASSWORD environment variable (which is also the postgres user's password)
  • mounted a volume/local folder at /var/lib/postgresql/data

Connecting to the running database is thus as simple as:


psql -h localhost -U postgres

postgres=# CREATE ROLE dummy WITH CREATEDB LOGIN PASSWORD 'dummy';
CREATE ROLE
postgres=# CREATE DATABASE dummy;
CREATE DATABASE
postgres=# \du
                                    List of roles
  Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
  dummy     | Create DB                                                  | {}
  postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=# \q
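The newly created role can then be used to connect to its own database (the password is dummy, as set in the CREATE ROLE statement above; PGPASSWORD avoids the interactive prompt):

```shell
# Connect as the dummy role created in the psql session above
PGPASSWORD=dummy psql -h localhost -U dummy -d dummy -c '\conninfo'
```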
Container management

Starting and stopping the container is as easy as:


docker stop local-postgres9.6.7
docker start -a -i local-postgres9.6.7

Voilà, we now have a throwaway Postgres database which we can spin up depending on the project configuration, with the data saved across multiple runs and even across containers as required.

Listing and deleting containers and volumes can be done via the following:


docker container ls --all
docker container prune
docker volume ls
docker volume prune
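Note that prune removes all stopped containers and all unused volumes, not just the ones from this tutorial. To delete only what we created here, target the container and volume by name:

```shell
# Remove just this tutorial's container and its volume
docker rm local-postgres9.6.7
docker volume rm postgres-data
```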
docker-compose (Updated Nov 3, 2020)

A final approach to setting up a local Postgres database is docker-compose, which essentially lets you configure multiple Docker containers together in a single file. Most of the time you reach for docker-compose when you have several containers (e.g. a database and a queue) that depend on each other and you need to configure how they interact. In the following example, however, we will look at a sample docker-compose.yml file for just one container, our Postgres database:

version: "3.3"
services:
  db:
    image: postgres:12
    restart: unless-stopped
    environment:
      POSTGRES_DB: "postgres"
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "supersecretpassword"
    ports:
      # Port exposed : Port running inside container
      - "5432:5432"
    volumes:
      - radmon-db:/var/lib/postgresql/data

volumes:
  radmon-db:

As can be seen in the above configuration, we have set up a Docker container that uses the postgres:12 image, exposes port 5432, and mounts a named volume, with the data stored under /var/lib/docker/volumes/<project-name>_<volume-name> (on a Linux-based system). Finally, note that restart: unless-stopped keeps the container running (and restarts it automatically) until it is stopped manually, e.g. via Ctrl+C or docker-compose down. Check the documentation for further options.

Starting the services is as simple as running docker-compose up in the folder where the compose file is located (generally in the project root).
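A few other docker-compose commands that are handy for day-to-day use (the db service name matches the compose file above):

```shell
# Start in the background instead of attached to the terminal
docker-compose up -d

# Follow the database logs
docker-compose logs -f db

# Stop and remove the containers; the named volume is preserved
docker-compose down

# Pass -v to also delete the named volume (and therefore the data)
# docker-compose down -v
```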

References and further reading

In the next article, we will look at using docker-compose to achieve something similar, but with more services and applications rolled up into a single project setup! As always, please feel free to get in touch with any questions/comments :).