Video Version: https://youtu.be/qczbRQtmCDo
In our most recent articles on Docker, we looked at standing up a basic CRUD app with Docker and using storage in Docker containers. In this article, we will be using Docker Compose for the first time.
Docker Compose is a very powerful tool that’s used to manage multiple containers, called services, with a single file.
A key concept to understand is that, when we say Docker Compose, we must distinguish between the docker-compose.yml file (which we will build in this article) and the docker compose CLI, the set of commands we can type directly at the command prompt.
In this article we will use the latest version of the Docker Compose CLI: docker compose (without the dash). It’s written in the Go programming language and is fully compatible with the previous Python version, docker-compose (with a dash). Both work, and for this tutorial it makes no difference which one you use!
So Docker Compose is used for managing multiple containers. Do I have to have many services to use Docker Compose? No, actually - it can also be used with a single service. In this example, we will start from the TinyStacks Express repository and add what we need to make our application communicate with a database - in this case, Postgres.
First of all, to check if Docker is up and running and see some available commands, open a command prompt and type:
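Running the base command with no arguments prints the usage summary and the list of subcommands:

```shell
docker compose
```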
You should get something like this:
These are all the basic commands available for docker compose. Don’t worry, you don't have to memorize them all.
Now, let's clone the public TinyStacks repository:
Get into the directory:
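The two steps above might look like this (the repository URL and folder name shown here are assumptions; use the actual URL of the TinyStacks Express repository):

```shell
git clone https://github.com/tinystacks/aws-docker-templates-express.git
cd aws-docker-templates-express
```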
Open the folder with your favorite IDE. If you are using Visual Studio Code, you can type at the prompt:
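```shell
code .
```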
Install the Dependencies
We need some more dependencies for this demo:
- [pg](https://www.npmjs.com/package/pg), which is the NPM package to connect to Postgres.
- [sequelize](https://sequelize.org/), which is an Object Relational Mapping (ORM) tool. We could use another ORM package, or even no ORM at all. The ORM is helpful because it’ll automatically create the table and perform inserts, updates, and deletes without our writing SQL commands.
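Both can be installed with a single npm command:

```shell
npm install pg sequelize
```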
Since we are using Typescript, we can optionally install sequelize types for Typescript:
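For example (note that recent versions of sequelize ship their own TypeScript definitions, in which case this step is unnecessary):

```shell
npm install --save-dev @types/sequelize
```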
Edit the Current Repository
You can edit the current repository according to your needs! Let’s step through an example.
In the src folder, create a new folder called util. Navigate into this util folder and create a new file called database.ts.
At this point, your folder structure should look like this:
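A minimal sketch of the relevant part of the tree (the repository's other files and folders are omitted here):

```
src/
└── util/
    └── database.ts
```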
This file is needed to configure the database connection between our Node.js backend and the Postgres container.
Fill in the database.ts file with the following code:
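A minimal sketch of database.ts, assuming the Sequelize ORM installed earlier (the exact connection options may differ in your setup):

```typescript
// src/util/database.ts
import { Sequelize } from 'sequelize';

// Read the connection settings from the environment (see the
// four variables explained below).
const sequelize = new Sequelize(
  process.env.PGDATABASE as string, // database name
  process.env.PGUSER as string,     // database user
  process.env.PGPASSWORD as string, // user's password
  {
    host: process.env.PGHOST,       // the Postgres container's name
    dialect: 'postgres',
  }
);

export default sequelize;
```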
process.env.XYZ (where XYZ is our variable name) is how Node.js reads environment variables. Here, we have four:
- PGDATABASE: The database to connect to. This will be created by Postgres as soon as we start the Postgres container.
- PGUSER: The default database user.
- PGPASSWORD: The default user’s password.
- PGHOST: IMPORTANT! This is how the Node.js application will find the Postgres container. We will see that Docker finds containers by their container name.
Now let's create a template for our database to store information about our users. We will create a specific file, inside a
models folder, which will be read by Sequelize to create a table and perform all the necessary SQL queries. Again, this is not strictly necessary, but it’s very convenient to focus on the important part that will come soon: Docker Compose.
In the src folder, let's create a folder called models. Inside this folder, let's create a file called users.ts.
Let's create our model with these fields:
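A sketch of the model; the id column matches the auto-increment behavior used by our endpoints, while the other field names (name, email) are assumptions for illustration:

```typescript
// src/models/users.ts
import { DataTypes } from 'sequelize';
import sequelize from '../util/database';

// Defines the "users" table; Sequelize creates it when we synchronize.
const Users = sequelize.define('users', {
  id: {
    type: DataTypes.INTEGER,
    autoIncrement: true, // Postgres assigns ids automatically
    allowNull: false,
    primaryKey: true,
  },
  name: { type: DataTypes.STRING, allowNull: false },
  email: { type: DataTypes.STRING },
});

export default Users;
```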
Here we define a new users table. This will be created by Sequelize as soon as we synchronize it.
In the src folder, let's create a new controller file to handle the different API calls. Let's call it local-user.ts, to be consistent with the naming of the existing controllers.
For this to work, we must import:
- The types for HTTP requests in TypeScript
- The headers for the responses
- The database configuration (the database.ts file inside the util folder)
- The user's model (the users.ts file inside the models folder)
Let's add these 4 lines to the beginning of the file:
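The four imports might look like this (the name and path of the headers helper are assumptions based on the repository layout):

```typescript
// src/local-user.ts
import { Request, Response } from 'express'; // TypeScript types for HTTP requests
import { commonHeaders } from './config';    // assumed name/path for the response headers
import sequelize from './util/database';     // the database configuration
import Users from './models/users';          // the user's model
```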
We will create five endpoints, in order, using the appropriate HTTP request methods to implement each:
- createOne: A POST request to create a new user. The POST body contains the JSON with the necessary parameters. Note that we do not enter the user id; Postgres enters this automatically as an auto-increment. In case of success, we return the object and a 201 HTTP status code; in case of error, we return a 500 status.
- getAll: A GET request that selects all the users in the table and returns them as a JSON array. To show them to the user, we use the map method and collect the values from each element's dataValues. If there’s an error, we return a 500 status.
- getOne: A GET request to return a single user. We use the id parameter to perform a search with the findByPk (find by primary key) method. If the user exists, it is returned as JSON. If there is an error, we return a 500 status. If the user does not exist, we return an empty object with a 200 status, because it isn’t an error with the database but simply a user who does not exist. This could be handled better for sure!
- updateOne: A PUT request to modify an already existing user. This is for demonstration purposes only; proper error handling here would be quite complex and depends on many factors. The base case shown here requires a PUT body with the new parameters and the id in the URL. In case of a malformed request we return an HTTP 400 error; in case of error, we return 500; and in case of a correct modification we return a 200 status and a response body of 1, which means that one (1) row has been modified in the database.
- deleteOne: A DELETE request to delete an existing user. Just pass the id in the URL and the user’s record will be removed from the database.
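As an illustration, two of the five handlers might be sketched like this (field names and error shapes are assumptions, and the header helper is omitted for brevity):

```typescript
import { Request, Response } from 'express';
import Users from './models/users';

// POST: create a new user; Postgres assigns the id automatically.
export const createOne = async (req: Request, res: Response) => {
  try {
    const user = await Users.create(req.body);
    res.status(201).json(user); // 201 Created on success
  } catch (err) {
    res.status(500).json({ error: String(err) }); // 500 on any DB error
  }
};

// GET: return all users as a JSON array, using map over dataValues.
export const getAll = async (_req: Request, res: Response) => {
  try {
    const users = await Users.findAll();
    res.status(200).json(users.map((u) => u.dataValues));
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
};
```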
This is a good start. But it’s not enough.
We’ve created the five functions. Next, we have to connect these controllers to some URL paths (called routes in REST API parlance) and then import them. To do this, we need to edit the main Express application file.
Let's add these three lines to the import section:
Then, add the new routes:
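Assuming the controller file created above, the import and the route registrations might look like this (the /local-users path is an assumption; app is the existing Express instance):

```typescript
import * as localUser from './local-user';

// Wire each controller function to its route and HTTP method.
app.post('/local-users', localUser.createOne);
app.get('/local-users', localUser.getAll);
app.get('/local-users/:id', localUser.getOne);
app.put('/local-users/:id', localUser.updateOne);
app.delete('/local-users/:id', localUser.deleteOne);
```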
Let’s comment out these 2 lines:
Finally, we have to synchronize sequelize before launching our application to create the user table.
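One way to do this, assuming app is the Express instance and the application listens on port 80 internally:

```typescript
import sequelize from './util/database';

sequelize
  .sync()             // creates the users table if it does not exist yet
  .then(() => {
    app.listen(80);   // start the server only once the DB is in sync
  })
  .catch((err) => console.error(err));
```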
Do you remember in the previous article when we had to write those long commands?
What if I told you that can be avoided by writing the commands in a more declarative way to a file? That would be great, right?
Well, Docker Compose does exactly that. Let's see a practical example now.
First of all, let's create a docker-compose.yml file in the root of our project:
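A sketch of the file, reconstructed from the settings described in this article; the user, password, and database values are assumptions:

```yaml
version: '3'
services:
  backend:
    container_name: backend
    image: backend
    build:
      context: .
    ports:
      - '8080:80'          # 8080 externally, 80 internally
    environment:           # read by database.ts in the util folder
      - PGDATABASE=users
      - PGUSER=postgres
      - PGPASSWORD=example
      - PGHOST=db          # matches the db container's name below
    depends_on:
      - db                 # start the database before the backend
  db:
    container_name: db
    image: 'postgres:12'
    ports:
      - '5432:5432'
    environment:           # variables from the official Postgres image docs
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=example
      - POSTGRES_DB=users
    volumes:
      - pgdata1:/var/lib/postgresql/data   # persistence
volumes:
  pgdata1:
```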
Let’s examine this line by line.
The first line is the Docker Compose version and must be specified.
Then there are services, a synonym for containers. We will have two containers - oops, sorry, services:
The first will, not surprisingly, contain our Node.js backend application, and we can also specify:
container_name: Used to define the name of the container when we run our backend application.
image: The image to use or build (if not present).
build: Defines some parameters to build the image directly, without using the docker build command! This is very convenient as we will see shortly.
ports: To define the ports to be published; in this case, 8080 externally and 80 internally.
environment: Here we can define some environment variables - the configuration for our application. We define the four variables that are used in the database.ts file inside the util folder in our application.
depends_on: Manages dependencies between containers. In this case, we want the container running our database to start before our application container. (The database service is simply called db because I’m lazy.)
Let me explain the db container definition line by line as well.
container_name: the name of the container. This is very important! Do you see the name of the PGHOST variable? This is how the connection between the two containers takes place!
image: 'postgres:12': Here we don’t use our custom image (although we could) but simply the prebuilt image for Postgres on Docker Hub. That means we do not specify a build entry here.
ports: We use the standard one for Postgres, 5432, for both external and internal ports.
environment: The environment variables here are those suggested by the official Postgres documentation: user, password, and database.
volumes: pgdata1. This is for persistence. For an introduction to volumes, you can see our previous article. Here, we are using the pgdata1 volume, which is defined below in the top-level volumes section of the file.
docker compose up
You might think that, to run all the containers, we should do something like docker run for each one. But there’s a better way:
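A single command builds the backend image (if needed) and starts both services:

```shell
docker compose up
```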
To check if the db is up and running, you can type the following (replace the -U parameter value with your username):
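One way to do this is to open a psql shell inside the db container (the postgres user name is an assumption; it must match the POSTGRES_USER value in your compose file):

```shell
docker exec -it db psql -U postgres
```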
Once inside, you can type \l to list the databases and \q to exit the Postgres container. Since docker compose up started both services, our Node.js application is already running as well.
Now let's test our endpoints. First of all, let's ensure the application is up and running:
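For instance, with curl (the /ping path is an assumption; use whichever health-check route your template exposes):

```shell
curl http://localhost:8080/ping
```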
To check all the users, make a GET request:
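The /local-users route name here is an assumption matching the controller file created earlier:

```shell
curl http://localhost:8080/local-users
```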
And of course, it's empty now. So let's create a new user:
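A sketch of the POST request (the route and the body's field names are assumptions):

```shell
curl -X POST http://localhost:8080/local-users \
  -H 'Content-Type: application/json' \
  -d '{"name": "Ada", "email": "ada@example.com"}'
```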
Let's try to get all the users again:
As you can see, we now have an array of one element.
Let's create a couple more:
To get a single user, we can add the id of the user at the end of the URL:
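For example, to fetch the user with id 1 (route name assumed as before):

```shell
curl http://localhost:8080/local-users/1
```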
To update an existing user, we can make a PUT request, adding the id of the user at the end of the URL and attaching a request body:
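A sketch of the PUT request (route and field names assumed):

```shell
curl -X PUT http://localhost:8080/local-users/1 \
  -H 'Content-Type: application/json' \
  -d '{"name": "Ada Lovelace"}'
```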
Here you can see the updated user:
To delete a user, we can make a DELETE request, adding the id of the user at the end of the URL. If our call returns a 200 response code, that means the user has been deleted.
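For example, deleting the user with id 3 (route name assumed):

```shell
curl -X DELETE http://localhost:8080/local-users/3
```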
If we try to get all the users again, we can see that the one with id 3 is no longer there:
Now that you're familiar with Docker Compose and an Express app, the next step is deploying. Check out the TinyStacks docs on deploying apps to AWS. You stay focused on your apps; we'll take care of configuring AWS for security, networking, auto-scaling, pipelines, and stages for your team. If you have any comments or questions, please write them below!