Migrate local database on Docker container to AWS

Francesco Cuilla

Sep 8, 2021


How to migrate your local Postgres database to AWS RDS (Relational Database Service)

In this article, we’ll see how to migrate your local Postgres database to AWS RDS.

Video version: https://youtu.be/87G3iUl-tj0

Just a couple of definitions before we start:

Postgres (PostgreSQL): a popular, stable, free and open-source relational database.

Amazon Relational Database Service (RDS): a service to set up, operate, and scale a relational database in the cloud. Amazon RDS supports several database engines:

  • Amazon Aurora
  • PostgreSQL
  • MySQL
  • MariaDB
  • Oracle Database
  • SQL Server

In this article, we’ll see how to migrate our local Postgres database to Amazon RDS using pg_dump and psql.


Usually what we want is a local database to test in the initial phase. But then, after a while, we want to bring it to the cloud - for example, onto AWS.

This is the full procedure. It isn't strictly tied to Docker, though it will help to have Docker installed locally on your machine:

  1. Clone the repository
  2. Run the local instance of Postgres using docker compose
  3. Make some inserts in the database using Postman
  4. Make a backup of the database using Docker and psql
  5. Create an instance of RDS Postgres in AWS
  6. Restore the Postgres database on RDS using psql
  7. Final check

Clone the repository

To do this, we will use the project from a previous article (there's also a video version).

This repository is based on the official TinyStacks AWS Docker Express application repository on GitHub: https://github.com/tinystacks/aws-docker-templates-express.

To clone the repository, open a command prompt and type:

And then change folders:

Open this folder with your favorite IDE. If you have Visual Studio Code you can use the command:
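Assuming the template repository linked above (the folder name is assumed to match the repository name), the commands look like this:

```shell
# Clone the TinyStacks Express template repository
git clone https://github.com/tinystacks/aws-docker-templates-express.git

# Change into the project folder
cd aws-docker-templates-express

# Optionally, open the project in Visual Studio Code
code .
```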

Run the services using docker compose

Before starting our services, let's take a look at the docker-compose.yml file we used in the previous article. (If you want a more detailed explanation about these commands, you can check out my previous video.)

We have two services, plus a volume called pgdata1 where we will store our data.

Let's run the database service with this command:

And then the node backend service:

Let's check the running containers with this command:
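As a sketch, assuming the compose file names the services `database` and `backend` (check your docker-compose.yml for the actual service names), the three commands look like:

```shell
# Start only the Postgres service in the background
docker compose up -d database

# Start the Node backend service
docker compose up -d backend

# List the running containers to verify both are up
docker ps
```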


Now we’re ready to make our inserts into the database.

Database Inserts Using Postman

You can make the inserts into the database in different ways. In this article, we will use Postman, a tool for testing APIs.

Check if the backend is up and running by creating a new [GET] request at the /ping endpoint, port 8080:


Check the current users by creating a new [GET] request at the /users endpoint, port 8080. The response will be an empty array ([]) with status 200, which is what we expect:


Create three new users on this database with a new [POST] request at the /users endpoint, port 8080:




If we check the users again via the /users endpoint, we will see the three new users.
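If you prefer the terminal over Postman, the same requests can be sketched with curl (the user payload fields here are assumptions; adjust them to the API's actual schema):

```shell
# Health check against the /ping endpoint
curl http://localhost:8080/ping

# List users (returns [] before any inserts)
curl http://localhost:8080/users

# Create a user (field names are an assumption)
curl -X POST http://localhost:8080/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Jane Doe", "email": "jane@example.com"}'
```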


Now we are ready to create the backup of the database using the pg_dump command.

Database Dump with pg_dump

pg_dump is a utility for backing up a Postgres database. It makes consistent backups even if the database is being used concurrently.

pg_dump does not block other users accessing the database.

The good news is that we can use docker exec to dump the database:

Run this command:
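A minimal sketch of the dump, assuming the Postgres container is named `postgres-db` and uses the default `postgres` user and database (adjust these to match your compose file; `docker ps` shows the real container name):

```shell
# Dump the database from inside the running container to a local file
docker exec -t postgres-db pg_dump -U postgres -d postgres > backup.sql
```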

That was easy!

We are now ready to move on to the AWS console to create an RDS instance.

Create RDS Instance on AWS (UI)

Let's create the AWS RDS instance.

On the AWS UI console, search for RDS:


Click on Create Database:


Here, unsurprisingly, you can choose a database engine. Choose PostgreSQL, version 12.5 (the same version we used locally, and it's also eligible for the AWS Free Tier, which is nice!), and select the Free tier template.


In the Settings section, you can choose a name for the database, a username, and a password:


Under Connectivity, select Yes for Public access (it’s set to No by default) so we can access it from our local machine.

You also need a VPC with Security group open to the public. To learn more about security groups, check out the AWS documentation.


Leave everything else as it is, and click Create database. (Note that since we are on the Free Tier, we will not be charged.)


Database creation will usually take a couple of minutes.


If you click on the database name, you will see a summary of it:


Now that we have the Postgres instance running on AWS, we can use the psql command to restore the database.

Database Migration using Docker and psql

First of all, let's make a connection test to the AWS Postgres database:


To check if it's working, we can type the following command (don't forget the ; at the end!):
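A sketch of the connection test, assuming the default postgres user and database; replace the endpoint placeholder with the one shown on your RDS instance's summary page:

```shell
# Connect to the remote RDS instance (you'll be prompted for the password)
psql -h <your-rds-endpoint>.rds.amazonaws.com -p 5432 -U postgres -d postgres

# Then, at the psql prompt, run any simple query, for example:
#   SELECT version();
```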


This means that the remote Postgres instance is accessible from our local machine!

Let's exit the current psql session with the \q command.

Now, for ease of use, navigate into the folder where the backup.sql file is located. In our case, it's the same folder where we ran our services:


Database Migration

Now comes the interesting and most important part: the migration to AWS. To do this, we run psql against the remote instance and feed it the backup.sql file we created earlier:
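A sketch of the restore, run from the folder containing backup.sql (the endpoint placeholder and default credentials are assumptions; use the values from your RDS console):

```shell
# Replay the local dump against the remote RDS database
psql -h <your-rds-endpoint>.rds.amazonaws.com -p 5432 -U postgres -d postgres -f backup.sql
```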


Final Test

Now let's get back to our remote instance:
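As a sketch, reconnect and query the migrated data (the table name `users` is an assumption based on the /users endpoint):

```shell
# Verify the rows made it to RDS
psql -h <your-rds-endpoint>.rds.amazonaws.com -p 5432 -U postgres -d postgres \
  -c 'SELECT * FROM users;'
```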


There it is! We have migrated our local database onto an RDS instance!


If you want to know more and stay updated on TinyStacks news, check out our website!


© 2022 TinyStacks, Inc. All rights reserved.