RootDBA

DBA expertise: PostgreSQL, SQL Server & MySQL
pgloader for migrating PostgreSQL databases

Posted on September 1, 2025 by AHMED

I had a scenario where the data team needed all tables from different databases to be combined into a single database for reporting and analysis using a tool like Metabase. Initially, I faced challenges trying to restore tables into a single database manually, which led to several issues. However, I discovered pgloader, a tool typically used for migrating databases between different DBMS types. Today, I utilized pgloader to migrate all tables from various databases into a single consolidated database, and it worked perfectly, simplifying the process significantly.

How to install pgloader

Installation is quite easy; just run the commands below:

sudo apt update
sudo apt install pgloader

How to use pgloader

Using pgloader is quite straightforward for my scenario. I simply need to specify the source and destination databases. Below is the general syntax for pgloader:

pgloader <source-database-connection> <target-database-connection>

Example 

pgloader postgresql://<source-user>@<source-host>/<source-database> postgresql://<target-user>@<target-host>/<target-database>
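pgloader accepts standard PostgreSQL connection URIs, so a password and a non-default port can be embedded directly in each URI. The users, hosts, ports, and database names below are placeholders, not values from my setup:

```shell
# Hypothetical example: explicit user, password, and port in both URIs
pgloader postgresql://app_user:s3cret@src-host.example.com:5432/sales_db \
         postgresql://app_user:s3cret@dst-host.example.com:5433/reporting_db
```

Keep in mind that a password in the URI may end up in your shell history and in the process list, so prefer this form only for quick tests.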

If you’re using default local connections and running as the postgres user, you can simplify it:

pgloader postgresql:///source_database postgresql:///target_database

I have downloaded a sample database called chinook, and I want to migrate all its tables to a database called unified. Since both databases reside on the same server, I will switch to the postgres user and use pgloader to perform the migration. This approach simplifies the process as it allows me to leverage pgloader without needing to specify additional connection details.

pgloader postgresql:///chinook postgresql:///unified

From the output you can see there are no errors and all tables have been migrated to the unified database.

Let's check whether the tables were actually moved to the unified database.
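One way to check (a sketch, assuming local peer authentication while logged in as the postgres user) is to list the tables in the target database and compare table counts between source and target:

```shell
# List the tables now present in the unified database
psql -d unified -c '\dt'

# Compare the number of public-schema tables in source and target
psql -d chinook -t -c "SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public';"
psql -d unified -t -c "SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public';"
```

If the counts match (and a few spot-checked row counts agree), the migration is complete.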

We can confirm that all tables have been successfully moved. The same steps can be applied to the rest of the databases. Additionally, we can create a script to automate this process daily, ensuring that the tables are regularly updated. This approach allows the data team to seamlessly view all tables from the production server in a single consolidated database, making it ideal for use with Metabase for reporting and analysis.

By combining this process with streaming replication, you can create a robust and efficient solution for reporting, ensuring real-time or near-real-time data availability. However, if you prefer not to use replication, you can still use pgloader to migrate tables to a remote PostgreSQL server. This provides flexibility in consolidating data for reporting and analysis while keeping the implementation straightforward and adaptable to your specific requirements.

Migrating data to a remote PostgreSQL server

We have a destination PostgreSQL server at 10.10.10.40, and we want to migrate the chinook database into the unified database there.
We simply use the command below:

pgloader postgresql:///chinook postgresql://postgres@10.10.10.40/unified
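A remote connection will usually require authentication. As an assumption not covered in the original steps, the password can be supplied inline in the target URI (the password below is a placeholder):

```shell
# Hypothetical example: supply the remote password in the connection URI
pgloader postgresql:///chinook postgresql://postgres:your_password_here@10.10.10.40/unified
```

The remote server's pg_hba.conf must also allow connections from the source host for the postgres user.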

You can automate it by creating a script and then adding it to crontab to run automatically.

#!/bin/bash

# Define the list of source databases
SOURCE_DATABASES=("chinook1" "chinook2" "chinook3")

# Define the target database connection
TARGET_DB="postgresql://postgres@10.10.10.40/unified"

# Log file for the migration process
LOG_FILE="/var/log/pgloader_migration.log"

# Ensure the log file directory exists
mkdir -p "$(dirname "$LOG_FILE")"

# Start logging
echo "Starting migration at $(date)" >> "$LOG_FILE"

# Loop through each source database and migrate
for SOURCE_DB in "${SOURCE_DATABASES[@]}"; do
    echo "Migrating $SOURCE_DB to $TARGET_DB..." >> "$LOG_FILE"
    pgloader postgresql:///"$SOURCE_DB" "$TARGET_DB" >> "$LOG_FILE" 2>&1
    if [ $? -eq 0 ]; then
        echo "Successfully migrated $SOURCE_DB to $TARGET_DB" >> "$LOG_FILE"
    else
        echo "Failed to migrate $SOURCE_DB to $TARGET_DB" >> "$LOG_FILE"
    fi
done

# Finish logging
echo "Migration process completed at $(date)" >> "$LOG_FILE"

Test the script and make sure you add execute permission using chmod,
then add it to the schedule using crontab -e.
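As a sketch, assuming the script was saved as /usr/local/bin/pgloader_migration.sh (the path is an example, not from the original post), scheduling it to run daily at 2 AM would look like:

```shell
# Make the script executable
chmod +x /usr/local/bin/pgloader_migration.sh

# Then run: crontab -e
# and add this line (fields: minute hour day-of-month month day-of-week command):
# 0 2 * * * /usr/local/bin/pgloader_migration.sh
```

Check /var/log/pgloader_migration.log after the first scheduled run to confirm each database migrated successfully.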

Category: PostgreSql, Postgresql Dba Guide
