How to migrate data from on-premise Postgres to Google Cloud

There are several ways to move data from an on-premise PostgreSQL database to Google Cloud. Here are three common approaches:

  1. Use a Cloud Data Integration Tool: Google Cloud offers several tools that can help you move data from an on-premise PostgreSQL database to the cloud. For example, Cloud Data Fusion is a fully managed, cloud-native data integration platform for building, executing, and monitoring data pipelines between a variety of sources and destinations. You can use Cloud Data Fusion to extract data from your on-premise PostgreSQL database, transform it as needed, and load it into a cloud-based data store such as BigQuery or Cloud SQL.
  2. Use a Command-Line Tool: Another option is to use a command-line tool such as pg_dump or pg_dumpall to extract the data from your on-premise PostgreSQL database and save it to a file. You can then use gsutil to upload the file to Google Cloud Storage. Once the data is in Cloud Storage, you can use bq load to load it into BigQuery, or use the Cloud SQL import feature to import it into Cloud SQL (a minimal Python sketch of the BigQuery variant appears below).
  3. Use the Cloud SQL API: If you want to automate the data transfer, you can use the Cloud SQL Admin API to script it. You will need to set up a Cloud SQL instance, create a service account with the appropriate permissions, and install the API client library for your programming language of choice. You can then extract the data from your on-premise database, stage it in Cloud Storage, and use the API to import it into Cloud SQL.

Each of these approaches has its own strengths and weaknesses, and the best approach for a given situation will depend on the specific requirements of your data migration.
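
To make the command-line approach (option 2) concrete, here is a minimal Python sketch of the same flow: dump a table to a CSV file, upload it to Cloud Storage, and load it into BigQuery. The table, bucket, and dataset names are placeholders, the google-cloud-storage and google-cloud-bigquery client libraries stand in for gsutil and bq load, and Application Default Credentials are assumed to be configured:

import psycopg2
from google.cloud import bigquery, storage

# Dump the table from the on-premise database to a local CSV file (with a header row)
conn = psycopg2.connect(
    host="localhost", port=5432, user="user", password="password", dbname="database"
)
with conn, conn.cursor() as cur, open("my_table.csv", "w", newline="") as f:
    cur.copy_expert("COPY my_table TO STDOUT WITH CSV HEADER", f)
conn.close()

# Upload the CSV to Cloud Storage (the equivalent of `gsutil cp`)
storage.Client().bucket("my-bucket").blob("my_table.csv").upload_from_filename("my_table.csv")

# Load the CSV from Cloud Storage into BigQuery (the equivalent of `bq load`)
bq = bigquery.Client()
job = bq.load_table_from_uri(
    "gs://my-bucket/my_table.csv",
    "my_dataset.my_table",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # skip the header row
        autodetect=True,      # infer the schema from the CSV
    ),
)
job.result()  # wait for the load job to finish

If the destination is Cloud SQL rather than BigQuery, the same staged CSV can instead be imported with the Cloud SQL Admin API, as shown in the longer example below.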


Here is an example of how you can combine psycopg2, the Cloud Storage client library, and the Cloud SQL Admin API to export data from an on-premise PostgreSQL database and import it into Cloud SQL. The project, instance, bucket, and table names below are placeholders:



import csv
import time

from google.cloud import storage
from google.oauth2 import service_account
import googleapiclient.discovery
import psycopg2

# Set the project, instance, database, and bucket names (replace with your own values)
project_id = "my-project"
instance_id = "my-instance"
database_id = "my-database"
bucket_name = "my-bucket"

# Connect to the on-premise PostgreSQL database
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    user="user",
    password="password",
    dbname="database",
)

# Create a cursor
cur = conn.cursor()

# Execute a query to retrieve the data you want to export
cur.execute("SELECT * FROM my_table")

# Fetch the data
data = cur.fetchall()

# Close the cursor and connection
cur.close()
conn.close()

# Write the exported rows to a local CSV file
with open("my_table.csv", "w", newline="") as f:
    csv.writer(f).writerows(data)

# Build credentials from a service account key (fill in the values from your key file)
service_account_info = {
  "type": "service_account",
  "project_id": project_id,
  "private_key_id": "PRIVATE_KEY_ID",
  "private_key": "PRIVATE_KEY",
  "client_email": "CLIENT_EMAIL",
  "client_id": "CLIENT_ID",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "CLIENT_X509_CERT_URL"
}
credentials = service_account.Credentials.from_service_account_info(
    service_account_info, scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

# Upload the CSV file to Cloud Storage so Cloud SQL can import it
storage_client = storage.Client(project=project_id, credentials=credentials)
storage_client.bucket(bucket_name).blob("my_table.csv").upload_from_filename("my_table.csv")

# Create a Cloud SQL Admin API client
client = googleapiclient.discovery.build(
    "sqladmin", "v1beta4", credentials=credentials
)

# Construct the import request: load the CSV from Cloud Storage into the Cloud SQL
# database (the target table must already exist there)
request_body = {
    "importContext": {
        "fileType": "CSV",
        "uri": f"gs://{bucket_name}/my_table.csv",
        "database": database_id,
        "csvImportOptions": {
            "table": "my_table"
        }
    }
}

# Execute the import request
operation = client.instances().import_(
    project=project_id, instance=instance_id, body=request_body
).execute()
print(f"Importing data from {request_body['importContext']['uri']}")

# Poll until the import operation completes
while True:
    result = client.operations().get(project=project_id, operation=operation["name"]).execute()
    if result["status"] == "DONE":
        if "error" in result:
            raise Exception(result["error"])
        print("Import complete")
        break
    time.sleep(5)
