Physical backup of a MySQL database to Google Cloud

Hello everyone, today we want to share a way to make a physical backup of a MySQL database to Google Cloud. A physical backup is faster than a logical backup, and storing the copy off-site lets us safeguard our data and avoid problems in the event of a disaster at our own facility.

To make the copy we use Percona's free XtraBackup suite, which lets us take a "hot" copy with hardly any impact on our Linux system. You can find more information at the following link:

https://www.percona.com/software/mysql-database/percona-xtrabackup
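As a minimal installation sketch, assuming a Debian/Ubuntu system and Percona's percona-release repository tool (package names may differ for other distributions or XtraBackup versions):

 # Download and install Percona's repository package (Debian/Ubuntu)
 wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
 sudo dpkg -i percona-release_latest.generic_all.deb

 # Enable the tools repository and install XtraBackup
 sudo percona-release enable-only tools release
 sudo apt-get update
 sudo apt-get install percona-xtrabackup-80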

Once it is installed, we use two commands to make the copy: xtrabackup and xbcloud. The first performs the backup and the second uploads the output to our Google Cloud bucket. The full command is:

 xtrabackup --user=root --password='********' --backup --stream=xbstream \
   --compress --compress-threads=5 --extra-lsndir=/testbackup \
   --target-dir=/testbackup --parallel=5 | \
 xbcloud put --storage=google --google-endpoint='storage.googleapis.com' \
   --google-access-key=******* --google-secret-key=****** \
   --google-bucket=ourbucket --parallel=5 $datetoday/$HOSTNAME-full-backup

Physical backup of a database with XtraBackup

We launch the xtrabackup command with the --stream=xbstream parameter, which lets us pipe the output into xbcloud and upload it directly. The /testbackup directory is used only as a temporary staging area, so it does not require much space. We also compress the copy with 5 threads to speed it up, but this can be adapted to the hardware of our machine, especially the number of CPUs we want to use.

The next part of the command is xbcloud put, which is responsible for uploading our compressed database files to the Google cloud. This is where we put the configuration of our account:

 --storage=google --google-endpoint='storage.googleapis.com' --google-access-key=******* --google-secret-key=****** --google-bucket=ourbucket

In the final part, we set the upload threads again with --parallel=5 and the path where the files will be stored: $datetoday/$HOSTNAME-full-backup. The variable $datetoday simply contains datetoday=$(date +"%Y-%m-%d"), the date in a format such as 2021-06-04, and $HOSTNAME is taken from the operating system and used to create separate folders if we run this process on several servers.
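Putting it all together, a minimal wrapper script might look like the following sketch (the credentials and bucket name are placeholders):

 #!/bin/bash
 # Sketch of a daily backup script; credentials and bucket are placeholders.
 datetoday=$(date +"%Y-%m-%d")

 xtrabackup --user=root --password='********' --backup --stream=xbstream \
   --compress --compress-threads=5 --extra-lsndir=/testbackup \
   --target-dir=/testbackup --parallel=5 | \
 xbcloud put --storage=google --google-endpoint='storage.googleapis.com' \
   --google-access-key=******* --google-secret-key=****** \
   --google-bucket=ourbucket --parallel=5 "$datetoday/$HOSTNAME-full-backup"

Scheduled daily from cron, this produces one dated folder per server in the bucket.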

Once we launch it, we can watch the files being uploaded to our folder in the cloud:

 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/gtid_executed.ibd.qp.00000000000000000000, size: 1792
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/gtid_executed.ibd.qp.00000000000000000001, size: 40
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/engine_cost.ibd.qp.00000000000000000000, size: 1870
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/engine_cost.ibd.qp.00000000000000000001, size: 38
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/plugin.ibd.qp.00000000000000000000, size: 1778
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/plugin.ibd.qp.00000000000000000001, size: 33
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/time_zone_transition.ibd.qp.00000000000000000001, size: 47
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/time_zone_transition.ibd.qp.00000000000000000000, size: 1808
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/slave_worker_info.ibd.qp.00000000000000000000, size: 1800
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/slave_master_info.ibd.qp.00000000000000000000, size: 1800
 210604 10:29:48 xbcloud: successfully uploaded chunk: 2021-06-04/serverprod2-full-backup/mysql/slave_worker_info.ibd.qp.00000000000000000001, size: 44

The files are uploaded in 10 MB chunks, so we will not have problems when restoring.
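To verify the upload, assuming the Google Cloud SDK is installed, we can list the objects in the bucket with gsutil (the bucket and folder names below are just the ones from this example):

 # List the uploaded chunks for today's backup (adjust bucket/path as needed)
 gsutil ls gs://ourbucket/2021-06-04/serverprod2-full-backup/mysql/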

Things to consider

Important: we may run into problems with the limit on open files, since XtraBackup needs to keep many files open to perform a consistent backup. The symptom is that the backup suddenly stops partway through, with an error saying it could not upload all the files. In that case, the best fix is to raise the open-files limit for the user who performs the backup, in our case root. To do this, we modify the file /etc/security/limits.conf with the following parameters:

 root soft nproc 65535
 root hard nproc 65535
 root soft nofile 65535
 root hard nofile 65535 
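After logging in again as that user, we can check that the new limits are in effect:

 # Show the current soft limit on open files for this session
 ulimit -n
 # Show the hard limit
 ulimit -Hn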

For this example we have used Google Cloud, but the xbcloud binary also lets us upload the files to Amazon S3 and Swift, which gives us the flexibility to distribute our copies across different cloud systems.
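As a sketch, the S3 variant only changes the xbcloud half of the pipeline; the endpoint, keys and bucket below are placeholders:

 xtrabackup --user=root --password='********' --backup --stream=xbstream \
   --compress --compress-threads=5 --extra-lsndir=/testbackup \
   --target-dir=/testbackup --parallel=5 | \
 xbcloud put --storage=s3 --s3-endpoint='s3.amazonaws.com' \
   --s3-access-key=******* --s3-secret-key=****** \
   --s3-bucket=ourbucket --parallel=5 "$datetoday/$HOSTNAME-full-backup"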

We hope you found this post interesting. In a future entry we will see how to restore the data we have uploaded, either on the same server or on a new one, to recover whatever data we need.

If after reading this article you have not been able to get the backup working, or you would like us to evaluate your case, feel free to contact us through our contact page, with no obligation.

If you liked this article, subscribe to our monthly newsletter to keep up to date with all our publications with a single email per month.

Don't know Query Performance yet? Find out how it can help you in your Oracle environment. More information on its LinkedIn page.

