
Backup using scripts. MySQL Backup Script Example

Foreword

The idea of creating a script that can back up a site in a few clicks (all files plus a database dump) came to me while I was working on several projects and doing development and testing on a local web server. Back then I used Denwer, which I later replaced with a more actively developed local bundle of web services, OpenServer.

The projects had live traffic, so I would first make a local copy of the entire site and then work and experiment on that copy, avoiding interruptions to the live project. I made a backup copy like this: I logged in via SSH, packed the project folder with 7Zip or tar+gzip, made a dump with mysqldump, and downloaded all of it over the web or via SCP/WinSCP.

Laziness soon made itself felt, and the idea of automation arose: "it would be cool if everything I type into the console were done automatically for each site, with only a few clicks required from me." My first search for a way to automate SSH under Windows immediately led me to a utility I had long been using to work with files on servers - WinSCP.

How the backup script works

So, the essence of the automated backup script is as follows:

  1. Read data from an ini file that stores the settings for the site (name, login, password, MySQL account, and so on);
  2. Based on that data, generate a script for WinSCP that will perform all the necessary operations in the console on the server;
  3. Launch WinSCP, which logs in to the server, makes a backup copy of the site and its database, and archives all the data with a high-compression archiver - 7Zip or TAR+GZip;
  4. Download the archive;
  5. Close WinSCP, clean up the logs, and delete the temporarily generated WinSCP command script;
  6. Done! Need a copy of another site? - GOTO 1!
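As a sketch, the command file generated for WinSCP at step 2 might look something like the following; the host, credentials, and paths here are illustrative assumptions, not values taken from the article:

```
option batch abort
option confirm off
open scp://root:PASSWORD@000.111.222.333
call tar -czf /tmp/somesite_com.tgz /var/www/somesite.com/www/* /var/www/somesite.com/www/.htaccess
call mysqldump -usite1_user -pPASSWORD site1_db | gzip > /tmp/site1_db.sql.gz
get /tmp/somesite_com.tgz D:\Backup\
get /tmp/site1_db.sql.gz D:\Backup\
exit
```

WinSCP's call command runs a command in the remote shell over the open session, which is what lets one session both create the archives on the server and download them.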

There may be not just one site but several, so the script supports many ini files, with separate settings for each site.

All operations for driving and managing WinSCP are performed by a script written in the Windows batch (bat) file scripting language. It also implements a simple console interface for performing all the necessary operations.

Given its simplicity, the bat-file scripting language is quite enough for this task, but nothing stops you from implementing the same system in AutoIt or in a high-level programming language such as C#, Delphi, and so on.

Attention! The script is only relevant if you have access to the server console via SSH.

Backup interface and menu

After running the script, a simple menu is available to us:

The first item (press 1, then Enter) starts the backup process and displays a list of the available site configuration template files.

The second item generates a configuration file template and opens it for editing.

The third item is a brief reference on writing template files.

In the figure below, a menu is open with a choice of the sites available for backup:

Here the sub-item "0. All sites listed below" starts a backup of all the sites whose configuration files follow it. So you can make a backup copy of one site or of all of them at once.

An example of a configuration file for a script

Here is an example of a somesite.com.ini configuration file for backing up somesite.com:

sitename_string=somesite_com
store_path=D:\Backup\
archive_method=7z
archive_password=[email protected]
hostname=000.111.222.333
ssh_user=root
ssh_password=[email protected]
mysql_user=site1_user
mysql_password=PZBkOyjMmxWgQHhd185g
mysql_db_name=site1_db
dirs_to_backup=/var/www/somesite.com/www/* /var/www/somesite.com/www/.htaccess

As you can see, everything is simple, with nothing extra. Here is each setting in detail:

sitename_string - a clear name for the site (Latin letters and the _ symbol only).

store_path - the path on the local computer where the backup archive will be downloaded.

archive_method - the method for archiving the site files.

  • 7z - the archiver of the same name will be used; the output is an archive with the "zip" extension.
  • gz - archiving with TAR+GZip; the output is an archive with the "tgz" extension.

archive_password - the archive password; only works when archive_method=7z.

hostname - the external IP address or domain of the server hosting the site and database.

ssh_user - the username for connecting via SSH. In most cases you need root to perform all the operations, although you can try setting it up for another, non-privileged user.

ssh_password !! - the password for the above user.

mysql_user * - the username for connecting to the site's database on the MySQL server.

mysql_password - the password for the above MySQL user. It must be set!

mysql_db_name - the name of the site database for which a full dump will be made.

dirs_to_backup ** - a space-separated list of full paths to the directories and files to be archived.

Notes:

!! If you leave the value blank, the password will be requested later in interactive mode (I recommend not storing the password).

* It is assumed that the scripts and the MySQL database are hosted on the same server, that is, the MySQL server address is 127.0.0.1 (localhost).

** By default, archivers do not pack .htaccess files, so you need to specify the full path to them explicitly!

** Be sure to specify full paths to folders and files, not relative ones.
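The note about .htaccess can be checked on any Linux machine: a shell glob such as www/* does not match dotfiles, so hidden files have to be listed explicitly. The sketch below uses invented paths purely for demonstration:

```shell
#!/bin/bash
# Demonstration: a glob like www/* skips dotfiles, so .htaccess must be
# named explicitly. All paths here are made up for the example.
workdir=$(mktemp -d)
mkdir "$workdir/www"
touch "$workdir/www/index.php" "$workdir/www/.htaccess"
cd "$workdir"
tar -czf without.tgz www/*                # glob misses .htaccess
tar -czf with.tgz www/* www/.htaccess     # listed explicitly -> included
echo "--- without ---"; tar -tzf without.tgz
echo "--- with ---";    tar -tzf with.tgz
```
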

You can create as many configuration files as you like. When you run the script, each file's name is displayed with an assigned number; to start, you enter the number of the configuration and that's it (if you didn't store a password, you will also be asked to enter it).

Preparing for work and using the script

Everything is very simple!

The first step is to decide where to store the script with all its configurations. I recommend not storing such programs and data on the local machine in the open, because it is not safe, especially if the configurations contain passwords for root access to servers. The whole kit can be stored on an encrypted medium, or somewhere that viruses and unauthorized people cannot reach. In terms of security the choice is yours; let's move on.

Let the whole complex be located along the path:

  • D:\Backup\ - archives with backups will be loaded here.
  • D:\Backup\script\ - our script and WinSCP program will be located here.

1) Download the latest portable build of WinSCP from the PortableApps website - WinSCP Portable. Install the program into the D:\Backup\script\ folder - a WinSCPPortable folder will be created there, with the WinSCPPortable.exe file inside.

2) Download the archive with the script and templates here - backup_sites_v1.2 and unpack it into the folder D:\Backup\script\ .

3) Open the backup_sites_v1.2.bat file for editing and edit the path in the "WinSCP configuration" section:

  • "D:\Backup\script\WinSCPPortable\WinSCPPortable.exe" - path to WinSCP Portable;
  • "D:\Backup\script\WinSCPPortable\Data\settings\winscp.log" - the path where the WinSCP Portable log file is created.

Make sure the paths specified in the script match the actual location of the WinSCP program, otherwise it simply will not start.

4) Create settings template files for the sites to back up. There are several of them in the archive - rename them after the domain names of your sites, and delete the extra ones.

You can create a new site template for the script in the following ways:

  1. Simply copy an INI file, rename it to the desired name, and then edit the settings in it;
  2. Generate a template with the script. Run the script, press 2, and enter the domain or site name (Latin letters and the _ symbol only). Notepad will open with the newly created file; make your changes, save, and close it.

Which option is more convenient - choose for yourself.

5) Everything is ready, you can run the script and try to backup one of your sites.

After you confirm the site selection, the SSH user password may be requested if it was not specified in the configuration file. A window with the WinSCP log will then open, showing the progress of the actions on the server as well as the download progress of the finished archive.

When WinSCP finishes downloading the archive, the window will close. To complete the script, just press any key in the window.

About data security

In the course of its work, the script generates a temporary ".tmp" file with a set of commands for WinSCP. This file contains the username and password for accessing the server via SSH.

INI templates with site settings also contain important information - these are database access parameters.

I strongly do NOT recommend storing this script and its configuration files on disks that can be accessed directly after booting and a few simple manipulations with the computer. For storage, you can create an encrypted disk or buy a separate medium.

This script is just a tool that saves time on performing routine and similar actions. Take some time and worry about the safety and security of your data!

All responsibility for the use of this script rests with you, be extremely careful!

Conclusion

The script can be corrected and modified to make it more functional and perform the actions that you need on your sites.

Also, this WinSCP automation method can be used to build other scripts that will perform various tasks on your VPS or Dedicated server through the console.

There are a lot of backup methods, but for me personally they have drawbacks because of which I do not use them. It is easier to write a few simple scripts, for example using so-called bat files or PowerShell. In this article I will show how to set up backups using scripts, combining them into bundles.

Backup scheme

In most cases, a backup comes down to saving some file: virtual machine images, user files, an SQL database dump, a 1C:Enterprise infobase upload, and so on. It is better to store all these backup copies somewhere else - a network folder, an external drive, a tape drive, an FTP server, etc. For convenience, I use an FTP server.

Let's take a look at how this all happens:

  1. Create the backup files themselves (a VM image, an SQL dump, etc.)
  2. Copy or move them to the folder that will be sent to the archive
  3. Check the folder for new backups
  4. Send the files to the archive on the FTP server
  5. Delete old backup files

The first step produced the files that we need to deliver to our FTP server. Now we copy them into the folder that will be sent to FTP. A simple command will do:

copy "SOURCE_FOLDER_PATH\*" C:\Backup\

After executing this command, all our files are copied to C:\Backup\. This step makes sense if you collect your backups from different places. Now we need a script that will check the folder for new files and send them to the FTP server. Create an empty backup.ps1 file and put the following script into it:

$a = (Get-Host).UI.RawUI
$a.WindowTitle = "Sync Folder To Ftp"
$ftp = "ftp://FTP_SERVER_ADDRESS/"
$localDirectory = "C:\Backup"
$user = "USERNAME"
$pass = "PASSWORD"
$webclient = New-Object System.Net.WebClient
$webclient.Credentials = New-Object System.Net.NetworkCredential($user,$pass)
$Files = Get-ChildItem $localDirectory | Where {$_.LastWriteTime -gt (Get-Date).AddDays(-1)}
foreach ($File in $Files) {
    $LocalFile = $File.FullName
    Write-Host "Getting $File from $localDirectory" -Foreground "Red"
    $webclient.UploadFile($ftp + $File, $LocalFile)
    Write-Host "Putting $File to $ftp" -Foreground "Yellow"
}
Write-Host "Finished Sync to $ftp" -Foreground "Green"

Let's see how this script works. First, the variables are set: the server, the username, the password, and the source folder. Then the line $Files = Get-ChildItem $localDirectory | Where {$_.LastWriteTime -gt (Get-Date).AddDays(-1)} selects all files whose modification date is greater (-gt) than the current date minus one day. You can adjust this to your schedule; I make backups every day. Then, in a loop, we go through each matching file and send it to the FTP server.

To avoid wasting disk space, I delete backups older than 30 days. In principle they could be deleted right after being sent to FTP; I keep them only so that, if they are suddenly needed, I don't have to spend time downloading them from FTP. An extra copy of a backup never hurts - you never know what may happen to the FTP server - so if space permits, I recommend keeping them on the source server as well.

To clean up the folder, I use the removeOldBackups.ps1 script

$fullTargetPath = "C:\Backup"
Get-ChildItem $fullTargetPath -Recurse |
    Where {$_.LastWriteTime -lt (Get-Date).AddDays(-30)} |
    ForEach { Remove-Item $_.FullName -Force -Recurse }

An extremely simple script. I will explain only one line: Where {$_.LastWriteTime -lt (Get-Date).AddDays(-30)} compares the file modification date with the current date minus 30 days; files whose LastWriteTime is less than that fall into the selection. If necessary, adjust it to suit your needs.
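For comparison, the same 30-day rule on a Linux source server (the kind used later in this article) can be expressed with find. This is an illustrative sketch with invented file names, not part of the author's scripts:

```shell
#!/bin/bash
# Sketch: a Linux counterpart of the 30-day cleanup using find.
backup_dir=$(mktemp -d)
touch -d "40 days ago" "$backup_dir/old.bak"   # simulated stale backup
touch "$backup_dir/new.bak"                    # simulated fresh backup
# -mtime +30 matches files modified more than 30 days ago
find "$backup_dir" -type f -mtime +30 -exec rm -f {} \;
ls "$backup_dir"
```
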

Backing up important information is a task every system administrator faces. It would seem trivial and may not arouse much interest in many readers, but an article like this would have helped me a lot at some point, so I think it should exist.

The task: back up data to a local directory and to a separate server, using a minimum of third-party software, with logging and with notification of the administrator via Jabber in case of failure. All the basic functions of most automatic backup software, but without installing it, and therefore without its bugs (which, in fact, is what led to the idea).

And now to business.

First, let's create and open the script
nano backup-script
Now in the script add the line
#!/bin/bash
Let's declare some variables.
TN - TASKNAME - task name. Used to output to the log and determine the file name.
Since there are several tasks (monthly, weekly, daily) and I was too lazy to write a script for each case, I made a universal one in which you just uncomment the necessary lines. Task names must be written without spaces, preferably in Latin letters, if you don't want problems with encodings and incorrect command parameters.
TN=docs-monthly
#TN=docs-weekly
#TN=docs-daily
OF - Output File - the name of the output file. Obtained from the variable TN, that is, the name of the job.
OF=$TN.tar.gz
We declare a variable with the path to the log file, and then we will output all error messages and the rest to the log.
LOGFILE=/var/log/backup.log
Let's make an entry in the log about the beginning of the backup (date, time, task name)
echo >>$LOGFILE
echo "====================================================" >>$LOGFILE
echo "$(date +"%d-%b-%Y %R")" >>$LOGFILE
echo "Job \"$TN\" started..." >>$LOGFILE
There is a problem: if directory names with spaces are passed as command parameters (e.g. to tar), the script fails. The solution was found on the Internet: Linux uses the space as the standard separator of command parameters. Let's redefine the standard separator (stored in the $IFS variable) to something other than a space, for example \n, the line-break character.
Remember the old value of the standard separator
OLD_IFS=$IFS
We replace the standard separator with our own
IFS=$'\n'
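The effect of the redefinition is easy to see in isolation: with $IFS set to a newline, paths that contain spaces survive word splitting intact. The paths below are invented for the demonstration:

```shell
#!/bin/bash
# Why $IFS is redefined: split a newline-separated list of paths that
# contain spaces. With the default IFS this loop would run four times.
list=$'/mnt/dir one\n/mnt/dir two'   # two paths, each with a space inside
OLD_IFS=$IFS
IFS=$'\n'
for p in $list; do
    echo "path: $p"
done
IFS=$OLD_IFS
```
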
SRCD - SouRCe Directory - directory with data for backup
Now you can list several directories, the separator will be a line break, as we ourselves indicated in the line above
SRCD="/mnt/source/folder_1
/mnt/source/folder_2
/mnt/source/folder_N"
TGTD - TarGeT Directory - directory where backups will be stored
TGTD="/var/backups/"
Naturally, we understand that storing important backups only at the source is frivolous, to say the least. So we will also leave a copy on a remote resource, which we mount separately using mount and fstab. I'll explain right away why I use mount together with fstab rather than a single mount command: I mount this directory in my other scripts too, and, as a programmer I know put it, a good programmer does not write the same code twice (or something like that; I don't remember the exact words, but I hope the point is clear).
TGTD2="/mnt/archive/"
The archiving process itself in the option "Create a new archive"
tar -czf $TGTD$OF $SRCD &>>$LOGFILE
and in the option "Update files in the old archive"
tar -u -f $TGTD$OF $SRCD &>>$LOGFILE
In the second case it is better to use a specific file name instead of $OF because, for example, my weekly archive is updated daily, and the task names ($TN), and therefore $OF, do not match.

The "$?" variable stores the execution status of the last command. Let's save it to use later.
STATUS=$?
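Saving $? immediately matters because every subsequent command overwrites it. A tiny illustration (the failing command here is just an example):

```shell
#!/bin/bash
# "$?" holds the exit status of the most recent command only, which is
# why the script copies it into STATUS right after tar finishes.
false               # a command that fails (exit status 1)
STATUS=$?
echo "saved status: $STATUS"
true                # any later command overwrites $?
echo "current status: $?"
```
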
We return the standard separator to the original value
IFS=$OLD_IFS
Now let's add a condition: if packing the tar archive ended with an error, send a message to the admin and delete the failed backup file. Otherwise, continue: mount the network share and put a copy of the archive into it. After each operation we check the result, write logs, and either continue or notify the admin and abort the procedure.
if [[ $STATUS != 0 ]]; then
    rm $TGTD$OF &>>$LOGFILE
    echo "##########################################" >>$LOGFILE
    echo "### An error occurred! Backup failed. ###" >>$LOGFILE
    echo "##########################################" >>$LOGFILE
    echo "$(date +"%d-%b-%Y %R%nFile") $OF backup not created" | sendxmpp -t -f /usr/local/etc/XMPP_settings recipient_login@domain &>>$LOGFILE
else
    echo "Backup file saved as \"$TGTD$OF\"" >>$LOGFILE
    echo "Backup completed successfully at $(date +"%R %d-%b-%Y")!" >>$LOGFILE
    echo "Mounting file system $TGTD2 for the backup archive" >>$LOGFILE
    mount $TGTD2 &>>$LOGFILE
    if [[ $? != 0 ]]; then
        echo "############################################################" >>$LOGFILE
        echo "### An error occurred while mounting the backup resource ###" >>$LOGFILE
        echo "############################################################" >>$LOGFILE
        echo "$(date +"%d-%b-%Y %R%nFile") $OF backup not copied to the backup resource" | sendxmpp -t -f /usr/local/etc/XMPP_settings recipient_login@domain &>>$LOGFILE
        exit
    fi
    echo "Started copying the file to the backup resource" >>$LOGFILE
    cp -f $TGTD$OF $TGTD2$OF &>>$LOGFILE
    if [[ $? != 0 ]]; then
        echo "#####################################################" >>$LOGFILE
        echo "### An error occurred while copying to the backup ###" >>$LOGFILE
        echo "#####################################################" >>$LOGFILE
        echo "$(date +"%d-%b-%Y %R%nFile") $OF backup was not copied to the backup resource" | sendxmpp -t -f /usr/local/etc/XMPP_settings recipient_login@domain &>>$LOGFILE
    else
        echo "File copy completed successfully at $(date +"%R %d-%b-%Y")!" >>$LOGFILE
        echo "File copied as \"$TGTD2$OF\"" >>$LOGFILE
    fi
    echo "Unmounting file system $TGTD2" >>$LOGFILE
    umount $TGTD2 &>>$LOGFILE
    echo "All operations completed successfully!" >>$LOGFILE
fi
exit

In the process, we copy the archive from the local storage to the remote one. Naturally, we check that each operation is successfully completed, and write everything to the logs.
To notify the administrator I use an XMPP message, since the organization has a Jabber server, and I prefer getting a quick failure message to going into the mail, typing passwords, clicking links, and waiting for the browser to render everything. In any case, no one is stopping you from using sendmail instead of sendxmpp.
The /usr/local/etc/XMPP_settings file has the following content:

#sender_login@domain;jabber_server_ip:jabber_server_port sender_password
sender_login@domain;127.0.0.1:5222 password
In the fstab file, the line describing the connection of the Windows share:
//192.168.0.250/arhiv /mnt/archive cifs noauto,rw,iocharset=utf8,cp866,file_mode=0666,dir_mode=0777,noexec,_netdev,credentials=/root/.passwd_to_archive_directory 0 0
Now it remains only to add the task to cron. This can be done via the /etc/crontab file, but I, out of a GUI habit inherited from Windows, use web interfaces for such things. The command must be run as root, for example: sudo bash backup_script. By adding the command to root's crontab, you ensure it runs as root right away.

During the discussions, the problem of log growth came up. I took the simplest (in my opinion) path: store only the last N lines of the log, for example 300. Two lines are added to the script: save the last 300 lines of the log to a temporary file, then replace the log with it.
tail -n 300 $LOGFILE >/tmp/unique_fantastic_filename.tmp
mv -f /tmp/unique_fantastic_filename.tmp $LOGFILE
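The trimming idea is easy to verify with a throwaway log; file names below are placeholders:

```shell
#!/bin/bash
# Quick check of the log-trimming approach: after tail + mv, only the
# last 300 lines of the log remain.
LOGFILE=$(mktemp)
for i in $(seq 1 400); do echo "line $i"; done > "$LOGFILE"
tail -n 300 "$LOGFILE" > /tmp/unique_fantastic_filename.tmp
mv -f /tmp/unique_fantastic_filename.tmp "$LOGFILE"
wc -l < "$LOGFILE"
```
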
Here is the full text of the script:
#!/bin/bash
TN=docs-monthly
#TN=docs-weekly
#TN=docs-daily
OF=$TN.tar.gz
LOGFILE=/var/log/backup.log
echo >>$LOGFILE
echo "====================================================" >>$LOGFILE
echo "$(date +"%d-%b-%Y %R")" >>$LOGFILE
echo "Job \"$TN\" started..." >>$LOGFILE
OLD_IFS=$IFS
IFS=$'\n'
SRCD="/mnt/source/folder_1
/mnt/source/folder_2
/mnt/source/folder_N"
TGTD="/var/backups/"
TGTD2="/mnt/archive/"
tar -czf $TGTD$OF $SRCD &>>$LOGFILE
#tar -u -f $TGTD$OF $SRCD &>>$LOGFILE
STATUS=$?
IFS=$OLD_IFS
if [[ $STATUS != 0 ]]; then
    rm $TGTD$OF &>>$LOGFILE
    echo "##########################################" >>$LOGFILE
    echo "### An error occurred! Backup failed. ###" >>$LOGFILE
    echo "##########################################" >>$LOGFILE
    echo "$(date +"%d-%b-%Y %R%nFile") $OF backup not created" | sendxmpp -t -f /usr/local/etc/XMPP_settings recipient_login@domain &>>$LOGFILE
else
    echo "Backup file saved as \"$TGTD$OF\"" >>$LOGFILE
    echo "Backup completed successfully at $(date +"%R %d-%b-%Y")!" >>$LOGFILE
    echo "Mounting file system $TGTD2 for the backup archive" >>$LOGFILE
    mount $TGTD2 &>>$LOGFILE
    if [[ $? != 0 ]]; then
        echo "############################################################" >>$LOGFILE
        echo "### An error occurred while mounting the backup resource ###" >>$LOGFILE
        echo "############################################################" >>$LOGFILE
        echo "$(date +"%d-%b-%Y %R%nFile") $OF backup not copied to the backup resource" | sendxmpp -t -f /usr/local/etc/XMPP_settings recipient_login@domain &>>$LOGFILE
        exit
    fi
    echo "Started copying the file to the backup resource" >>$LOGFILE
    cp -f $TGTD$OF $TGTD2$OF &>>$LOGFILE
    if [[ $? != 0 ]]; then
        echo "#####################################################" >>$LOGFILE
        echo "### An error occurred while copying to the backup ###" >>$LOGFILE
        echo "#####################################################" >>$LOGFILE
        echo "$(date +"%d-%b-%Y %R%nFile") $OF backup was not copied to the backup resource" | sendxmpp -t -f /usr/local/etc/XMPP_settings recipient_login@domain &>>$LOGFILE
    else
        echo "File copy completed successfully at $(date +"%R %d-%b-%Y")!" >>$LOGFILE
        echo "File copied as \"$TGTD2$OF\"" >>$LOGFILE
    fi
    echo "Unmounting file system $TGTD2" >>$LOGFILE
    umount $TGTD2 &>>$LOGFILE
    echo "All operations completed successfully!" >>$LOGFILE
fi
tail -n 300 $LOGFILE >/tmp/unique_fantastic_filename.tmp
mv -f /tmp/unique_fantastic_filename.tmp $LOGFILE
exit

Thank you all for your attention!

So, a brief summary of the theoretical background we need:

You can back up to a network share or to a separate volume. Copying to a network share has a couple of significant disadvantages: first, if there are network problems during archiving, your backup will obviously not be created; second, a network folder can hold only one single backup copy (backing up to the same network folder destroys the previous backup).

Copying to a separate volume works as follows: a WindowsImageBackup\<computer name>\ folder is created on the target volume. In this folder, in turn, virtual disks are created (one for each volume being backed up) into which the backup is written. After copying completes, the state of the virtual disks storing the backups is saved using the shadow copy service. During the next archiving the same actions are performed, with the result that each particular archive is available by accessing a particular shadow copy. Moreover, from the point of view of the archiving program each such archive is a full archive, while in terms of space used it is incremental (a shadow copy stores information only about changed data blocks).

Information about completed archiving is stored in several places in the OS which, as it happens, may contain inconsistent data. Clearly, the actual number of backups stored locally cannot be greater than the number of shadow copies of the volume that was backed up. You can view the number of shadow copies on the command line with the diskshadow command (and its list shadows all subcommand). However, shadow copies do not contain enough information to build a backup list, so that information is taken from other places: the OS keeps a record of backup copies in the global archive catalog, as well as in the Windows Server Backup event log. Information from these sources is shown in the Windows Server Backup snap-in. As a result, a situation may arise where the snap-in shows us conflicting information that has nothing to do with reality.

Look at the screenshot. It was taken on a system with only two backups stored on the local disk (there were only two shadow copies) and no network backups. Yet the snap-in tells us in the "All archives" section that we allegedly have 6 archives, while the message window reports the creation of only 3. To force the OS to display consistent information that matches reality, we have to reinitialize all the components of the archiving system that store information about backups, or the backups themselves. To do this we need to clear the Windows backup event log, delete the global archive catalog (with the wbadmin delete catalog command), and delete all shadow copies (with diskshadow and its delete shadows all subcommand). By and large, the information in the backup event log is purely informational and does not affect restoring data from an archive, which cannot be said of the information in the global catalog: if the global catalog is damaged, we will not be able to restore information with the standard Windows archiving tools. However, a corrupted or deleted global archive catalog can be restored from the backup that is created, each time an archive is made, in the WindowsImageBackup\<computer name>\ folder. To restore a corrupted global archive catalog, first delete it (wbadmin delete catalog) and then restore it from a backup (wbadmin restore catalog).
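Collected in one place, the reset described above comes down to the following commands, run from an elevated command prompt (the backup event log is cleared separately, for example through Event Viewer):

```
wbadmin delete catalog
diskshadow
DISKSHADOW> delete shadows all
DISKSHADOW> exit
```

After this, the Windows Server Backup snap-in starts from a clean slate, and the next backup rebuilds the catalog.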

Well, now, in fact, I will publish the backup script:

Write-Verbose "Started..."
# Save the value of the $VerbosePreference environment variable
$tmpVerbpref=$VerbosePreference
$VerbosePreference="Continue"
# Path to the network folder where we will copy the archive
$NetworkBackupPath="\\SRV66\Backup$\SRV02\BMR"
# Name of the partition on which we will create the archive
$VolumeTarget="D:"
# Number of backup copies to be stored on local media
$BackupQuantity=3
# Number of backup copies to be stored on network media
$NetBackupQuantity=5
# Path to the CSV file that lists the backups
$csvFile="D:\Backup\ProfileBackup.csv"
# Path to the folder in which we will create the 7zip archive
$Path2Arc="D:\Backup"
# Load the Windows Server Backup snap-in
Add-PSSnapin Windows.Serverbackup -ErrorAction SilentlyContinue
# Create the backup task
$policy = New-WBPolicy
<#
# Create and add file specifications for the files to back up
$source = New-WBFileSpec -FileSpec "C:\Users"
Add-WBFileSpec -Policy $policy -FileSpec $source
#>
# Get the list of critical volumes
$VolSources = Get-WBVolume -CriticalVolumes
# Add the volumes to be backed up
Add-WBVolume -Policy $policy -Volume $VolSources
# Define VSS backup options
Set-WBVssBackupOptions -Policy $policy -VssCopyBackup
# Enable SystemState backup
Add-WBSystemState -Policy $policy
# Enable Bare Metal Recovery
Add-WBBareMetalRecovery -Policy $policy
# Specify the local volume to which the archive will be written
$target = New-WBBackupTarget -VolumePath $VolumeTarget
Add-WBBackupTarget -Policy $policy -Target $target
Write-Verbose "Begin the process of creating a backup"
# Execute the backup
Start-WBBackup -Policy $policy
# Check the return code with the result of the backup
if ((Get-WBSummary).LastBackupResultHR -eq 0) {
    # Rename the archive folder to a friendlier name
    $newname = "_Backup_$(Get-Date -f yyyyMMddHHmm)"
    Write-Verbose "Rename the folder with the newly created archive to $newname ..."
    Ren $VolumeTarget\WindowsImageBackup -NewName $newname
    # Compress the archive using 7zip
    $arc="C:\Program Files\7-Zip\7z.exe"
    $arc_params="a -t7z -m0=LZMA2 -mmt -mx9"
    $arc_source="$VolumeTarget\$newname"
    $arc_dest="$Path2Arc\$newname.7z"
    Write-Verbose "Compress $newname folder with 7zip to $newname.7z"
    Start-Process $arc -ArgumentList "$arc_params $arc_dest $arc_source" -Wait
    # Copy the archive to the network folder
    Write-Verbose "Copying the $arc_dest file to a network folder..."
    copy "$arc_dest" $NetworkBackupPath
    if ($?) {
        # Delete the local archive and the folder that was packed into it
        del "$arc_dest" -Force -Verbose
        del $VolumeTarget\$newname -Recurse -Force #-Verbose
    }
    # Delete old archives from the network folder, except for the latest $NetBackupQuantity archives
    $NetBackups = dir $NetworkBackupPath | ?{$_.Name -match "_.+(\d)+\.7z$"}
    $NetBackupsCount = $NetBackups.count
    if (($NetBackupsCount - $NetBackupQuantity) -gt 0) {
        $NetBackups | sort lastwritetime | select -First ($NetBackupsCount - $NetBackupQuantity) | del -Force -Verbose #-Recurse -WhatIf
    }
    # Read our own backup catalog
    $csv=@()
    if (Test-Path $csvFile) { $csv = @(Import-Csv $csvFile) }
    # Read the data of the last backup
    $current = Get-WBBackupSet | select -Last 1 | select VersionID, SnapshotId
    # ...and add it to the array of known backups
    $csv += $current
    # To avoid confusion, sort the objects again and write them back to the CSV file
    $csv | sort @{Expression={$_.VersionId}} | select -Last $BackupQuantity | Export-Csv $csvFile -NoTypeInformation
    # Count how many records there are
    $count = $csv.count
    # If there are more records than $BackupQuantity, count how many extra archives to delete;
    # if there are fewer, nothing needs to be deleted
    if ($count -gt $BackupQuantity) {
        $old = $count - $BackupQuantity
        # Generate a random name for the script that will be fed to diskshadow
        $file = [System.IO.Path]::GetRandomFileName()
        # Select all unnecessary archives and pipe them through for removal
        $csv | sort @{Expression={$_.VersionId}} | select -First $old | %{
            # Write the command to a temporary file
            "delete shadows ID {$($_.SnapshotID)}" | Out-File -FilePath $Env:TEMP\$file -Encoding OEM
            # ...and start diskshadow in script mode
            diskshadow /s $Env:TEMP\$file | Out-Default
        }
        del $Env:TEMP\$file
    }
} else {
    # Report that the backup did not complete successfully
    Write-Verbose "Backup failed"
}
Write-Verbose "Script finished working"
# Restore the value of the $VerbosePreference environment variable
$VerbosePreference=$tmpVerbpref

Updated: 07/21/2017 Published: 15.08.2016

This script is written in Unix Shell under CentOS operating system. It will run on most Linux and BSD family systems.

Script example

The script creates a separate dump for each database. This is necessary for fast data recovery.

  1. #!/bin/bash
  2. PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
  3. destination="/backup/mysql"
  4. userDB="backup"
  5. passwordDB="backup"
  6. fdate=`date +%Y-%m-%d`
  7. find $destination -type d \( -name "*-1[^5]" -o -name "*-?" \) -ctime +30 -exec rm -R {} \; 2>&1
  8. find $destination -type d -name "*-*" -ctime +180 -exec rm -R {} \; 2>&1
  9. mkdir $destination/$fdate 2>&1
  10. for dbname in `echo show databases | mysql -u$userDB -p$passwordDB | grep -v Database`; do
  11. case $dbname in
  12. information_schema)
  13. continue ;;
  14. mysql)
  15. continue ;;
  16. performance_schema)
  17. continue ;;
  18. test)
  19. continue ;;
  20. *) mysqldump --databases --skip-comments -u$userDB -p$passwordDB $dbname | gzip > $destination/$fdate/$dbname.sql.gz ;;
  21. esac
  22. done;

Description of the script

1 Specify the path to the interpreter.

2 Set the system PATH so that we do not have to write full paths to executable files in the script.

3 - 6 Set variables.

3 The directory where we will save the backups.

4 The account for connecting to the database.

5 The password for connecting to the database.

6 The date when the script is run.

7 Find all backups older than 30 days and delete them, keeping the archives made on the 15th of the month.

8 Delete all backups older than 180 days.

9 Create the directory in which we will save the backups; its name is the script run date in YYYY-MM-DD format.

10 - 22 Connect to the server, pull out the list of all databases, and make a backup copy of each.

12 - 19 Skip the service databases information_schema, mysql, performance_schema and test.

20 Dump each remaining database with mysqldump, compressing the output with gzip.
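The loop's filtering can be tried without a MySQL server by simulating the output of "show databases" (a Database header line followed by names) and applying the same case statement; the database names below are sample data:

```shell
#!/bin/bash
# Simulated "show databases" output: header line plus database names.
sample="Database
information_schema
mysql
performance_schema
site1_db
test"
# grep -v strips the header; the case statement skips service databases.
for dbname in $(echo "$sample" | grep -v Database); do
    case $dbname in
        information_schema|mysql|performance_schema|test) continue ;;
        *) echo "would dump: $dbname" ;;
    esac
done
```
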

System preparation

We connect to the database and create an account with the right to create backups:

> GRANT SELECT, SHOW VIEW, RELOAD, REPLICATION CLIENT, EVENT, TRIGGER, LOCK TABLES ON *.* TO backup@localhost IDENTIFIED BY "backup";

* in this example, we create an account backup with password backup.

Create a directory where backups will be stored:

mkdir -p /backup/mysql

Saving data on a remote computer

Backups must be stored on a remote computer or an external drive so that they are available if the server fails. In this example, a shared folder on a remote server is used to hold the backup files.

To simplify the process of mounting the network folder, open the /etc/fstab file for editing:

vi /etc/fstab

and add the following line to it:

//192.168.0.1/backup /mnt cifs user,rw,noauto,credentials=/root/.smbclient 0 0

* in this example, the shared folder backup on the server with IP address 192.168.0.1 is mounted to the /mnt directory. The network file system used is cifs (the SMB protocol: a Samba server or a Windows share). Connection parameters: user - allows any user to mount the share; rw - read and write permission; noauto - do not mount automatically at system startup; credentials - a file containing the username and password for connecting to the shared folder.

Now let's create a file with a username and password:

#vi /root/.smbclient

and bring it to the following form:

username=backup
password=backup

* username - the user name; password - the password. Of course, in your case your own data is specified.

Now enter the following command.
