Duplicity is a Python command-line tool for encrypted, bandwidth-efficient backups.
In its creator's words: “Duplicity incrementally backs up files and directories by encrypting tar-format volumes with GnuPG and uploading them to a remote (or local) file server. Currently local, ftp, sftp/scp, rsync, WebDAV, WebDAVs, Google Docs, HSI and Amazon S3 backends are available. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Currently duplicity supports deleted files, full Unix permissions, directories, symbolic links, fifos, etc., but not hard links.”
My brief description: a free and open source tool for doing full and incremental backups and restores from Linux to local storage or almost any remote target, compressed and encrypted. A charm for any sysadmin.
To explain how Duplicity handles backup and restore, I’m going to back up a folder called “sample-data” to a folder called “test” inside an Amazon S3 bucket called “alfresco-backup” (use your own bucket name). I created the bucket and folder before running any command, but Duplicity can also create them the first time you run it. If you want to let Duplicity create your Amazon S3 bucket and you are located in Europe, please read the Duplicity man page first.
Note: don’t get confused by my bucket name “alfresco-backup”; use your own bucket name. I will use this bucket name in future articles too 😉
How to install Duplicity in Ubuntu:
[bash]
# sudo apt-get install duplicity
[/bash]
Create a GPG key and remember the passphrase, because Duplicity will ask for it; the default values work fine. Your backup will be encrypted with the passphrase. All files created by the command below will be stored in ~/.gnupg in your home directory, but you won’t need them directly:
[bash]
# gpg --gen-key
[/bash]
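If you want to double-check that the key was created, you can list the keys in your keyring:
[bash]
# gpg --list-keys
[/bash]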
Create the required environment variables (you can also set them from a script, as shown after the next block):
[bash]
# export PASSPHRASE=yoursupersecretpassphrase
# export AWS_ACCESS_KEY_ID=XXXXXXXXXXX
# export AWS_SECRET_ACCESS_KEY=XXXXXXXXXX
[/bash]
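For unattended runs, a minimal wrapper script could look like the sketch below; the path, bucket name, keys and passphrase are placeholders you should replace with your own:
[bash]
#!/bin/bash
# backup.sh - minimal Duplicity backup sketch (all values are placeholders)
export PASSPHRASE=yoursupersecretpassphrase
export AWS_ACCESS_KEY_ID=XXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXX

# back up the local folder to the S3 bucket/folder
duplicity /path/to/sample-data s3+http://alfresco-backup/test

# clean the secrets from the environment when done
unset PASSPHRASE AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
[/bash]
You could then schedule a script like this with cron to get periodic backups.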
Backup:
To perform a backup with Duplicity (the easiest, simplest command):
[bash]
# duplicity sample-data/ s3+http://alfresco-backup/test
[/bash]
If you get errors, some dependencies for Python and S3 support may be missing: try installing librsync1 and the Python libraries python-gobject-2, boto and dbus (see below).
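On Ubuntu, something like the following should pull them in (the exact package names are my assumption and may vary between releases):
[bash]
# sudo apt-get install librsync1 python-gobject-2 python-boto python-dbus
[/bash]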
The command output should be something like this:
[bash]
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1368207483.83 (Fri May 10 19:38:03 2013)
EndTime 1368207483.86 (Fri May 10 19:38:03 2013)
ElapsedTime 0.02 (0.02 seconds)
SourceFiles 5
SourceFileSize 1915485 (1.83 MB)
NewFiles 5
NewFileSize 1915485 (1.83 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 5
RawDeltaSize 1907293 (1.82 MB)
TotalDestinationSizeChange 5543 (5.41 KB)
Errors 0
-------------------------------------------------
[/bash]
This will create 3 files in your S3 bucket:
- duplicity-full-signatures.20130510T160711Z.sigtar.gpg
- duplicity-full.20130510T160711Z.manifest.gpg
- duplicity-full.20130510T160711Z.vol1.difftar.gpg
All files are stored in GNU tar format and encrypted. “duplicity-full” means this was the first (full) backup; in subsequent backups you will see “duplicity-inc” volumes instead.
- the sigtar.gpg file contains the file signatures, so Duplicity knows which files have changed and can do incremental backups
- the manifest.gpg file lists all files backed up along with a SHA1 hash of each one
- the volume files (vol1 to volN, depending on your backup size) contain the file data; each volume is around 25 MB, which improves performance during backup and restore (see the example below to adjust this)
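If you want larger or smaller volumes, the --volsize option sets the volume size in MB (a sketch; 50 is an arbitrary value I chose for illustration):
[bash]
# duplicity --volsize 50 sample-data s3+http://alfresco-backup/test
[/bash]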
For more information about the file format, see http://duplicity.nongnu.org/duplicity.1.html#sect19
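To get an overview of the backup chains and sets stored in the target (a handy sanity check; the exact output varies between Duplicity versions):
[bash]
# duplicity collection-status s3+http://alfresco-backup/test
[/bash]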
To force a full backup when the last full backup is older than 30 days (otherwise an incremental one is made):
[bash]
# duplicity --full-if-older-than 30D sample-data s3+http://alfresco-backup/test
[/bash]
To check for differences between the last backup and your local files:
[bash]
# duplicity verify s3+http://alfresco-backup/test sample-data
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Fri May 10 19:38:03 2013
Difference found: File . has mtime Fri May 10 19:39:05 2013, expected Fri May 10 19:34:53 2013
Difference found: File file1.txt has mtime Fri May 10 19:39:05 2013, expected Fri May 10 18:25:36 2013
Verify complete: 5 files compared, 2 differences found.
[/bash]
In the last example we can see that a file called file1.txt has changed, and so has the modification date of the root directory “.”.
List files backed up in S3:
[bash]
# duplicity list-current-files s3+http://alfresco-backup/test
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Fri May 10 18:32:59 2013
Fri May 10 19:34:53 2013 .
Fri May 10 18:25:36 2013 file1.txt
Fri May 10 18:54:31 2013 file2.txt
Fri May 10 19:35:03 2013 mydir
Fri May 10 19:35:03 2013 mydir/file3.txt
[/bash]
You can see 3 files and 2 directories; note that in the statistics report Duplicity counts each directory as a file.
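You can also list the files as they existed at an earlier point in time by combining this with the -t option (2D here is just an example):
[bash]
# duplicity list-current-files -t 2D s3+http://alfresco-backup/test
[/bash]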
Restore:
Duplicity can also manage the restore process, but it will never overwrite an existing file, so you either restore to a different location or remove your corrupted or old data first if you want to restore in the original place. If Duplicity completes the restore successfully, it shows no output.
How to restore last full backup:
[bash]
# duplicity s3+http://alfresco-backup/test restore-dir/
[/bash]
How to restore a single file:
[bash]
# duplicity --file-to-restore mydir/file3.txt s3+http://alfresco-backup/test restore-dir/file3.txt
[/bash]
How to restore the entire backup as of a given date:
[bash]
# duplicity -t 2D s3+http://alfresco-backup/test restore-dir/
[/bash]
This restores the full backup from 2 days ago (see the -t option in the man page; seconds, minutes, hours, months, etc. may also be used).
How to restore a single file from a given date:
If you are looking for a file with certain content but you don’t know which version of the file you need to recover, you can try restoring different versions of it from the backup:
[bash]
# duplicity -t 2D --file-to-restore file1.txt s3+http://alfresco-backup/test file1.txt.2D
# duplicity -t 30D --file-to-restore file1.txt s3+http://alfresco-backup/test file1.txt.30D
[/bash]
Note that you have to specify a different file name for the local copy; remember that Duplicity never overwrites existing content.
Delete older backups:
[bash]
# duplicity remove-older-than 1Y s3+http://alfresco-backup/test --force
[/bash]
You can also use, for example, 6M (six months), 30D (30 days) or 60m (60 minutes).
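If you prefer to keep a fixed number of full backup chains rather than using an age limit, there is also remove-all-but-n-full (here I keep the 2 most recent full backups plus their incrementals; 2 is an arbitrary example):
[bash]
# duplicity remove-all-but-n-full 2 s3+http://alfresco-backup/test --force
[/bash]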
To see more information while running a Duplicity command you can use the verbosity flag -v [1-9]; you can also check the logs under /root/.cache/duplicity/[directory with unique ID]/duplicity-full.YYYYMMDDT182930Z.manifest.part
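For example, to run the same backup with very chatty output (level 8; 9 is the maximum):
[bash]
# duplicity -v8 sample-data s3+http://alfresco-backup/test
[/bash]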
When you are finished playing with Duplicity and Amazon S3, remember to remove your passphrase and Amazon keys from the environment variables:
[bash]
# unset PASSPHRASE
# unset AWS_ACCESS_KEY_ID
# unset AWS_SECRET_ACCESS_KEY
[/bash]
In upcoming posts I will show how to use Duplicity to set up a perfect backup and restore policy for Alfresco.