
Back up data to AWS simply and quickly with s3cmd

Published 16:51, 2015-05-25 | Source: blog | Author: David Clinton

Abstract: Still worried that your data backup plan isn't good enough? s3cmd and AWS can help.

Having a data backup system installed is far from sufficient protection against the loss or damage of personal and work files. Ideally, data should also be stored safely and reliably in one or more places away from your home or office. So even if you already back up documents to a USB drive (old-fashioned) and sync part of them to Google Drive, adding one more backup method can't hurt.

There are problems with those kinds of storage, though. For example, some people complain that "the setup is too complicated," while others say, "I tried it before, but I sometimes forget to actually update it regularly."

Do you have an Amazon AWS account? Are you comfortable with your operating system's command line? Then let me introduce a very simple, very cheap, very reliable DIY data backup plan that you only need to set up once, with no follow-up work (though you should still check on it regularly). It is also very inexpensive: only about $0.03 per GB per month.

Download and install S3cmd

First, you need to install s3cmd on your system. This process has been thoroughly tested on Linux (without causing any damage), and it is the same on a Mac. For Windows, I believe the AWS CLI offers comparable functionality.

First, install Python and wget:

sudo apt-get install python python-setuptools wget

Then, download the s3cmd package with wget (1.5.2 was the latest version at the time of writing):

Run tar to extract the archive:

tar xzvf s3cmd-1.5.2.tar.gz

Enter the newly created s3cmd directory:

cd s3cmd-1.5.2

And run the setup program:

sudo python setup.py install

You can now configure s3cmd:

s3cmd --configure

To access S3, you will need to fill in your AWS account Access Key ID and Secret Key, along with some other authentication, encryption, and account information. The configuration tool will test the S3 connection and save your settings, and then all the preparation work is complete.
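For reference, s3cmd stores the settings you enter in a configuration file (by default ~/.s3cfg). As a hedged sketch, the relevant part of that file looks something like the excerpt below; the exact set of keys may differ between s3cmd versions:

```ini
; ~/.s3cfg (excerpt) -- written by "s3cmd --configure"
[default]
access_key = YOUR_AWS_ACCESS_KEY_ID
secret_key = YOUR_AWS_SECRET_ACCESS_KEY
use_https = True
```

Keep this file private, since it contains your AWS credentials in plain text.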

Create the data backup

Since a data backup without data makes no sense, you need to decide which folders and files to back up. You also need to create a new bucket, which you can do from the AWS console (or with s3cmd itself, via "s3cmd mb s3://bucketname"):

Now, suppose all your important data is in the workfiles directory, and the bucket is named mybackupbucket8387. The backup command would then be the following:

s3cmd sync /home/yourname/workfiles/ s3://mybackupbucket8387/ --delete-removed
Incidentally, the trailing slash on both the source and the destination address is important.

Let's break that command down:

sync tells the tool to keep the files at the source location and the target location identical. Each update first checks the contents of the two directories and copies over any file that exists in one but not the other. The two addresses simply define the two locations whose data will be synchronized, and --delete-removed tells the tool to delete from the S3 bucket any files that are no longer stored locally.

Depending on the size of the data backup, the first run may take some time.

Sometimes you may not want to use --delete-removed; perhaps you would like to keep old versions of overwritten files. In that case, simply remove the --delete-removed parameter from the command and enable versioning on the S3 bucket.
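Versioning is normally enabled from the bucket's properties in the AWS console, but for reference, the setting behind that switch is a small XML document sent to the S3 API. A hedged sketch of the request body (per my understanding of the S3 PUT Bucket versioning API) looks like:

```xml
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Status>Enabled</Status>
</VersioningConfiguration>
```

Once versioning is enabled, overwritten and deleted objects are kept as "noncurrent" versions instead of disappearing.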

If you want to reduce costs further, and also automatically delete files whose archived versions have long since been superseded, you can use the AWS console to create a lifecycle rule for the bucket that moves old versions or files, say those older than 30 days, to Glacier. Glacier storage costs only $0.01 per GB per month.
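If you create the rule through the console you never see it directly, but a lifecycle rule is also just an XML document in the S3 API. The following is a hedged sketch (based on my understanding of the S3 lifecycle configuration format, not taken from the original article) of a rule that moves noncurrent, i.e. overwritten, versions to Glacier after 30 days:

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>archive-old-versions</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <NoncurrentVersionTransition>
      <NoncurrentDays>30</NoncurrentDays>
      <StorageClass>GLACIER</StorageClass>
    </NoncurrentVersionTransition>
  </Rule>
</LifecycleConfiguration>
```

The empty Prefix applies the rule to every object in the bucket.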

So there you have a simple and inexpensive data backup. But it has not yet reached the "set it and forget it" stage; there is still one very simple step remaining (at least on Ubuntu): creating a cron job.

If you want to synchronize the files every hour, create a text file containing only the following two lines:

#!/bin/bash
s3cmd sync /home/yourname/workfiles/ s3://mybackupbucket8387/

Then use sudo to save the file in the /etc/cron.hourly/ directory.

If the file is named "mybackup", the following command will make it executable:

sudo chmod +x mybackup
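Once the job is in place, you may also want it to log its runs and avoid overlapping executions if a sync takes longer than an hour. Below is a hedged sketch that wraps the article's s3cmd call in a shell function with a simple lock directory and a log file; the function name run_backup, the lock path, and the log-file argument are my own illustrative additions, not part of the original article:

```shell
#!/bin/bash
# run_backup: a hedged sketch extending the article's two-line cron script
# with a lock (to avoid overlapping runs) and a log file. The function name
# and log-file argument are illustrative, not from the original article.
run_backup() {
    local src="$1" dest="$2" log="$3"
    local lockdir="${TMPDIR:-/tmp}/mybackup.lock"

    # If a previous run is still active, skip this one.
    if ! mkdir "$lockdir" 2>/dev/null; then
        echo "mybackup: previous run still active, skipping" >> "$log"
        return 0
    fi

    echo "starting sync of $src" >> "$log"
    s3cmd sync "$src" "$dest" >> "$log" 2>&1
    echo "sync finished" >> "$log"
    rmdir "$lockdir"
}

# Example call, using the paths from the article:
# run_backup /home/yourname/workfiles/ s3://mybackupbucket8387/ /tmp/mybackup.log
```

Saving this as /etc/cron.hourly/mybackup with a final line that calls run_backup would behave like the simpler script above, but with a trail you can check.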

Original address: http://


(Translation: Li Yili / Commissioning editor: Wang Xinhe)

