Safely backing up EC2 to S3 with Boto and IAM

As a fan of Bruce Schneier, I like things that, as he would say, ‘fail well.’ So when I set out to back up my EC2 instance to S3, I wanted to find a way in which my instance could be compromised (somewhat likely), but my backups would be safe from malicious deletion or modification (disastrous).

AWS has a service called Identity & Access Management (IAM), which brings role-based access control (RBAC) to the AWS APIs. You can make IAM users, groups, and roles. Since I didn’t want my personal access and secret keys sitting on my EC2 instance (that wouldn’t fail well!), I decided to make a new user, called jason-readonly.

My plan was to create a limited user that could drop backups from EC2 into my S3 bucket, named jason-backups, but in such a way that anybody who held jason-readonly’s key pair would be unable to delete or modify those backups.

From the IAM console, I created my jason-readonly user and added a User Policy consisting of the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "s3:PutObject", "s3:ListBucket" ],
      "Resource": [ "arn:aws:s3:::jason-backups",
                    "arn:aws:s3:::jason-backups/*" ]
    }
  ]
}
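
Since the whole point is failing well, it’s worth double-checking that nothing in that statement grants a destructive action. Here’s a quick local sanity check using only the stdlib — the policy is inlined with the standard "2012-10-17" version string, and granted_actions is just my own little helper:

```python
import json

# The user policy from above, inlined for a quick local sanity check.
POLICY = '''{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "s3:PutObject", "s3:ListBucket" ],
      "Resource": [ "arn:aws:s3:::jason-backups",
                    "arn:aws:s3:::jason-backups/*" ]
    }
  ]
}'''

def granted_actions(policy_json):
    """Collect every action the policy document allows."""
    actions = set()
    for stmt in json.loads(policy_json)['Statement']:
        if stmt['Effect'] == 'Allow':
            actions.update(stmt['Action'])
    return actions

# Failing well: the key holder can upload and list, but nothing
# here permits deleting existing backups.
assert 's3:DeleteObject' not in granted_actions(POLICY)
```

Of course this only inspects the document itself; IAM is the real enforcement point.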


With that finished, I created a few scripts on my EC2 instance to handle the backing-up. First, a bash script to tar up my interesting directories and capture any MySQL content:

#!/bin/bash

DATE=`date '+%Y%m%d%H%M%S'`
BACKUP_DIR=/root/backups
mkdir -p "${BACKUP_DIR}"

SQL_FILE="${BACKUP_DIR}/mysql-${DATE}.sql"
TAR_FILE="${BACKUP_DIR}/jason-backup-${DATE}.tar.gz"

BACKUP_TARGETS=''
BACKUP_TARGETS="${BACKUP_TARGETS} /opt/foo/bar"
BACKUP_TARGETS="${BACKUP_TARGETS} /var/www/html"
BACKUP_TARGETS="${BACKUP_TARGETS} /etc/httpd"
BACKUP_TARGETS="${BACKUP_TARGETS} ${SQL_FILE}"

mysqldump --no-create-info --complete-insert --extended-insert=FALSE --compact --user='username' --password='changeme' redmine >> "${SQL_FILE}"
# BACKUP_TARGETS is intentionally unquoted so it word-splits into separate paths
tar czf "${TAR_FILE}" ${BACKUP_TARGETS} >/dev/null 2>&1
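
If you’d rather not shell out for the packaging step, the same thing can be sketched in Python with the stdlib tarfile module — make_backup_tarball is a hypothetical helper of mine, and it skips the mysqldump step entirely:

```python
import tarfile

def make_backup_tarball(tar_path, targets):
    """Write a gzipped tarball containing each path in targets,
    roughly what the script's `tar czf` invocation does."""
    with tarfile.open(tar_path, 'w:gz') as tar:
        for path in targets:
            tar.add(path)
```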


Next, I wrote a Boto script to push the tarball into S3:

import os
import sys
from datetime import datetime

from boto.s3.connection import S3Connection
from boto.s3.key import Key

tarball = open(sys.argv[1], 'rb')   # binary mode: the tarball isn't text
date_string = datetime.now().strftime('%Y-%m-%d-%H:%M:%S')
# os.path.basename is safer than splitting on '/' and indexing, which
# only works when the path is exactly three directories deep
key_name = date_string + '-' + os.path.basename(sys.argv[1])

try:
    s3conn = S3Connection('my-access-key', 'my-secret-key')
    bucket = s3conn.get_bucket('jason-backups')
    key = Key(bucket, key_name)
    key.set_contents_from_file(tarball)
except Exception as e:
    print 'Exception caught:'
    print e
    sys.exit(1)


Note that I’m prepending a date/time stamp to the S3 object name. This lets me push the same file multiple times without one upload clobbering another. I like this, but if you don’t, it’s easy to remove.
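
That key-naming scheme reduces to a tiny helper, which is easier to test in isolation — make_key_name is my name for it, not a boto API:

```python
import os
from datetime import datetime

def make_key_name(local_path, now=None):
    """Build an S3 object name: a timestamp prefix plus the tarball's
    basename, so re-uploading the same file never collides."""
    stamp = (now or datetime.now()).strftime('%Y-%m-%d-%H:%M:%S')
    return stamp + '-' + os.path.basename(local_path)
```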

Now we just have to automate it with cron. So into /etc/cron.daily I dropped a file called backup with 755 permissions. In it:

#!/bin/bash
sh /root/tarup.sh
python /root/backup.py `ls -1rt /root/backups/jason* | tail -1`
mkdir -p /root/backups/save
mv `ls -1rt /root/backups/jason-backup* | tail -5 | tr '\n' ' '` /root/backups/save
rm -f /root/backups/*.tar.gz /root/backups/*.sql
mv /root/backups/save/* /root/backups/
rmdir /root/backups/save


There are lots of ways to do this. You could use Déjà Dup, which is far more powerful, and you could be much smarter about keeping only the latest five backup files. This works for me, but I’d love to hear how you do it. Hit me up in the comments!
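
For what it’s worth, the save-directory shuffle above can be replaced by sorting on modification time and deleting everything past the newest five. A sketch — prune_backups is hypothetical, not from any library:

```python
import glob
import os

def prune_backups(backup_dir, keep=5, pattern='jason-backup-*.tar.gz'):
    """Delete all but the `keep` newest tarballs (judged by mtime)
    and return the paths that survive."""
    tarballs = sorted(glob.glob(os.path.join(backup_dir, pattern)),
                      key=os.path.getmtime)
    for stale in tarballs[:-keep] if keep > 0 else tarballs:
        os.remove(stale)
    return tarballs[-keep:] if keep > 0 else []
```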
