The PTES pentesting standard is awesome and you should read it

If you’re into pentesting or red teaming, sooner or later you’ll encounter some standardized methodologies.

The National Institute of Standards and Technology (NIST) has one called the “Technical Guide to Information Security Testing and Assessment,” or SP 800-115. I’m a big fan of NIST, and this is a good place to start, especially if you care about FISMA risk management frameworks. But it’s pretty high-level and will probably leave you wanting more.

With a little more Googling, you’ll then find pentest-standard.org. The page has a dated MediaWiki interface. It hasn’t been updated in almost a year. But those things don’t matter; this site is made of open source awesomeness.

The meat of the site lives in the PTES Technical Guidelines. It’s fairly extensive, and if you’re already somewhat familiar with information security, it can go a long way to teaching you about penetration testing.

To give you an idea of the scope of this methodology, take a look at the FreeMind map that they posted, converted here to PNG for your viewing ease.

[Image: penetration_testing_execution_standard mindmap]

Go ahead and click on it; you’ll need to load the whole thing, then zoom. It’s enormous.

Every one of these entries in the mindmap is backed up by some direction in the Technical Guidelines. Granted, PTES doesn’t hold your hand in all places, but for the devoted student of pentesting, this is invaluable stuff.

Now, to be fair, PTES is not the only game in town. There are other methodologies worth mentioning; I’ll write more about them later, but here’s an overview.

OWASP maintains another open source pentesting methodology, the OWASP Testing Guide, but it’s focused on the web application layer. 18F, the folks behind cloud.gov and other cool stuff, requires the use of OWASP’s automated scanner, ZAP, as part of the ATO process.

ISSAF is another cool methodology, but it’s even harder to navigate than PTES. You can download the rar archive, or navigate the individual .doc files. At some point I hope to map PTES and ISSAF steps to one another to identify gaps in the former and contribute back to the project.

As much as I like it, PTES could really use a little TLC. There are incomplete sections. And a more modern interface would help, possibly even a migration to a GitHub Pages model, which would make community contribution easier. A D3 directed graph (example) would make for a nice, interactive mindmap.

But despite its shortcomings, I’d say it’s still the best open source pentesting methodology out there. Go check it out.

Python one-liner: converting JSON to YAML

I’ve been playing with the Titan graph database lately; it’s hella cool, super powerful, and has a great ecosystem. One tool in the Titan toolbox is a REST interface called Rexster.

You can check that it’s up, and see what it’s serving, by curl-ing one of its endpoints.

# curl localhost:8182/graphs/graph
{"version":"2.5.0","name":"graph","graph":"titangraph[cassandrathrift:[127.0.0.1]]","features":{"isWrapper":false,"supportsVertexProperties":true,"supportsMapProperty":true,"supportsUniformListProperty":true,"supportsIndices":false,"ignoresSuppliedIds":true,"supportsFloatProperty":true,"supportsPrimitiveArrayProperty":true,"supportsEdgeIndex":false,"supportsKeyIndices":true,"supportsDoubleProperty":true,"isPersistent":true,"supportsVertexIteration":true,"supportsEdgeProperties":true,"supportsSelfLoops":true,"supportsDuplicateEdges":true,"supportsSerializableObjectProperty":true,"supportsEdgeIteration":true,"supportsVertexIndex":false,"supportsIntegerProperty":true,"supportsBooleanProperty":true,"supportsMixedListProperty":true,"supportsEdgeRetrieval":true,"supportsTransactions":true,"supportsThreadedTransactions":true,"supportsStringProperty":true,"supportsVertexKeyIndex":false,"supportsEdgeKeyIndex":false,"supportsLongProperty":true},"readOnly":false,"type":"com.thinkaurelius.titan.graphdb.database.StandardTitanGraph","queryTime":0.213622,"upTime":"0[d]:00[h]:28[m]:25[s]","extensions":[{"op":"GET","namespace":"tp","name":"gremlin","description":"evaluate an ad-hoc Gremlin script for a graph.","href":"http://localhost:8182/graphs/graph/tp/gremlin","title":"tp:gremlin","parameters":[{"name":"rexster.showTypes","description":"displays the properties of the elements with their native data type (default is false)"},{"name":"language","description":"the gremlin language flavor to use (default is groovy)"},{"name":"params","description":"a map of parameters to bind to the script engine"},{"name":"load","description":"a list of 'stored procedures' to execute prior to the 'script' (if 'script' is not specified then the last script in this argument will return the values"},{"name":"returnTotal","description":"when set to true, the full result set will be iterated and the results returned (default is false)"},{"name":"rexster.returnKeys","description":"an array of element property keys to return (default is to return all element properties)"},{"name":"rexster.offset.start","description":"start index for a paged set of data to be returned"},{"name":"rexster.offset.end","description":"end index for a paged set of data to be returned"},{"name":"script","description":"the Gremlin script to be evaluated"}]},{"op":"POST","namespace":"tp","name":"gremlin","description":"evaluate an ad-hoc Gremlin script for a graph.","href":"http://localhost:8182/graphs/graph/tp/gremlin","title":"tp:gremlin","parameters":[{"name":"rexster.showTypes","description":"displays the properties of the elements with their native data type (default is false)"},{"name":"language","description":"the gremlin language flavor to use (default is groovy)"},{"name":"params","description":"a map of parameters to bind to the script engine"},{"name":"load","description":"a list of 'stored procedures' to execute prior to the 'script' (if 'script' is not specified then the last script in this argument will return the values"},{"name":"returnTotal","description":"when set to true, the full result set will be iterated and the results returned (default is false)"},{"name":"rexster.returnKeys","description":"an array of element property keys to return (default is to return all element properties)"},{"name":"rexster.offset.start","description":"start index for a paged set of data to be returned"},{"name":"rexster.offset.end","description":"end index for a paged set of data to be returned"},{"name":"script","description":"the Gremlin script to be 
evaluated"}]}

Ugly. Python to the rescue.

#!/usr/bin/env python

import simplejson
import sys
import yaml

# read JSON on stdin, emit block-style YAML on stdout
print yaml.dump(simplejson.loads(sys.stdin.read()), default_flow_style=False)

Basically a one-liner.
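
And if you want a literal shell one-liner, the same thing works inline with the stdlib json module instead of simplejson (a quick sketch; PyYAML still has to be installed):

# curl -s localhost:8182/graphs/graph | python -c 'import sys, json, yaml; print(yaml.dump(json.load(sys.stdin), default_flow_style=False))'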

# curl localhost:32791/graphs/graph | python json2yaml.py 
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3581    0  3581    0     0   552k      0 --:--:-- --:--:-- --:--:--  582k
extensions:
- description: evaluate an ad-hoc Gremlin script for a graph.
  href: http://localhost:8182/graphs/graph/tp/gremlin
  name: gremlin
  namespace: tp
  op: GET
  parameters:
  - description: displays the properties of the elements with their native data type
      (default is false)
    name: rexster.showTypes
  - description: the gremlin language flavor to use (default is groovy)
    name: language
  - description: a map of parameters to bind to the script engine
    name: params
  - description: a list of 'stored procedures' to execute prior to the 'script' (if
      'script' is not specified then the last script in this argument will return
      the values
    name: load
  - description: when set to true, the full result set will be iterated and the results
      returned (default is false)
    name: returnTotal
  - description: an array of element property keys to return (default is to return
      all element properties)
    name: rexster.returnKeys
  - description: start index for a paged set of data to be returned
    name: rexster.offset.start
  - description: end index for a paged set of data to be returned
    name: rexster.offset.end
  - description: the Gremlin script to be evaluated
    name: script
  title: tp:gremlin
- description: evaluate an ad-hoc Gremlin script for a graph.
  href: http://localhost:8182/graphs/graph/tp/gremlin
  name: gremlin
  namespace: tp
  op: POST
  parameters:
  - description: displays the properties of the elements with their native data type
      (default is false)
    name: rexster.showTypes
  - description: the gremlin language flavor to use (default is groovy)
    name: language
  - description: a map of parameters to bind to the script engine
    name: params
  - description: a list of 'stored procedures' to execute prior to the 'script' (if
      'script' is not specified then the last script in this argument will return
      the values
    name: load
  - description: when set to true, the full result set will be iterated and the results
      returned (default is false)
    name: returnTotal
  - description: an array of element property keys to return (default is to return
      all element properties)
    name: rexster.returnKeys
  - description: start index for a paged set of data to be returned
    name: rexster.offset.start
  - description: end index for a paged set of data to be returned
    name: rexster.offset.end
  - description: the Gremlin script to be evaluated
    name: script
  title: tp:gremlin
features:
  ignoresSuppliedIds: true
  isPersistent: true
  isWrapper: false
  supportsBooleanProperty: true
  supportsDoubleProperty: true
  supportsDuplicateEdges: true
  supportsEdgeIndex: false
  supportsEdgeIteration: true
  supportsEdgeKeyIndex: false
  supportsEdgeProperties: true
  supportsEdgeRetrieval: true
  supportsFloatProperty: true
  supportsIndices: false
  supportsIntegerProperty: true
  supportsKeyIndices: true
  supportsLongProperty: true
  supportsMapProperty: true
  supportsMixedListProperty: true
  supportsPrimitiveArrayProperty: true
  supportsSelfLoops: true
  supportsSerializableObjectProperty: true
  supportsStringProperty: true
  supportsThreadedTransactions: true
  supportsTransactions: true
  supportsUniformListProperty: true
  supportsVertexIndex: false
  supportsVertexIteration: true
  supportsVertexKeyIndex: false
  supportsVertexProperties: true
graph: titangraph[cassandrathrift:[127.0.0.1]]
name: graph
queryTime: 0.31277
readOnly: false
type: com.thinkaurelius.titan.graphdb.database.StandardTitanGraph
upTime: 0[d]:00[h]:31[m]:27[s]
version: 2.5.0

I love Python. YAML ain’t bad, either.

GitHub two factor authentication with IntelliJ

I’m a big fan of the IntelliJ products and derivatives, particularly PyCharm and Android Studio.

I also use two-factor authentication (2FA) on every site that supports it. GitHub, no stranger to awesomeness, supports 2FA like a boss!

The easiest way to make your IntelliJ IDE play nicely with your 2FA-enabled GitHub account is to use a personal API token. You have to be careful with these, because they’re a form of single-factor authentication, but since they’re long, random, and typically used for one purpose (i.e., your IDE), I think their overall impact on your account’s security is acceptable.

After you’ve created your personal API token (I used the default settings), open the settings dialog in your IntelliJ IDE.

[Image: pycharm_github_settings]

For “Auth Type,” pick “Token.” Paste your token into the field, click “Test” to confirm it works, and you’re good to go!
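
By the way, if the “Test” button fails and you want to rule out the token itself, you can sanity-check it from the command line (YOUR_TOKEN_HERE is a placeholder for your real token):

$ curl -H "Authorization: token YOUR_TOKEN_HERE" https://api.github.com/user

If the token is good, GitHub returns your user profile as JSON.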

Make new KVM VMs in less than 10 seconds

In the course of my day, I tend to spin up lots of VMs on my laptop. KVM is my hypervisor of choice, and since it supports libvirt, there are lots of great tools to make this easier. virt-manager is a nice GUI that’s very helpful for beginners; virt-install is my CLI tool of choice. But if you want to use dnsmasq for guest name resolution, plus DHCP on libvirt networking, it can be a little tedious to type everything out over and over. So I decided to make a tool to save me some time and typing: kvminstall.

Hat tip to Rich Lucente, who shared with me a bash script that inspired me to write kvminstall.

Installation

To install, use Python’s pip. If you haven’t used pip before, it’s easy to install with yum.

# yum install python-pip
# pip install kvminstall
# kvminstall --help
usage: kvminstall [-h] [-c CLONE] [-i IMAGE] [-v VCPUS] [-r RAM] [-d DISK]
                  [-D DOMAIN] [-N NETWORK] [--type TYPE] [--variant VARIANT]
                  [-f CONFIGFILE] [--verbose]
                  name

positional arguments:
  name                  name of the new virtual machine

optional arguments:
  -h, --help            show this help message and exit
  -c CLONE, --clone CLONE
                        name of the source logical volume to be cloned
  -i IMAGE, --image IMAGE
                        image file to duplicate
  -v VCPUS, --vcpus VCPUS
                        number of virtual CPUs
  -r RAM, --ram RAM     amount of RAM in MB
  -d DISK, --disk DISK  disk size in GB
  -D DOMAIN, --domain DOMAIN
                        domainname for dhcp / dnsmasq
  -N NETWORK, --network NETWORK
                        libvirt network
  --type TYPE           os type, i.e., linux
  --variant VARIANT     os variant, i.e., rhel7
  -f CONFIGFILE, --configfile CONFIGFILE
                        specify an alternate config file,
                        default=~/.config/kvminstall/config.yaml
  --verbose             verbose output

Configuration

In your .config directory, kvminstall sets up a YAML file with defaults. You can specify any of these options interactively, or, if you want to minimize typing, you can set your defaults in ~/.config/kvminstall/config.yaml:

---
vcpus: 1
ram: 1024
disk: 10
domain: example.com
network: default
mac: 5c:e0:c5:c4:26
type: linux
variant: rhel7

The MAC address can be specified as up to five colon-delimited fields. If you specify fewer, kvminstall will auto-complete the address with random, available values.
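
To illustrate the idea (this is not kvminstall’s actual code), padding a five-field prefix with a random sixth octet looks something like this; the real tool also checks that the result is actually available:

# hypothetical sketch of the auto-complete behavior
prefix="5c:e0:c5:c4:26"
printf -v octet '%02x' $((RANDOM % 256))
echo "${prefix}:${octet}"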

Usage

The current version, 0.1.3, supports only image-based installs: either by snapshotting an LVM volume or by copying an image file. I intend to add kickstart and ISO support, but hey, release early, release often.

Image File

Most people will probably want to copy an image file. Let’s assume that you’ve built a base image and that its root volume lives in /var/lib/libvirt/images/rhel71base.img. (My next post will be on building base images.) To create a new VM called ‘testvm’ based on that image:

# kvminstall -c /var/lib/libvirt/images/rhel71base.img testvm

You’re mostly I/O bound here, as you’re copying rhel71base.img to testvm.img. Shortly after that finishes, you’ve got a new VM with all of your host and guest networking configured.

# virsh list
 Id    Name                           State
----------------------------------------------------
 2     testvm                         running

# grep testvm /etc/hosts
192.168.122.27	testvm.example.com testvm
# ssh testvm
Last login: Thu Aug 27 13:30:25 2015 from 192.168.122.1
[root@testvm ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 5c:e0:c5:c4:26:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.27/24 brd 192.168.122.255 scope global dynamic eth0
       valid_lft 2141sec preferred_lft 2141sec
    inet6 fe80::5ee0:c5ff:fec4:267a/64 scope link 
       valid_lft forever preferred_lft forever
# nslookup testvm.example.com
Server:		192.168.122.1
Address:	192.168.122.1#53

Name:	testvm.example.com
Address: 192.168.122.27

The guest networking has been set up with virsh. An available IP and MAC address have been automatically picked based on your DHCP scope. (In the next version I’ll add support for specifying an IP address.)

# virsh net-dumpxml default
<network connections='1'>
  <name>default</name>
  <uuid>431ea266-8584-4e10-866a-fc1a3ad419b5</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:d0:5e:a3'/>
  <dns>
    <host ip='192.168.122.27'>
      <hostname>testvm.example.com</hostname>
    </host>
  </dns>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <host mac='5c:e0:c5:c4:26:7a' name='testvm.example.com' ip='192.168.122.27'/>
    </dhcp>
  </ip>
</network>

The dnsmasq service is automatically restarted after /etc/hosts is updated. This way, as long as resolv.conf is set up properly in your base image, DNS hostname resolution will work in your guest network.
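
For the curious, the sequence kvminstall automates for an image-file install looks roughly like this (an illustrative sketch using the values from this example, not the project’s actual code):

# copy the base image (this is the I/O-bound part)
cp /var/lib/libvirt/images/rhel71base.img /var/lib/libvirt/images/testvm.img

# define and start the guest from the copied disk
virt-install --name testvm --ram 1024 --vcpus 1 \
    --disk path=/var/lib/libvirt/images/testvm.img \
    --network network=default,mac=5c:e0:c5:c4:26:7a \
    --os-type linux --os-variant rhel7 --import --noautoconsole

# reserve the IP and register the hostname in libvirt's DHCP and DNS
virsh net-update default add ip-dhcp-host \
    "<host mac='5c:e0:c5:c4:26:7a' name='testvm.example.com' ip='192.168.122.27'/>" \
    --live --config
virsh net-update default add dns-host \
    "<host ip='192.168.122.27'><hostname>testvm.example.com</hostname></host>" \
    --live --config

# host-side name resolution: update /etc/hosts and bounce dnsmasq
echo "192.168.122.27 testvm.example.com testvm" >> /etc/hosts
systemctl restart dnsmasq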

LVM Volume

Now, I use LVM volumes on my laptop, served up from an M.2 SATA drive. This gives me better I/O, since host and guest storage live on separate devices. It’s also much faster to snapshot a base image’s root volume than to copy an image file; using kvminstall with an LVM snapshot, you can get VM creation time down to a couple of seconds. My LVM volume group is called libvirt_lvm.

# lvs
  LV                 VG          Attr       LSize   Pool Origin     Data%  Meta%  Move Log Cpy%Sync Convert
  home               fedora      -wi-ao---- 500.00g                                                        
  root               fedora      -wi-ao---- 366.82g                                                        
  swap               fedora      -wi-ao----  64.00g                                                        
  rhel71base         libvirt_lvm owi-a-s---  10.00g                                                        
# time kvminstall -c /dev/libvirt_lvm/rhel71base testvm

real	0m2.217s
user	0m1.012s
sys	0m0.218s
[root@w550 ~]# ssh testvm
Warning: Permanently added the ECDSA host key for IP address '192.168.133.164' to the list of known hosts.
Last login: Sat Aug  8 21:02:29 2015 from 192.168.133.1
[root@testvm ~]# exit
# lvs
  LV                 VG          Attr       LSize   Pool Origin     Data%  Meta%  Move Log Cpy%Sync Convert
  home               fedora      -wi-ao---- 500.00g                                                        
  root               fedora      -wi-ao---- 366.82g                                                        
  swap               fedora      -wi-ao----  64.00g                                                               
  rhel71base         libvirt_lvm owi-a-s---  10.00g                                                        
  testvm             libvirt_lvm swi-aos---  10.00g      rhel71base 0.06                                   
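
The speed comes from LVM’s copy-on-write snapshots: nothing is copied up front, and the snapshot only consumes space as the clone diverges from the base (note the 0.06% data usage above). Under the hood, the clone is essentially a single lvcreate call, something like:

# snapshot the base volume; the new LV shares unchanged blocks with rhel71base
lvcreate --snapshot --name testvm --size 10G /dev/libvirt_lvm/rhel71base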

Upcoming features

It would be nice if we could — just as quickly — remove the VMs, or even reset them back to their base images. In the next version, expect kvmuninstall and kvmreset commands.

I’d love feedback. Please feel free to comment here or open issues on the GitHub project page.

Stay tuned for my next article on building base images for easy cloning.

Gluster on AWS performance analysis

A bunch of my customers use AWS, which is great. So do I. But there are a couple of gotchas, like scalable NFS.

If you’re benchmarking your Gluster volume with the Phoronix Test Suite’s fio test, don’t forget bzip2, or you’ll get this error:

    pts/fio-1.8.2:
        Test Installation 1 of 1
        1 File Needed [0.44 MB]
        Downloading: fio-2.1.13.tar.bz2                                      [0.44MB]
        Downloading .................................................................
        Installation Size: 4 MB
        Installing Test @ 21:28:58
            The installer exited with a non-zero exit status.
            ERROR: make: *** No targets specified and no makefile found.  Stop.
            LOG: /mnt/pts/fio-1.8.2/install-failed.log
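
The Phoronix Test Suite downloads fio as a .tar.bz2 archive; without bzip2 it can’t unpack the tarball, which is why make finds no makefile to build. The fix is quick, then re-run the test:

# yum install -y bzip2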

Safely backing up EC2 to S3 with Boto and IAM

As a fan of Bruce Schneier, I like things that, as he would say, ‘fail well.’ So when I set out to back up my EC2 instance to S3, I wanted to find a way in which my instance could be compromised (somewhat likely), but my backups would be safe from malicious deletion or modification (disastrous).

AWS has a service called Identity & Access Management (IAM), which brings role-based access control (RBAC) to the AWS APIs. You can make IAM users, groups, and roles. Since I didn’t want my personal access and secret keys on my EC2 instance (that wouldn’t fail well!), I decided to make a new user called jason-readonly.

My plan was to create a read-only user that could drop backups from EC2 into my S3 bucket, named jason-backups, but in such a way that anybody who had jason-readonly’s key pair would be unable to delete or modify those backups.

From the IAM console, I created my jason-readonly user and added a User Policy consisting of the following:

{
  "Version": "2012-12-26",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "s3:PutObject", "s3:ListBucket" ],
      "Resource": [ "arn:aws:s3:::jason-backups",
      "arn:aws:s3:::jason-backups/*" ]
    }
  ]
}

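One caveat worth calling out: s3:PutObject still lets anybody holding these keys overwrite an existing object of the same name. To really fail well, enable versioning on the bucket so overwrites preserve the prior version. With the AWS CLI (run as an admin, not as jason-readonly), that’s something like:

aws s3api put-bucket-versioning --bucket jason-backups --versioning-configuration Status=Enabled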

With that finished, I created a few scripts on my EC2 instance to handle the backing-up. First, a bash script to tar up my interesting directories and capture any MySQL content:

#!/bin/bash

DATE=`date '+%Y%m%d%H%M%S'`
BACKUP_DIR=/root/backups

SQL_FILE="${BACKUP_DIR}/mysql-${DATE}.sql"
TAR_FILE="${BACKUP_DIR}/jason-backup-${DATE}.tar.gz"

# directories (and the SQL dump) to include in the tarball
BACKUP_TARGETS=''
BACKUP_TARGETS="${BACKUP_TARGETS} /opt/foo/bar"
BACKUP_TARGETS="${BACKUP_TARGETS} /var/www/html"
BACKUP_TARGETS="${BACKUP_TARGETS} /etc/httpd"
BACKUP_TARGETS="${BACKUP_TARGETS} ${SQL_FILE}"

mkdir -p ${BACKUP_DIR}

# dump the database content, then roll everything into one tarball
mysqldump --no-create-info --complete-insert --extended-insert=FALSE --compact --user='username' --password='changeme' redmine >> ${SQL_FILE}
tar czf ${TAR_FILE} ${BACKUP_TARGETS} >/dev/null 2>&1


Next, I wrote a Boto script to push the tarball into S3:

from boto.s3.connection import S3Connection
from boto.s3.key import Key
from datetime import datetime
import sys

# open the tarball in binary mode and build a timestamped key name
# from the file's basename
tarball = open(sys.argv[1], 'rb')
file_name = sys.argv[1].split('/')[-1]
date_string = datetime.now().strftime('%Y-%m-%d-%H:%M:%S')
key_name = date_string + '-' + file_name

try:
    s3conn = S3Connection('my-access-key', 'my-secret-key')
    bucket = s3conn.get_bucket('jason-backups')
    key = Key(bucket, key_name)
    key.set_contents_from_file(tarball)
except Exception, e:
    print 'Exception caught:'
    print e
    sys.exit(1)


Note that I’m prepending a date/time stamp to the S3 object name. This lets me push the same file multiple times. I like this, but if you don’t, it’s easy to remove.

Now we just have to automate it with cron. So into /etc/cron.daily I dropped a file called backup with 755 permissions. In it:

#!/bin/bash
# create today's backup and push it to S3
sh /root/tarup.sh
python /root/backup.py `ls -1rt /root/backups/jason* | tail -1`
# keep only the five newest tarballs locally
mkdir -p /root/backups/save
mv `ls -1rt /root/backups/jason* | tail -5 | tr '\n' ' '` /root/backups/save
rm -f /root/backups/*.tar.gz /root/backups/*.sql
mv /root/backups/save/* /root/backups/
rmdir /root/backups/save

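Incidentally, the keep-the-newest-five shuffle can collapse into one line (a sketch against the paths used above):

# list tarballs newest-first, skip the first five, delete the rest
ls -1t /root/backups/jason-backup-*.tar.gz | tail -n +6 | xargs -r rm -f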

There are lots of ways to do this. You could use deja-dup, which is far more powerful, and you could be smarter still about retention than the snippet above. This works for me, but I’d love to hear how you do it. Hit me up in the comments!

Backing up WordPress on OpenShift

When I logged into my WordPress admin page today, I saw a friendly message saying that it’s time to upgrade.

WordPress recommends that you back up before upgrading. If your blog is hosted on openshift.com like mine is, then here’s a process to back up your WordPress gear.

First, do a git clone to pull down your PHP environment. In this example, my WordPress gear is called ‘blog’.

[jason@localhost ~]$ rhc git-clone blog
Cloning into 'blog'...
Your application Git repository has been cloned to '/home/jason/blog/blog'

Next, you need to back up your MySQL database. The general process is:

  1. SSH into your gear
  2. Create a temp directory if one doesn’t already exist
  3. mysqldump your WordPress database using the OpenShift environment variables
  4. SCP your dump to a backup location

So here we go…

[jason@localhost ~]$ cd blog
[jason@localhost blog]$ rhc ssh blog
[blog-callaway.rhcloud.com 51b4c584500446eb79000070]> mkdir -p app-root/data/tmp
[blog-callaway.rhcloud.com 51b4c584500446eb79000070]> mysqldump --user="${OPENSHIFT_MYSQL_DB_USERNAME}" --password="${OPENSHIFT_MYSQL_DB_PASSWORD}" --host="${OPENSHIFT_MYSQL_DB_HOST}" --port="${OPENSHIFT_MYSQL_DB_PORT}" --no-create-info --complete-insert --extended-insert=FALSE blog > app-root/data/tmp/wordpress.sql
[blog-callaway.rhcloud.com 51b4c584500446eb79000070]> exit
[jason@localhost blog]$ rhc apps
blog @ http://blog-callaway.rhcloud.com/ (uuid: 51b4c584500446eb79000070)
-------------------------------------------------------------------------
  Domain:          callaway
  Created:         Jun 09  2:12 PM
  Gears:           1 (defaults to small)
  Git URL:         ssh://51b4c584500446eb79000070@blog-callaway.rhcloud.com/~/git/blog.git/
  Initial Git URL: git://github.com/openshift/wordpress-example.git
  SSH:             51b4c584500446eb79000070@blog-callaway.rhcloud.com
  Deployment:      auto (on git push)
  Aliases:         blog.jasoncallaway.com

  php-5.3 (PHP 5.3)
  -----------------
    Gears: Located with mysql-5.1

  mysql-5.1 (MySQL 5.1)
  ---------------------
    Gears:          Located with php-5.3
    Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
    Database Name:  blog
    Password:       redacted
    Username:       redacted

You have 1 applications
[jason@localhost blog]$ scp 51b4c584500446eb79000070@blog-callaway.rhcloud.com:~/app-root/data/tmp/wordpress.sql .
wordpress.sql                                        100% 1288KB   1.3MB/s   00:00

A few notes about this approach:

  • It could be better automated by doing the mysqldump non-interactively (see the sketch below)
  • The mysqldump options omit the schema. If you want to grab both schema and content, remove the --no-create-info option
  • If you wanted to restore, you’d do a git push from your cloned directory, then scp the saved sql, ssh in, and then load the sql like this:
    • mysql --user="${OPENSHIFT_MYSQL_DB_USERNAME}" --password="${OPENSHIFT_MYSQL_DB_PASSWORD}" --host="${OPENSHIFT_MYSQL_DB_HOST}" --port="${OPENSHIFT_MYSQL_DB_PORT}" blog < ~/app-root/data/tmp/wordpress.sql
  • There are probably more clever ways to do this. This process was just the first one that jumped into my head
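
For example, the mysqldump step can run non-interactively over ssh in one shot (a sketch using my gear’s SSH address from above; the single quotes keep the OpenShift variables from expanding until they reach the gear):

[jason@localhost ~]$ ssh 51b4c584500446eb79000070@blog-callaway.rhcloud.com 'mysqldump --user="${OPENSHIFT_MYSQL_DB_USERNAME}" --password="${OPENSHIFT_MYSQL_DB_PASSWORD}" --host="${OPENSHIFT_MYSQL_DB_HOST}" --port="${OPENSHIFT_MYSQL_DB_PORT}" --no-create-info --complete-insert --extended-insert=FALSE blog' > wordpress.sql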
If you have better ways of backing up WordPress, sound off!