Frequently Asked Questions
How can I automatically download backups to my local machine?
The best way to do this is to turn on the S3 upload feature, so that a copy of each backup is sent to an S3 account that you own. If you also want the backups on your local server, you can run s3sync on it daily to sync with the S3 account.
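For example, a daily cron entry on your local server might look like the following sketch. It assumes you use the AWS CLI's aws s3 sync command in place of s3sync, and the bucket name and local path are placeholders:

# Hypothetical cron entry: sync the backup prefix of your bucket to a
# local directory every day at 03:00 (bucket and paths are placeholders).
0 3 * * * aws s3 sync s3://my-backup-bucket/backups /var/backups/repositoryhosting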
Another way to do this is to create a script that automatically downloads the backup each day. Backups may be accessed using a URL such as:
https://sub.repositoryhosting.com/projects/1/backups/2010/02/27/00
The URL contains the year, the month, the day, and a 2-digit index. The index is 00 for the first backup created that day, 01 for the second, and so on. If you only have daily backups set up, there will be just one backup per day, and the index will always be 00.
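For example, a shell script can build today's URL from the current date (using the example subdomain and project ID above; the 00 index assumes it is the first backup of the day):

TODAY_URL="https://sub.repositoryhosting.com/projects/1/backups/`date +%Y/%m/%d`/00"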
We also provide a handy URL for retrieving the most recent backup for a project:
https://sub.repositoryhosting.com/projects/1/backups/latest
This allows you to create a simple script that downloads the backup each day. For example, on Linux you could use something like the following (the first command logs you in, the second downloads today's backup):
curl -sS -X POST 'https://sub.repositoryhosting.com/session' -d "username=myuser&password=mypass" -c cookies.txt
curl -sS -L "https://sub.repositoryhosting.com/projects/1/backups/latest" -b cookies.txt -o "subdomain.`date +%Y-%m-%d`.tar.gz"
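Putting this together, a daily download script might look like the following sketch. The subdomain, project ID, username, password, and destination directory are placeholders from the examples above; in practice you may want to read the credentials from a protected file rather than hard-coding them:

#!/bin/sh
# Minimal sketch of a daily backup download; all names are placeholders.
SUB="https://sub.repositoryhosting.com"
DEST="/var/backups/repositoryhosting"
COOKIES=`mktemp`

# Log in and save the session cookie (the response body is discarded).
curl -sS -X POST "$SUB/session" -d "username=myuser&password=mypass" -c "$COOKIES" -o /dev/null

# Download the most recent backup, naming the file after today's date.
curl -sS -L "$SUB/projects/1/backups/latest" -b "$COOKIES" -o "$DEST/subdomain.`date +%Y-%m-%d`.tar.gz"

# Clean up so the session cookie isn't left on disk.
rm -f "$COOKIES"

You could then run this script from cron to fetch the backup automatically each day.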
One of our customers has created a more customizable download script, which you can find on his blog: http://samsalisbury.net/articles/repositoryhosting-backup-script/. Thank you, Sam!