Backup Your Data

Don't wait until you find yourself wishing you had made a backup of your account.  You can easily back up all or part of your account.

The Easy Way To Back Up

Control Panel

You can use your Control Panel to easily perform a backup of your data.  Advantages of using your Control Panel to back up include:

  • This backup file is updated daily.

  • The backup file includes your files, emails, etc.

  • Just point and click to download the backup file.

  • It's advisable to perform frequent backups.

  • Keep your backup files separate from one another for additional safety.

  • Unzip each backup you download to verify its integrity; we do not accept responsibility for the backups you receive, as too many variables are outside of our control.

The Manual Way To Back Up

If you would rather create your own backup and automate the manual backup process, the following instructions are for you.

tar Backup

You can manually run your own tar command via SSH/Telnet, which will create a compressed backup that you can then download via FTP.

Log in to your account via SSH.  To back up all the files on your account (including unread email, etc.), issue the following command...

tar -czf /home/username/backup.tar.gz /home/username

Make sure to replace username with your login name.

This will create an archive (c), gzip it (z), and save it to a file (f) named backup.tar.gz in /home/username/, backing up the entire contents of the /home/username folder.
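
One caveat: because backup.tar.gz is written inside the very folder being archived, tar may complain that the file changed while it was being read.  A hedged sketch of the same command with an --exclude flag added (run here against a throwaway directory so it is safe to try; on a real account you would substitute /home/username):

```shell
# Create a throwaway stand-in for /home/username so this sketch is safe to run.
HOME_DIR=$(mktemp -d)
echo "hello" > "$HOME_DIR/index.html"

# Same flags as above (c=create, z=gzip, f=file), plus --exclude so the
# archive does not try to swallow itself; -C changes into the folder first.
tar -czf "$HOME_DIR/backup.tar.gz" --exclude='backup.tar.gz' -C "$HOME_DIR" .

# List the archive's contents to confirm what was captured.
tar -tzf "$HOME_DIR/backup.tar.gz"
```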

This file is located outside the web root, so no one else can access it via http/https.  This is done for security reasons.  You will need to download the file to your local PC via your FTP software.

If you just want to back up the files and folders located in your public_html folder, the correct command would be...

tar -czf /home/username/backup.tar.gz /home/username/public_html

Remember to delete the backup once you have downloaded it; while it remains on your account, it counts as part of your allotted space.
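
Before deleting the server-side copy, it's worth confirming that the archive is intact.  A small sketch of that check (made self-contained here with a sample archive; on your account the paths would be your own):

```shell
# Build a sample archive so the sketch is self-contained.
WORKDIR=$(mktemp -d)
echo "sample" > "$WORKDIR/file.txt"
tar -czf "$WORKDIR/backup.tar.gz" -C "$WORKDIR" file.txt

# gzip -t test-decompresses the archive without writing anything; if it
# passes, the file is not corrupt and is safe to treat as your good copy.
gzip -t "$WORKDIR/backup.tar.gz" && echo "archive OK"

# Only then reclaim the space.
rm "$WORKDIR/backup.tar.gz"
```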

Automating Your Backup Via Cron

You can also set up a cron job (a scheduled task that runs a program or script at set times) to perform the backup.  This is a little more technical to perform, but it also puts you in control of what is backed up and how often.

1. First, create a small shell script (the UNIX equivalent of a batch file) using an ASCII/text editor...


#!/bin/sh
rm -f /home/username/backup.tar.gz
tar -czf /home/username/backup.tar.gz /home/username

Of course, replace "username" with your username.

Save this with any filename; it must end with a .sh extension.

2. FTP the file to your account below the web root (in your non-web space, outside of public_html).

3. Make the script executable by chmoding it to 755.
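
Steps 1 through 3 can be sketched end to end like this (the filename backup.sh and the temporary directory are illustrative; on a real account you would FTP the file up and chmod it there):

```shell
# Stand-in directory for the demo; on a real account this is your home folder.
WORKDIR=$(mktemp -d)

# Step 1: the small shell script from above ($HOME stands in for
# /home/username purely so this sketch is self-contained).
cat > "$WORKDIR/backup.sh" <<'EOF'
#!/bin/sh
rm -f "$HOME/backup.tar.gz"
tar -czf "$HOME/backup.tar.gz" --exclude='backup.tar.gz' "$HOME"
EOF

# Step 3: make it executable (755 = owner rwx, group/other r-x).
chmod 755 "$WORKDIR/backup.sh"
ls -l "$WORKDIR/backup.sh"
```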

4. There are two parts to this step.  First, in the Control Panel, click on "cron" and fill in the boxes that are presented...

The five boxes are: minute, hour, day of month, month, day of week.

To perform a daily backup at 2 am, enter: 0 2 * * *
To perform a weekly backup on Sunday at 2 am, enter: 0 2 * * 0
(Sun=0, Mon=1, Tue=2, etc.)
To perform a monthly backup on the 1st of each month at 2 am, enter: 0 2 1 * *
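
For reference, the same three schedules written as standard five-field crontab lines (minute, hour, day of month, month, day of week, then the command; /home/username/backup.sh is a placeholder for your own script path):

```
0 2 * * * /home/username/backup.sh     # daily at 2 am
0 2 * * 0 /home/username/backup.sh     # weekly, Sunday at 2 am
0 2 1 * * /home/username/backup.sh     # monthly, 1st at 2 am
```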

Finally, in the box labeled "command" you need to indicate the program which should run at the time and date you specify.  Type in this box the full path and name of the backup shell script you created above.

Please note:

  • Set the cron to perform the backup ONLY as frequently as you expect to download the backup.
  • Do NOT set the cron to perform the backup every day and then only download the backup once a month.  This wastes valuable CPU time for your website and for others.  Please keep this in mind; otherwise we will limit cron access on a case-by-case basis.
  • WARNING:  You can quickly have your account suspended by overusing cron jobs.  Do NOT set a cron job to run more often than every 30 minutes.  Even then, it could lead to your account's suspension due to overuse of the server.  Remember, you are on a shared server with other people; take this into consideration.

For more details and further explanation regarding Cron, please review our "Cron" help page.

Automated Solution

Download the World Wide Backup code from... (Under the Free Scripts Section)

You can configure it to do a number of different things and it works with cron - very useful and works like a charm.

Back Up Your MySQL Database - Option 1

You can back up your MySQL database using phpMyAdmin (via the Control Panel) and do a data and table dump.  You can also run this from the command line, or even create a cron job that does it for you once a week.  Then you can log in and download the dump when you're ready.
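
As a sketch of the command-line route, a single-database dump looks like this (dbuser, dbpassword, and mydatabase are placeholders for the credentials and database name from your CPanel; it needs a live MySQL server, so it is shown for illustration only):

```shell
# Dump one database's structure and data to a plain SQL file.
# --opt enables mysqldump's sensible defaults (locking, extended inserts, etc.).
mysqldump --opt -u dbuser -p'dbpassword' mydatabase > mydatabase.sql
```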

Back Up Your MySQL Database - Option 2

The idea of modifying the cron job listed above is excellent, as it automates your backup procedure.

Simply modify the script created above to...


#!/bin/sh
rm -f /home/username/backup.tar.gz
cd /home/username
mysqldump --opt -p{DBpassword} -u {DBusername} -B {database1} {database2} ... {databaseN} > database.mysql
tar -czf /home/username/backup.tar.gz /home/username

(the mysqldump command must all be on ONE line)

username = Your login ID.  Type "pwd" at the shell prompt after you log in for the exact path.
{DBpassword} and {DBusername} = The password and username you use for the databases (which were set up through the CPanel).
{database1} ... {databaseN} = The names of the databases you want to back up.  Use the CPanel to determine these.

"Note that if you run mysqldump without --quick or --opt, mysqldump will load the whole result set into memory before dumping the result. This will probably be a problem if you are dumping a big database." -MySQL Manual
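
For completeness, restoring from a dump like database.mysql is the reverse operation: feed the file back through the mysql client (again with your own credentials substituted, and again requiring a live server, so illustration only):

```shell
# Recreate the databases from the dump.  Because the dump was made with -B,
# it contains CREATE DATABASE/USE statements, so no database is named here.
mysql -u {DBusername} -p{DBpassword} < database.mysql
```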

Back Up Your MySQL Database - Option 3

The following is a direct copy from the MySQL Manual.  You can read the instructions directly on the MySQL site or read them below...

"4.4.1 Database Backups

Because MySQL tables are stored as files, it is easy to do a backup. To get a consistent backup, do a LOCK TABLES on the relevant tables followed by FLUSH TABLES for the tables. See section 6.7.2 LOCK TABLES/UNLOCK TABLES Syntax. See section 4.5.3 FLUSH Syntax. You only need a read lock; this allows other threads to continue to query the tables while you are making a copy of the files in the database directory. The FLUSH TABLES is needed to ensure that all active index pages are written to disk before you start the backup.

If you want to make a SQL level backup of a table, you can use SELECT INTO OUTFILE or BACKUP TABLE. See section 6.4.1 SELECT Syntax. See section 4.4.2 BACKUP TABLE Syntax.

Another way to back up a database is to use the mysqldump program or the mysqlhotcopy script. See section 4.8.5 mysqldump, Dumping Table Structure and Data. See section 4.8.6 mysqlhotcopy, Copying MySQL Databases and Tables.

Do a full backup of your databases:

shell> mysqldump --tab=/path/to/some/dir --opt --all
shell> mysqlhotcopy database /path/to/some/dir

You can also simply copy all table files (`*.frm', `*.MYD', and `*.MYI' files) as long as the server isn't updating anything. The script mysqlhotcopy does use this method.

Stop mysqld if it's running, then start it with the --log-update[=file_name] option. See section 4.9.3 The Update Log. The update log file(s) provide you with the information you need to replicate changes to the database that are made subsequent to the point at which you executed mysqldump.

If you have to restore something, try to recover your tables using REPAIR TABLE or myisamchk -r first. That should work in 99.9% of all cases. If myisamchk fails, try the following procedure (this will only work if you have started MySQL with --log-update, see section 4.9.3 The Update Log):

Restore the original mysqldump backup.

Execute the following command to re-run the updates in the binary log:
shell> mysqlbinlog hostname-bin.[0-9]* | mysql

If you are using the update log you can use:
shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql

ls is used to get all the update log files in the right order.

You can also do selective backups with SELECT * INTO OUTFILE 'file_name' FROM tbl_name and restore with LOAD DATA INFILE 'file_name' REPLACE ... To avoid duplicate records, you need a PRIMARY KEY or a UNIQUE key in the table. The REPLACE keyword causes old records to be replaced with new ones when a new record duplicates an old record on a unique key value.

If you get performance problems in making backups on your system, you can solve this by setting up replication and do the backups on the slave instead of on the master. See section 4.10.1 Introduction.

If you are using a Veritas filesystem, you can do:
From a client (or Perl), execute: FLUSH TABLES WITH READ LOCK.
From another shell, execute: mount vxfs snapshot.
From the first client, execute: UNLOCK TABLES.
Copy files from snapshot.
Unmount snapshot." -MySQL Manual


