upon the amount of application-dependent processing that occurs at shutdown
and startup.
- * Both rsync and tar BackupPC XferMethods are supported. Because the backup
- and restore agent processes actually run on the HN hosting the VE, direct
- restore from BackupPC's web interface can be used to do a 'bare metal'
- recovery of the VE.
+ * Currently, only the rsync BackupPC XferMethod has been tested, but tar
+ should probably work. Because the backup and restore agent processes
+ actually run on the HN hosting the VE, direct restore from BackupPC's web
+ interface can be used to do a 'bare metal' recovery of the VE.
 * Any time the VE's /etc directory is backed up, the backup will add an
   /etc/vzdump directory containing the VE's configuration on the HN, notably
   its vzctl configuration file.
 * BackupPC_ovz determines which HN hosts a VE at backup time, so one can
   periodically rebalance VEs using the ovz vzmigrate utility, as BackupPC_ovz
   will correctly locate a moved VE at the next backup.
+ * BackupPC_ovz refreshConfig() copies itself to the known OpenVZ hardware
+ nodes. This means that BackupPC_ovz upgrades must happen on the BackupPC
+ server (in our case, an OpenVZ container/VE).
+
Requirements
BackupPC_ovz requires that the HN be set up correctly as if it were to be a
normal BackupPC backup client. Note that only one VE backup or restore can run
on a given HN at a time; an overlapping attempt will simply fail. This is not a
catastrophic problem, as BackupPC will reschedule another backup attempt at a
later time. The reason for this limitation is primarily to simplify the first
releases of BackupPC_ovz. It would be possible to extend BackupPC_ovz to
-remove this limitation. However, this would only be useful in environments
-running very large HNs, one or more BackupPC servers, and gigabit or higher
-network speeds.
+remove this limitation. For smaller environments, this limitation shouldn't
+pose a problem.
BackupPC_ovz uses LVM2, which must be installed on each HN. All VE private
storage areas must be on filesystem(s) hosted on LVM LVs.
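+A quick way to verify this on an HN is a check like the following (a sketch;
+it assumes the default /var/lib/vz private area and the standard LVM2 tools):
+
+    # Show which device backs the VE private storage area
+    df -h /var/lib/vz
+    # Confirm that the device is an LVM logical volume
+    lvs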
Installation
 * Install BackupPC as normal.
- * Configure all HNs as Hosts (backup/restore targets in BackupPC) per normal
- BackupPC instructions. Until the HNs can be succesfully backed up and
- restored, these operations cannot be successfully completed on any VEs. We
- reccommend the rsync or tar XferMethods, using ssh as a transport. Only
- the rsync method has been tested at this time.
-
- * Install a recent version of rsync (we use 3.0.0pre6+) into each VE. Note
- that a recent version of rsync is also required to successfully perform
- online migration, a very useful ovz function.
+ * Install a recent version of rsync (we use 3.0.0pre6+) on each HN and in
+   the BackupPC server's VE. A recent version of rsync is also required inside
+   any VE that may be online migrated via vzmigrate when shared storage is not
+   in use. (In other words, install rsync 3.0.0pre6+ on all HNs and inside
+   all VEs.) To verify the versions, see the sketch after the NOTE below.
+
+ NOTE: BackupPC doesn't have to be installed in a VE; it could be installed
+ on a separate server. Do not install BackupPC or any other application
+ directly onto an HN (see the BackupPC FAQs).
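+
+   To confirm the installed versions, something like this works (a sketch;
+   VE ID 123 is just an example):
+
+       # On each HN:
+       rsync --version | head -1
+       # Inside each VE, run from its HN:
+       vzctl exec 123 rsync --version | head -1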
+
+ * Configure all HNs as Hosts (backup/restore targets in BackupPC) per standard
+   BackupPC instructions and verify that backup and restore operations work
+   correctly. Until the HNs can be successfully backed up and restored, these
+   operations cannot be successfully completed on any VEs. We recommend the
+   rsync or tar XferMethods, using ssh as a transport. Only the rsync method
+   has been tested at this time.
+
+   - The BackupPC SSH FAQ contains instructions for installing an SSH key on
+     servers to be backed up by BackupPC. Because VEs are actually backed up
+     and restored through the context of the hardware node (HN), only the HNs
+     need to have the keys. Keys do NOT need to be installed in the VEs
+     themselves. A minimal sketch of this setup follows.
+
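+     For example, run as the backuppc user on the BackupPC server (a sketch,
+     using the example HN hostnames from the hnlist file below):
+
+         ssh-keygen -t rsa        # accept the defaults; use an empty passphrase
+         ssh-copy-id root@pe18001.mydomain.com
+         ssh-copy-id root@pe18002.mydomain.com
+         ssh-copy-id root@pe18003.mydomain.com
+         ssh -l root pe18001.mydomain.com whoami   # verify passwordless login
+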
+ * Create the file /etc/backuppc/BackupPC_ovz.hnlist. It should list the fully
+   qualified hostname or IP address of each HN, one per line. An example:
+
+ ---- /etc/backuppc/BackupPC_ovz.hnlist ----
+   # List the HNs by hostname or IP address in this file.
+ pe18001.mydomain.com
+ pe18002.mydomain.com
+ pe18003.mydomain.com
+ ---- end of file ----
* Install BackupPC_ovz into /usr/bin of each HN. The owner should be root,
the group root and file permissions 0755.
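+
+   For example (a sketch; adjust the source path to wherever the BackupPC_ovz
+   script was unpacked):
+
+       scp BackupPC_ovz root@pe18001.mydomain.com:/usr/bin/BackupPC_ovz
+       ssh root@pe18001.mydomain.com \
+           'chown root:root /usr/bin/BackupPC_ovz; chmod 0755 /usr/bin/BackupPC_ovz'
+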
 * Add the first VE to BackupPC as a host, then make the following per-host
   configuration changes in the BackupPC web interface to enable BackupPC_ovz:
- On the Backup Settings page, set the DumpPreUserCommand and
- RestorePreUserCommand fields to:
+ RestorePreUserCommand fields to contain:
/usr/bin/BackupPC_ovz refresh
- - On the Xfer page, add the following to the beginning of the RsyncClientCmd
- field. Do no change what is already present in that field:
+ - On the Xfer page, add:
/usr/bin/BackupPC_ovz server
+ to the beginning of the RsyncClientCmd field without altering the field's
+   contents in any other way. Our VEs have this data in the RsyncClientCmd
+ field:
+ /usr/bin/BackupPC_ovz server $host $sshPath -q -x -l root
+ $host $rsyncPath $argList+
- - On the Xfer page, add the following to the beginning of the
- RsyncClientRestoreCmd field. Do no change what is already present in
- that field:
+   - On the Xfer page, add:
/usr/bin/BackupPC_ovz server restore
-
- * To add subsequent VE's to BackupPC, add each new VE into BackupPC using its
- NEWHOST=COPYHOST mechanism,as documented on the Edit Hosts page. This will
+ to the beginning of the RsyncClientRestoreCmd field without altering the
+   field's contents in any other way. Our VEs have this data in the
+ RsyncClientRestoreCmd field:
+ /usr/bin/BackupPC_ovz server restore $host $sshPath -q -x -l root
+ $host $rsyncPath $argList+
+
+ * To add subsequent VE's to BackupPC, add each new VE into BackupPC using the
+ NEWHOST=COPYHOST mechanism, as documented on the Edit Hosts page. This will
automatically copy the modifications made for an existing VE host into a
new VE host.
Once a VE has been added as a host to BackupPC, BackupPC will automatically
schedule the first and each subsequent backup according to the defined backup
-schedule(s). Backups of a VE are no different in terms of BackupPC usage that
-any other host.
-
-Restoring files and directories from BackupPC to a VE also works just like it
-would with a normal host. Using the BackupPC web interface, select a backup,
-select the files or directories desired, click Restore, then use the Direct
-Restore option, or any other that better suits your needs.
+schedule(s). Backups of, and restores to, a running VE are no different in
+terms of BackupPC usage than any other host.
Special recovery features of VEs under BackupPC
-Because BackupPC actually backs up and recovers VE data using its hosted HN,
+Because BackupPC actually backs up and recovers VE data using its 'parent' HN,
additional recovery features are available. For example, a VE can be
recovered in its entirety, analogous to a 'bare metal' recovery of a physical
server:
- * Stop the VE to be fully recovered, if it is running.
+ * Stop the VE to be fully recovered using vzctl, if it is running.
* Using BackupPC, select all files and directories of the appropriate VE
backup and use Direct Restore to restore everything.
+ - Restore NOT to the VE host, but to the HN host that will host the newly
+ recovered VE.
+ - In the Direct Restore dialog, select the appropriate HN filesystem
+ location to restore the VE. For example, if VE 123 has its private data
+ at /var/lib/vz/private/123 on the HN, then the recovery directory would be
+ /var/lib/vz/private/123.
* After the restore is complete, recover the ovz-specific VE configuration
files from the VE's /etc/vzdump directory into the appropriate locations
- of the HN's /etc/vz/conf directory. There is nothing to do if these
- configuration files have not been changed (aka vzctl set).
+   of the HN's /etc/vz/conf directory. This is only required if the
+   configuration file(s) have changed (e.g., via vzctl set).
 * Start the VE using ovz's vzctl utility.
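+   For example, for VE 123 (a sketch; the exact name of the saved config file
+   under /etc/vzdump may differ in your backups):
+
+       cp /var/lib/vz/private/123/etc/vzdump/123.conf /etc/vz/conf/123.conf
+       vzctl start 123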
-The above strategy works great to restore an existing VE to a prior state.
-Using the rsync xfer method for recovery, a delta recovery is performed,
-dramatically reducing the recovery time.
+The above strategy works great to restore an existing VE to a prior state, as
+the rsync xfer method will not overwrite files that are the same, reducing I/O
+and therefore recovery time.
What happens if we need to recover a VE where no existing version of the VE
is running anywhere? Consider a disaster recovery case where the HN hosting
-the VE melted and is completely unrecoverable. We can then use a similar
-process as above to recover the VE to another HN -- even one that had never
+the VE melted and is completely unrecoverable. We then use the same
+process as above to recover the VE to an HN -- even one that might never have
hosted the VE before.
 * Using BackupPC, select all files and directories of the appropriate VE
   backup and use Direct Restore to restore everything to the new HN (for
   VE 123, to /var/lib/vz/private/123, as above).
* Create an empty /var/lib/vz/root/123 directory on the HN.
* After the restore is complete, recover the ovz-specific VE configuration
files from the VE's /etc/vzdump directory into the appropriate locations
- of the HN's /etc/vz/conf directory. There is nothing to do if these
- configuration files have not been changed (aka vzctl set).
+ of the HN's /etc/vz/conf directory.
 * Start the VE using ovz's vzctl utility.
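+
+Put together, the HN-side steps look roughly like this for VE 123 (a sketch;
+paths assume the default OpenVZ layout):
+
+    mkdir -p /var/lib/vz/private/123 /var/lib/vz/root/123
+    # ...perform the Direct Restore from the BackupPC web interface,
+    # targeting /var/lib/vz/private/123 on this HN...
+    cp /var/lib/vz/private/123/etc/vzdump/123.conf /etc/vz/conf/123.conf
+    vzctl start 123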
+
+Configurations Tested:
+
+ * Twin Dell PowerEdge 1800 servers (the HNs).
+   Running a minimal Ubuntu Gutsy server OS.
+   RAID disks running under LVM2.
+   XFS filesystems for the VE private areas.
+ * BackupPC running as a VE on one of the PE1800s.
+   Running version 3.0.0-ubuntu2.
+ * A number of other VEs distributed between the two PE1800s.
+ * VEs running various OSes: Mandrake 2006, Ubuntu Feisty, Ubuntu Gutsy.
+ * VE backups and restores using the rsync method.
+ * All VEs and HNs have rsync 3.0.0pre6 or newer installed.
+