catastrophic problem, as BackupPC will reschedule another backup attempt at a
later time. The reason for this limitation is primarily to simplify the first
releases of BackupPC_ovz. It would be possible to extend BackupPC_ovz to
-remove this limitation. However, this would only be useful in environments
-running very large HNs, one or more BackupPC servers, and gigabit or higher
-network speeds.
+remove this limitation. For smaller environments, this limitation shouldn't
+pose a problem.
BackupPC_ovz uses LVM2, which must be installed on each HN. All VE private
storage areas must be on filesystem(s) hosted on LVM LVs.
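+
+For illustration, a minimal sketch of such a layout (the disk, VG, and LV
+names and the size are hypothetical; we leave part of the VG unallocated on
+the assumption that room is needed for LVM snapshots at backup time):
+
+    pvcreate /dev/sdb                  # /dev/sdb is an unused disk on the HN
+    vgcreate vzvg /dev/sdb
+    lvcreate -L 100G -n vzlv vzvg      # size the LV below the VG's capacity
+    mkfs.xfs /dev/vzvg/vzlv            # XFS is the filesystem we tested
+    mount /dev/vzvg/vzlv /var/lib/vz   # parent of the VE private areas
+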
recommend the rsync or tar XferMethods, using ssh as a transport. Only
the rsync method has been tested at this time.
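+
+The equivalent setting in a host's config.pl, for those who prefer editing
+the file directly (a sketch; only rsync has been tested):
+
+    $Conf{XferMethod} = 'rsync';
+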
- * Install a recent version of rsync (we use 3.0.0pre6+) into each VE. Note
- that a recent version of rsync is also required to successfully perform
- online migration, a very useful ovz function.
+ - On the BackupPC SSH FAQ, there are instructions for installing an SSH
+ key on servers to be backed up by BackupPC. Because VEs are actually
+ backed up and restored through the context of the hardware node (HN),
+ only the HNs need to have the keys. They do NOT need to be installed
+    in the VEs themselves (see the sketch after this list).
+
+ * Install a recent version of rsync (we use 3.0.0pre6+) into each HN. Note
+ that a recent version of rsync is also required inside the VE to
+ successfully perform online migration without shared storage.
* Install BackupPC_ovz into /usr/bin of each HN. The owner should be root,
the group root and file permissions 0755.
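+
+Putting the SSH-key and rsync items above together, a sketch of the setup
+(the HN hostname is hypothetical; run these on the BackupPC server as the
+backuppc user):
+
+    ssh-keygen -t rsa                          # once, if no key exists yet
+    ssh-copy-id root@hn1.example.com           # each HN; never the VEs
+    ssh root@hn1.example.com rsync --version   # confirm a recent rsync (3.x)
+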
Three changes to the host-specific configuration are required to use
BackupPC_ovz (config.pl equivalents are sketched after this list):
- - On the Backup Settings page, set the PreDumpUserCommand to:
+ - On the Backup Settings page, set the DumpPreUserCommand and
+ RestorePreUserCommand fields to:
/usr/bin/BackupPC_ovz refresh
- On the Xfer page, add the following to the beginning of the RsyncClientCmd
- field. Do no change what is already present in that field:
+ field without altering the field's contents in any other way:
/usr/bin/BackupPC_ovz server
- On the Xfer page, add the following to the beginning of the
- RsyncClientRestoreCmd field. Do no change what is already present in
- that field:
+ RsyncClientRestoreCmd field without altering the field's contents in any
+ other way:
/usr/bin/BackupPC_ovz server restore
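+
+For reference, a sketch of the per-host config.pl equivalents of the three
+changes above (the trailing '$sshPath ... $argList+' portions are the stock
+BackupPC 3.x defaults, which your install may have customized):
+
+    $Conf{DumpPreUserCommand}    = '/usr/bin/BackupPC_ovz refresh';
+    $Conf{RestorePreUserCommand} = '/usr/bin/BackupPC_ovz refresh';
+    $Conf{RsyncClientCmd} =
+        '/usr/bin/BackupPC_ovz server $sshPath -q -x -l root $host $rsyncPath $argList+';
+    $Conf{RsyncClientRestoreCmd} =
+        '/usr/bin/BackupPC_ovz server restore $sshPath -q -x -l root $host $rsyncPath $argList+';
+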
- * To add subsequent VE's to BackupPC, add each new VE into BackupPC using its
- NEWHOST=COPYHOST mechanism,as documented on the Edit Hosts page. This will
+ * To add subsequent VE's to BackupPC, add each new VE into BackupPC using the
+ NEWHOST=COPYHOST mechanism, as documented on the Edit Hosts page. This will
automatically copy the modifications made for an existing VE host into a
new VE host.
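+
+For example, entering the following on the Edit Hosts page (both hostnames
+are hypothetical) creates ve23 with a copy of ve22's per-host configuration:
+
+    ve23.example.com=ve22.example.com
+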
Once a VE has been added as a host to BackupPC, BackupPC will automatically
schedule the first and each subsequent backup according to the defined backup
-schedule(s). Backups of a VE are no different in terms of BackupPC usage that
-any other host.
-
-Restoring files and directories from BackupPC to a VE also works just like it
-would with a normal host. Using the BackupPC web interface, select a backup,
-select the files or directories desired, click Restore, then use the Direct
-Restore option, or any other that better suits your needs.
+schedule(s). Backups of and restores to a running VE are no different in
+terms of BackupPC usage than any other host.
Special recovery features of VEs under BackupPC
-Because BackupPC actually backs up and recovers VE data using its hosted HN,
+Because BackupPC actually backs up and recovers VE data using its 'parent' HN,
additional recovery features are available. For example, a VE can be
recovered in its entirety, analogous to a 'bare metal' recovery of a physical
server (a command-level sketch appears after these steps):
- * Stop the VE to be fully recovered, if it is running.
- * Using BackupPC, select all files and directories of the appropriate VE
- backup and use Direct Restore to restore everything.
+ * Stop the VE to be fully recovered using vzctl, if it is running.
+ * Using BackupPC, do a Direct Restore of the desired backup to the VE.
* After the restore is complete, recover the ovz-specific VE configuration
files from the VE's /etc/vzdump directory into the appropriate locations
of the HN's /etc/vz/conf directory. There is nothing to do if these
* Create an empty /var/lib/vz/root/123 directory on the HN.
* After the restore is complete, recover the ovz-specific VE configuration
files from the VE's /etc/vzdump directory into the appropriate locations
- of the HN's /etc/vz/conf directory. There is nothing to do if these
- configuration files have not been changed (aka vzctl set).
+ of the HN's /etc/vz/conf directory.
 * Start the VE using ovz's vzctl utility.
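+
+A command-level sketch of the HN-side steps above, for a VE with VEID 123 as
+in the example (the exact configuration file names under the VE's /etc/vzdump
+directory may vary, so check them before copying):
+
+    vzctl stop 123                     # skip if the VE is not running
+    # ...perform the Direct Restore from the BackupPC web interface...
+    # vps.conf is the vzdump-style name; verify what BackupPC_ovz wrote:
+    cp /var/lib/vz/private/123/etc/vzdump/vps.conf /etc/vz/conf/123.conf
+    mkdir -p /var/lib/vz/root/123      # only if it does not already exist
+    vzctl start 123
+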
+
+Configurations Tested:
+
+ * Twin Dell PowerEdge 1800 servers (the HNs).
+ Running a minimal Ubuntu Gutsy server OS.
+   RAID disks running under LVM2.
+ XFS filesystem for VE private areas.
+ * BackupPC running as a VE on one of the PE1800's.
+ * A number of other VE's distributed between the two PE1800's.
+ * VEs running various OS's: Mandrake 2006, Ubuntu Feisty, Ubuntu Gutsy.
+ * VE backups and restores using the rsync method.
+ * All VEs and HNs have rsync 3.0.0pre6 or newer installed.
+