catastrophic problem, as BackupPC will reschedule another backup attempt at a
later time. The reason for this limitation is primarily to simplify the first
releases of BackupPC_ovz. It would be possible to extend BackupPC_ovz to
-remove this limitation. However, this would only be useful in environments
-running very large HNs, one or more BackupPC servers, and gigabit or higher
-network speeds.
+remove this limitation. For smaller environments, this limitation shouldn't
+pose a problem.
BackupPC_ovz uses LVM2, which must be installed on each HN. All VE private
storage areas must be on filesystem(s) hosted on LVM LVs.
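As an illustrative sketch, a dedicated LV for VE private storage might be set
up as follows (the volume group name 'vz', the size, and the ext3 filesystem
are assumptions; adjust for your environment):

    # Assumes an existing volume group named 'vz' with free extents.
    lvcreate -L 100G -n private vz
    mkfs.ext3 /dev/vz/private
    mount /dev/vz/private /var/lib/vz/private
    echo '/dev/vz/private /var/lib/vz/private ext3 defaults 0 2' >> /etc/fstab

Leave unallocated extents in the volume group: LVM snapshots, which
BackupPC_ovz takes while backing up, need free space in the VG to hold
copy-on-write data.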
We recommend the rsync or tar XferMethods, using ssh as a transport. Only
the rsync method has been tested at this time.
- * Install a recent version of rsync (we use 3.0.0pre6+) into each VE. Note
- that a recent version of rsync is also required to successfully perform
- online migration, a very useful ovz function.
+ * Install a recent version of rsync (we use 3.0.0pre6+) into each HN. Note
+ that a recent version of rsync is also required inside the VE to
+ successfully perform online migration without shared storage.
* Install BackupPC_ovz into /usr/bin of each HN. The owner should be root,
the group root and file permissions 0755.
/usr/bin/BackupPC_ovz refresh
On the Xfer page, add the following to the beginning of the RsyncClientCmd
- field. Do no change what is already present in that field:
+ field without altering the field's contents in any other way:
/usr/bin/BackupPC_ovz server
On the Xfer page, add the following to the beginning of the
- RsyncClientRestoreCmd field. Do no change what is already present in
- that field:
+ RsyncClientRestoreCmd field without altering the field's contents in any
+ other way:
/usr/bin/BackupPC_ovz server restore
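After both edits, the per-host Xfer settings would look roughly like this
(the trailing portion of each value is BackupPC's stock default and may
differ in your installation):

    $Conf{RsyncClientCmd} =
        '/usr/bin/BackupPC_ovz server $sshPath -q -x -l root $host $rsyncPath $argList+';
    $Conf{RsyncClientRestoreCmd} =
        '/usr/bin/BackupPC_ovz server restore $sshPath -q -x -l root $host $rsyncPath $argList+';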
- * To add subsequent VE's to BackupPC, add each new VE into BackupPC using its
- NEWHOST=COPYHOST mechanism,as documented on the Edit Hosts page. This will
+ * To add subsequent VE's to BackupPC, add each new VE into BackupPC using the
+ NEWHOST=COPYHOST mechanism, as documented on the Edit Hosts page. This will
automatically copy the modifications made for an existing VE host into a
new VE host.
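For example, if a container named ve101 is already configured in BackupPC, a
new container could be added on the Edit Hosts page with an entry like the
following (both host names here are assumptions):

    ve102=ve101

BackupPC then copies ve101's per-host settings, including the modified
RsyncClientCmd and RsyncClientRestoreCmd values, into the new ve102 host.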
Once a VE has been added as a host to BackupPC, BackupPC will automatically
schedule the first and each subsequent backup according to the defined backup
-schedule(s). Backups of a VE are no different in terms of BackupPC usage that
-any other host.
-
-Restoring files and directories from BackupPC to a VE also works just like it
-would with a normal host. Using the BackupPC web interface, select a backup,
-select the files or directories desired, click Restore, then use the Direct
-Restore option, or any other that better suits your needs.
+schedule(s). Backups of, and restores to, a running VE are no different in
+terms of BackupPC usage than any other host.
Special recovery features of VEs under BackupPC
-Because BackupPC actually backs up and recovers VE data using its hosted HN,
+Because BackupPC actually backs up and recovers VE data using its 'parent' HN,
additional recovery features are available. For example, a VE can be
recovered in its entirety, analogous to a 'bare metal' recovery of a physical
server:
- * Stop the VE to be fully recovered, if it is running.
- * Using BackupPC, select all files and directories of the appropriate VE
- backup and use Direct Restore to restore everything.
+ * Stop the VE to be fully recovered using vzctl, if it is running.
+ * Using BackupPC, do a Direct Restore of the desired backup to the VE.
* After the restore is complete, recover the ovz-specific VE configuration
files from the VE's /etc/vzdump directory into the appropriate locations
of the HN's /etc/vz/conf directory.
* Create an empty /var/lib/vz/root/123 directory on the HN.
* After the restore is complete, recover the ovz-specific VE configuration
files from the VE's /etc/vzdump directory into the appropriate locations
- of the HN's /etc/vz/conf directory. There is nothing to do if these
- configuration files have not been changed (aka vzctl set).
+ of the HN's /etc/vz/conf directory.
* Start the VE using ovz's vzctl utility.
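Putting the steps together, a full recovery of an assumed VE 123 might look
like the following transcript (the restore itself is performed from the
BackupPC web interface; the exact file name saved under /etc/vzdump is an
assumption here, so check what BackupPC_ovz actually wrote there):

    vzctl stop 123
    # ...perform the Direct Restore of the full backup from the
    # BackupPC web interface...
    cp /var/lib/vz/private/123/etc/vzdump/123.conf /etc/vz/conf/123.conf
    vzctl start 123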