Introduction

BackupPC_ovz is a script that adds OpenVZ integration to BackupPC. BackupPC has no problems backing up an OpenVZ (ovz) Hardware Node (HN) or an ovz Virtual Environment (VE), but by making BackupPC aware of ovz's internals, the backup of VEs can be made far more efficient.

BackupPC_ovz adds the following capabilities to BackupPC:

* VE backups are taken from a snapshot of the VE's filesystem after the VE has been shut down. This guarantees that the filesystem data are in a consistent state without requiring application-specific backup or pre-backup processing activities.
* The VE is shut down only long enough to snapshot its filesystem, then is automatically restarted. Typical VE downtime is about 30 seconds, depending upon how much application-dependent processing occurs at shutdown and startup.
* Both the rsync and tar BackupPC XferMethods are supported. Because the backup and restore agent processes actually run on the HN hosting the VE, direct restore from BackupPC's web interface can be used to do a 'bare metal' recovery of the VE.
* Any time the VE's /etc directory is backed up, the backup will add an /etc/vzdump directory containing the VE's configuration on the HN, notably the $VEID.conf file.
* The VE is configured as if it were any other server to be backed up, with the notable addition of the BackupPC_ovz command to its client backup and restore commands.
* Although VE backups are actually performed by the HN, BackupPC_ovz determines the VE <-> HN mapping just before each backup run, eliminating any static mapping requirement in the BackupPC configuration. It is acceptable to periodically rebalance VEs using the ovz vzmigrate utility, as BackupPC_ovz will correctly locate a moved VE at the next backup.

Requirements

BackupPC_ovz requires that the HN be set up correctly, as if it were itself a server backed up by BackupPC.
Specifically, this means a recent version of rsync (we currently use 3.0.0pre6) and an ssh public key installed into the HN root user's .ssh/authorized_keys2 file. The companion private key, as usual, belongs to the backuppc user on the BackupPC server. Additionally, BackupPC_ovz requires that the private storage area, $VE_PRIVATE, for the VE to be backed up exists on a filesystem hosted on an LVM logical volume (LV). There are no restrictions imposed by BackupPC_ovz on the filesystem used, as long as it is mountable by the HN, which by definition it must be.

Limitations

BackupPC_ovz imposes certain limitations. The primary one is that only a single VE backup may run on a given HN at any time. Other VE backups attempting to run while an existing VE backup is in progress will error out, and BackupPC will fail the backup, indicating an inability to retrieve the file list. This is not a catastrophic problem, as BackupPC will reschedule another backup attempt at a later time. The reason for this limitation is primarily to simplify the first releases of BackupPC_ovz. It would be possible to extend BackupPC_ovz to remove this limitation; however, this would only be useful in environments running very large HNs, one or more BackupPC servers, and gigabit or higher network speeds.

BackupPC_ovz uses LVM2, which must be installed on each HN. All VE private storage areas must be on filesystem(s) hosted on LVM LVs. Each HN must have perl installed, including the Proc::PID::File module, which on Ubuntu is installed via the libproc-pid-file-perl apt package.

VE host names in BackupPC must equate to the exact host name returned for the VE's primary IP address by the DNS server that BackupPC and all HNs use. In other words, "host <name>" returns an IP address, and "host <IPaddr>" returns <name>. It is exactly that <name> that must be used as the VE host name in BackupPC.
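The DNS round trip described above can be checked from the BackupPC server before adding the VE as a host. This is an illustrative sketch only; "ve1.example.com" is a hypothetical VE name, and the awk patterns assume the output format of the common BIND `host` utility:

```shell
# Forward lookup: VE name -> primary IP address
ip=$(host ve1.example.com | awk '/has address/ {print $4; exit}')

# Reverse lookup: IP address -> name (strip the trailing dot from the PTR)
name=$(host "$ip" | awk '/domain name pointer/ {sub(/\.$/, "", $5); print $5; exit}')

echo "forward: ve1.example.com -> $ip"
echo "reverse: $ip -> $name"
# $name is the exact string to use as the VE host name in BackupPC.
```

If the reverse lookup returns a different name than the one you started with, use the reverse-lookup result as the BackupPC host name.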
In our environment, DNS returns fully qualified host names, so the hosts in our BackupPC configuration are named with their fully qualified domain names.

Installation

To install BackupPC_ovz:

* Install BackupPC as normal.
* Configure all HNs as Hosts (backup/restore targets) in BackupPC per normal BackupPC instructions. Until the HNs can be successfully backed up and restored, these operations cannot be successfully completed on any VEs. We recommend the rsync or tar XferMethods, using ssh as a transport. Only the rsync method has been tested at this time.
* Install a recent version of rsync (we use 3.0.0pre6+) into each VE. Note that a recent version of rsync is also required to successfully perform online migration, a very useful ovz function.
* Install BackupPC_ovz into /usr/bin of each HN. The owner should be root, the group root, and the file permissions 0755.
* Create the first VE Host in BackupPC. Set its XferMethod to rsync (or tar). Three changes to the host-specific configuration are required to use BackupPC_ovz:
  - On the Backup Settings page, set the PreDumpUserCommand to: /usr/bin/BackupPC_ovz refresh
  - On the Xfer page, add the following to the beginning of the RsyncClientCmd field. Do not change what is already present in that field: /usr/bin/BackupPC_ovz server
  - On the Xfer page, add the following to the beginning of the RsyncClientRestoreCmd field. Do not change what is already present in that field: /usr/bin/BackupPC_ovz server restore
* To add subsequent VEs to BackupPC, add each new VE using BackupPC's NEWHOST=COPYHOST mechanism, as documented on the Edit Hosts page. This will automatically copy the modifications made for an existing VE host into the new VE host.

Using BackupPC with VEs

Once a VE has been added as a host to BackupPC, BackupPC will automatically schedule the first and each subsequent backup according to the defined backup schedule(s). Backups of a VE are no different, in terms of BackupPC usage, than those of any other host.
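The HN-side installation step above can be sketched as the following commands, run from the BackupPC server. This is a hedged example: "hn1.example.com" is a hypothetical HN, and it assumes the backuppc user's ssh key is already installed for root on the HN as described under Requirements:

```shell
# Copy BackupPC_ovz to the HN and set the ownership/permissions given above
scp BackupPC_ovz root@hn1.example.com:/usr/bin/BackupPC_ovz
ssh root@hn1.example.com 'chown root:root /usr/bin/BackupPC_ovz && chmod 0755 /usr/bin/BackupPC_ovz'

# As the backuppc user, confirm passwordless ssh to the HN works and that
# a sufficiently recent rsync is present there
sudo -u backuppc ssh root@hn1.example.com rsync --version
```

If the last command prompts for a password, the ssh key setup from the Requirements section is not yet correct, and VE backups will fail.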
Restoring files and directories from BackupPC to a VE also works just as it would with a normal host. Using the BackupPC web interface, select a backup, select the desired files or directories, click Restore, then use the Direct Restore option, or whichever other option better suits your needs.

Special recovery features of VEs under BackupPC

Because BackupPC actually backs up and recovers VE data through the hosting HN, additional recovery features are available. For example, a VE can be recovered in its entirety, analogous to a 'bare metal' recovery of a physical server:

* Stop the VE to be fully recovered, if it is running.
* Using BackupPC, select all files and directories of the appropriate VE backup and use Direct Restore to restore everything.
* After the restore is complete, recover the ovz-specific VE configuration files from the VE's /etc/vzdump directory into the appropriate locations of the HN's /etc/vz/conf directory. There is nothing to do if these configuration files have not been changed (e.g., via vzctl set).
* Start the VE using ovz's vzctl utility.

The above strategy works well for restoring an existing VE to a prior state. With the rsync XferMethod, the restore is a delta recovery, dramatically reducing the recovery time.

What happens if we need to recover a VE where no existing version of the VE is running anywhere? Consider a disaster-recovery case where the HN hosting the VE melted and is completely unrecoverable. We can then use a process similar to the one above to recover the VE to another HN -- even one that has never hosted the VE before.

* Using BackupPC, select all files and directories of the appropriate VE backup and use Direct Restore to restore everything.
  - Restore NOT to the VE host, but to the HN host that will host the newly recovered VE.
  - In the Direct Restore dialog, select the appropriate HN filesystem location to restore the VE. For example, if recovering the VE with VEID 123, the recovery directory may be /var/lib/vz/private/123.
* Create an empty /var/lib/vz/root/123 directory on the HN.
* After the restore is complete, recover the ovz-specific VE configuration files from the VE's /etc/vzdump directory into the appropriate locations of the HN's /etc/vz/conf directory. There is nothing to do if these configuration files have not been changed (e.g., via vzctl set).
* Start the VE using ovz's vzctl utility.
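The HN-side portion of the disaster-recovery steps above can be sketched as follows, run as root on the replacement HN after the Direct Restore has completed. This is an illustrative example for a hypothetical VEID of 123; the configuration file name assumes the $VEID.conf naming mentioned in the Introduction, and your ovz paths may differ:

```shell
# Empty mount point for the recovered VE (its files were restored by
# BackupPC into /var/lib/vz/private/123)
mkdir -p /var/lib/vz/root/123

# Put the VE's ovz configuration, captured at backup time in /etc/vzdump,
# back where the HN expects it
cp /var/lib/vz/private/123/etc/vzdump/123.conf /etc/vz/conf/123.conf

# Bring the recovered VE up
vzctl start 123
```

If the VE's configuration was changed (vzctl set) after the backup was taken, review the copied .conf file before starting the VE.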