I recently went through the process of moving my entire 1.5 TB BackupPC tree to a new drive.  Here are some thoughts and comments from that experience.

  • I spent 40 days (literally) attempting to get various combinations of rsync, tar, cp, etc. to clone the contents of the drive to a new, larger drive.  However, the bazillions of small files hard-linked into a pool of randomly named actual files made this practically impossible to do in any reasonable amount of time.
  • In the end I used the unix utility ‘dd’ for the fastest possible copying.
  • In order to clone to a larger drive I first ran resize2fs to make the target file system match the size of the source file system.  Then I could do a direct dd copy of the source disk, and finally I resized the file system up to consume the full physical space after the clone was complete.
  • Make sure to add “conv=noerror,sync” to your “dd” options to avoid your transfer dying if it hits a bad block on the source disk (perhaps a dying drive is among the reasons you are transferring to a new drive?)
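
For orientation, here is a minimal sketch of that sequence, using hypothetical volume names as placeholders (the detailed walkthrough follows below):

    # block-for-block copy; conv=noerror,sync keeps dd going past bad blocks on the source
    dd if=/dev/mapper/source_lv of=/dev/mapper/dest_lv bs=100M conv=noerror,sync
    # verify the copy, then extend the volume and grow the file system into the new space
    e2fsck -f -y /dev/mapper/dest_lv
    lvextend -l +100%FREE /dev/mapper/dest_lv
    resize2fs /dev/mapper/dest_lv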

Reasons to clone your BackupPC drive (or any drive)

  • Your data is beginning to outgrow your available space.
  • The drive is starting to fail (showing SMART sector errors, read errors, etc.)
  • Backup/redundancy
  • Prepare a copy of your data for offsite storage (extra safety for your important data, like your lifelong collection of digital photos …)

Cloning a BackupPC storage tree (or any other file system structure that is too big/complex for rsync/tar/cp to handle efficiently)

  1. Physically attach the destination drive and create physical and logical volumes. “system-config-lvm” is a GUI tool that can help do this; otherwise there is a myriad of “pv…” and “lv…” command line tools if you wish to go that route.
  2. Make (or resize) the destination logical volume so its size matches the size of the source volume as closely as possible. I wasn’t able to get it exact, but I forged ahead anyway, and it appears that e2fsck and resize2fs were able to sort it all out properly after the fact.  Making your target volume just slightly larger is probably safer than making it slightly smaller. (Steps 1 and 2 are sketched on the command line after this list.)
  3. Make sure the destination volume is not mounted! If you have the option, also unmount the source volume. This isn’t absolutely required, but it avoids the risk of copying a drive in an inconsistent state, which could lead to the loss of files or data being written at the time of the dd.
  4. Run “dd if=/dev/mapper/source_lv of=/dev/mapper/dest_lv bs=100M conv=noerror,sync”.  I can’t say what the optimal block size (bs=xxx) value is. Make it too small and you waste time on endless trips back and forth between the drives; make it too big and you might start hitting swap. There may be a specific value that runs faster on your hardware than others, and it might not be the intuitive choice.
  5. “dd” produces no output unless there is an error, and on a 1 TB or larger drive it can literally run for many hours. You can find the PID of the dd process and run “kill -USR1 <pid>”, which signals dd to print a status message showing how much it has copied so far. With a few brain cells you can figure out how many total blocks there are to copy (at your specified block size) and get a rough estimate of when the copy will finish. (Steps 4 and 5 are sketched after this list.)
  6. After the “dd” command completes, run “e2fsck -f -y /dev/mapper/dest_lv”. If you were dd’ing a live drive, or the source drive had some bad/unreadable blocks, or you couldn’t make a destination volume of the exact size of the original, or … this will (or should) bring the destination volume into full consistency with itself.  The end result is pretty much the best possible copy you can get from your source drive.
  7. Now the beautiful part: either with system-config-lvm or with a CLI tool like “lvextend”, you can resize your logical volume to fill the entire available physical space. system-config-lvm will run e2fsck (again) and resize2fs in the background, so it may take some time. (Steps 6 and 7 are sketched after this list.)
  8. The GUI tools make things a bit ‘easier’, but they are silent in their output, so you don’t know what’s going on or how long your operation may take (seconds? hours? days?). The command line tools output useful information and can run in ‘verbose’ mode, so it may be worth pulling up their man pages and running them directly, depending on your level of interest and the time available.
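
For steps 1 and 2, the command-line route might look roughly like this. The device, volume group, and size values are placeholders you would replace with your own (and as noted above, erring slightly large is safer than slightly small):

    # step 1: put the new disk under LVM
    pvcreate /dev/sdb1
    vgcreate backup_vg /dev/sdb1

    # check the exact size of the source logical volume, in bytes
    blockdev --getsize64 /dev/mapper/source_lv

    # step 2: create the destination LV at (or just above) that size
    lvcreate -L 1500G -n dest_lv backup_vg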
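
For steps 4 and 5, the copy and the progress check could look like the following (volume names are placeholders, and the progress command assumes only one dd process is running):

    # step 4: block-for-block copy, tolerating read errors on the source
    dd if=/dev/mapper/source_lv of=/dev/mapper/dest_lv bs=100M conv=noerror,sync

    # step 5 (from a second terminal): ask the running dd to print its progress so far
    kill -USR1 "$(pgrep -x dd)"

    # total number of 100M blocks to expect, for a rough time estimate
    echo $(( $(blockdev --getsize64 /dev/mapper/source_lv) / (100 * 1024 * 1024) ))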
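
And for steps 6 and 7, the check-and-grow phase from the command line might be (again, paths are placeholders; “-l +100%FREE” tells lvextend to claim all remaining space in the volume group):

    # step 6: force a full check and repair of the copied file system
    e2fsck -f -y /dev/mapper/dest_lv

    # step 7: extend the logical volume over the remaining free space, then grow the file system
    lvextend -l +100%FREE /dev/mapper/dest_lv
    resize2fs /dev/mapper/dest_lv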

BackupPC specifics

  • I mount /dev/mapper/logical_volume someplace like /backuppc03 and then make /var/lib/BackupPC a symbolic link to “/backuppc03/BackupPC”.  So update this link to point to the destination drive, restart BackupPC, and you should be fully up and running on the destination drive … now hopefully with more space and a new, error-free drive. (A sketch of these commands follows after this list.)
  • If there were some drive read errors, hopefully they are few and fall in non-critical locations … ideally only corrupting some random unimportant file in some random unimportant backup that you will never need to restore.
  • If the drive is too far gone and your previous backups are too corrupted after the copy process, you may be better off just starting from scratch with a brand new BackupPC installation and begin accumulating your backup history from this point forward.
  • One more tip: GNOME (and probably other desktops) has software that can show you your hard drive’s SMART status. In GNOME this tool is called “Disks”. If the drive isn’t showing any SMART status, you may wish to double-check your BIOS settings (it’s possible to turn SMART off in the BIOS). It’s good to look at your drive status once in a while to make sure it isn’t starting to accumulate bad sectors. (A command-line alternative is sketched after this list.)
  • The drive I just cloned and replaced was up to 283 bad sectors, with about 6 unrecoverable bad-block read errors. It was running at about 45°C, which is pretty hot.
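
A rough sketch of the mount-and-relink step mentioned above, with the paths and service name as placeholders for your own setup:

    # mount the new volume and point BackupPC's data directory at it
    mkdir -p /backuppc03
    mount /dev/mapper/dest_lv /backuppc03
    ln -sfn /backuppc03/BackupPC /var/lib/BackupPC
    # restart the BackupPC service (the service name may differ on your distro)
    systemctl restart backuppc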
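
If you prefer the command line to the GNOME “Disks” tool, the smartmontools package reports the same SMART information; a minimal check (the device name is a placeholder) might be:

    # print SMART health, attributes (reallocated/pending sectors), and the error log
    smartctl -a /dev/sda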