
BorgBackup 1.0.0 Released (github.com)

An anonymous reader writes: After almost a year of development, bug fixing and cleanup, BorgBackup 1.0.0 has been released. BorgBackup is a fork of the Attic backup project — a deduplicating, compressing, encrypting and authenticating backup program for Linux, FreeBSD, Mac OS X and other unixoid operating systems (Windows may also work using Cygwin, but that is rather experimental/unsupported). It works on 32-bit as well as 64-bit platforms, on x86/x64 and ARM CPUs (possibly on others too, but these are the tested ones). For Linux, FreeBSD and Mac OS X there are single-file binaries which can just be copied onto a system and contain everything needed (Python, libraries, BorgBackup itself). Of course, it can also be installed from source. BorgBackup is FOSS (BSD license) and implemented in Python 3 (91%); speed-critical parts are in C or Cython (9%).
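For anyone who wants a feel for the command line, a minimal local workflow looks roughly like the following sketch (the repository path, archive name and retention numbers are just examples, not anything the release prescribes):

    # create an encrypted repository on a mounted backup disk
    borg init --encryption=repokey /mnt/backup/borg-repo

    # take a deduplicated, compressed snapshot of the home directory
    borg create --stats --compression lz4 \
        /mnt/backup/borg-repo::"home-$(date +%Y-%m-%d)" ~/

    # list archives and thin out old ones
    borg list /mnt/backup/borg-repo
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo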
  • by Anonymous Coward

    Couldn't all this be done with some shell scripts?

    • by Anonymous Coward

      Couldn't all this be done with some shell scripts?

      Block-level deduplication, incremental backups and FUSE mounting of the repository? Good luck with that.
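      The FUSE part in particular is the bit that is genuinely painful to reproduce with shell scripts: borg can expose any archive as a read-only filesystem. A rough sketch (repository path, archive name and mount point are placeholders):

          borg mount /mnt/backup/borg-repo::home-2016-03-06 /mnt/restore
          # browse /mnt/restore like any read-only directory tree, copy out what you need
          fusermount -u /mnt/restore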

  • ...will be assimilated!

    Not for me, thank you. Plain old rsync works great.
    • Either that or it will play tennis really well.
    • Plain old rsync works great

      Sorry, rsync has a number of issues that make it unacceptable (IMO) for backup. Among other reasons, it doesn't preserve metadata, and the rsync people think that's the correct behavior.

      Don't get me wrong: I use rsync all the time, but never for backup.

      • Owner, group, mode, ACLs, xattrs, times... What metadata doesn't it back up? And what are the other issues? Genuinely interested, as I do use it for hardlink incremental backups all the time.
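        For what it's worth, plain -a does not carry ACLs or xattrs; something like this (the destination path is only an example) gets much closer to a faithful copy:

            rsync -aHAX --numeric-ids --delete /home/ /mnt/backup/home/
            # -a covers owner, group, mode, times, symlinks and devices
            # -H preserves hard links, -A ACLs, -X extended attributes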
  • by mlts ( 1038732 ) on Sunday March 06, 2016 @01:17PM (#51648873)

    There is a definite place for Borgbackup, attic, bup, obnam, zbackup and other deduplicating backup utilities. The ability to just toss data at them whenever you feel like it and have only the deltas get saved (after being compressed) is a nice thing. Same with having decent encryption.

    I personally have been using zbackup for a while, and it is quite usable for backups, especially via SSH, where it can SSH into my NAS, fetch data, and store only what has changed to some media I rotate out for safekeeping. Zbackup has not had much Git activity, but Borgbackup has had an extreme amount of work done on it, so it is definitely a utility to watch and consider using.
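    For comparison, borg works the other way around: you push to a repository reachable over SSH (borg needs to be installed on the remote side as well). A rough sketch with a hypothetical NAS host and paths:

        # one-time repository setup on the NAS
        borg init --encryption=repokey user@nas:/srv/backups/borg-repo

        # afterwards, only changed (and compressed) chunks go over the wire
        borg create --stats user@nas:/srv/backups/borg-repo::"data-$(date +%F)" /data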

    • by sr180 ( 700526 )

      I've been using BackupPC with compression and deduplication for well over 10 years now. Current pool stats show 35TB of backups compressing down to 4TB in my pool.

  • Since data capacity has outpaced data rate by many orders of magnitude, anyone trying to maintain terabytes of data can find himself in an awkward situation where the time to create a backup exceeds the desired backup interval. Real-time mirroring or other fault tolerance scheme might become the only reasonable solution to data assurance. If very large numbers of files are involved and an ongoing change log isn't maintained by the file system, then even incremental or differential backups become a time-consuming ...

    • by Anonymous Coward
      Real-time mirroring or other fault tolerance scheme might become the only reasonable solution to data assurance.

      They're not reasonable solutions. An accidental "rm -rf /" will see its effects rippled into the "backup" in real time.
      • by macraig ( 621737 )

        Then put in safeguards against careless, foolhardy use. What part of the chasm between capacity and rate didn't you grasp? Don't count on solid-state storage to be the messiah, either: we now have a 15TB SSD that again provides a disproportionately smaller increment in rate. Rate will never catch up to capacity.

      • by tlhIngan ( 30335 )

        Real-time mirroring or other fault tolerance scheme might become the only reasonable solution to data assurance.

        They're not reasonable solutions. An accidental "rm -rf /" will see its effects rippled into the "backup" in real time.

        And what's wrong with that? Because if you wanted to undo that, you just use the backup of the filesystem as it was before you executed the fateful command.

        Are we stuck in the thinking that we can only have one complete copy of something? Must suck for daily backups where ...

    • Mirroring only protects against hardware failure. If you have a software error, you get hacked, the OS decides to implode that day, etc., the damage will be on both drives at the same time. My boss found that one out.

      He was in the "drive mirror is the best backup" camp; I'm firmly in the "sync and unplug a USB HD" camp. His version of Windows deep-sixed itself, his drives were past recovery with common tools, and he had to start over. He lost all his business e-mail and a lot of important documents; thankfully, he ...
      • by macraig ( 621737 )

        I find it frustrating that the only cost-effective way to back up a current hard disk is via another hard disk.

        That at least hints at the problem. RATE MATTERS. If one has a 20TB media server at home, how long does it take to simply make a non-incremental disk-to-disk backup? Don't count on solid-state storage to be the messiah, either: we now have a 15TB SSD that again provides a disproportionately smaller increment in rate. Rate will never catch up to capacity. That is the problem. If the chasm ...
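        Back-of-the-envelope, assuming roughly 150 MB/s of sustained sequential throughput from a single spinning disk:

            20 TB ≈ 20,000,000 MB
            20,000,000 MB / 150 MB/s ≈ 133,000 s ≈ 37 hours

        and that is the best case: no seeks, no piles of small files, and both ends keeping up.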

      • by epine ( 68316 )

        mirroring only protects against hardware failure

        Wrong. A COW mirror with automatic snapshots protects against many other scenarios, and most (but not all) hardware failures. A COW mirror with frequent scheduled incremental snapshot replication to a remote location protects against just about everything, with no USB drives involved.

        Unfortunately, COW mirrors won't win any write-performance benchmarks against XFS, as the internal write path tends to be far more complex. But seriously, use the Volvo for 90% ...
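        As a concrete sketch of that kind of setup with ZFS (pool, dataset and host names are hypothetical):

            # take a read-only snapshot on the primary pool
            zfs snapshot tank/data@2016-03-06

            # replicate only the blocks changed since the previous snapshot
            zfs send -i tank/data@2016-03-05 tank/data@2016-03-06 | \
                ssh backuphost zfs receive -F backup/data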

    • For this scenario, there are continuous backup systems. Any change made to the file system is also written to the backup system. The backup system can then build virtual full backups offline.
    • The only solution for that is actual block-level backups. This requires file system support for snapshots and/or for indicating which blocks have changed. It still takes a week or so to take the initial backup, but after that you could take a snapshot every 5 minutes and replicate just the changes across multiple volumes. If your ingress of data exceeds your egress capacity to a backup, however, then you need to rethink your architecture.

      I have a system that can take hours just to traverse and read the metadata ...
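      For what it's worth, btrfs is one example of the filesystem support mentioned above, since it can hand you just the changed blocks between two read-only snapshots (subvolume and host names here are made up):

          btrfs subvolume snapshot -r /data /data/.snap/12-00
          btrfs send -p /data/.snap/11-55 /data/.snap/12-00 | \
              ssh backuphost btrfs receive /backup/data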

      • by jabuzz ( 182671 )

        I suggest that you get a better filesystem then. IBM's Spectrum Scale (aka GPFS), while it admittedly costs money, would take minutes to traverse the metadata of even hundreds of millions of files.

        • by guruevi ( 827432 )

          Traversing metadata is not my primary concern. It only happens once every few months, when someone forgot where they put their files, and it's usually deducible in other ways. There are better ways to spend my money than expensive software, though; I think a license for this alone would cost as much as the storage array itself. If I wanted to spend that much money, I'd just invest in all-flash storage.

  • A few weeks ago I cut the cord and migrated away from Windows to Linux (Mint). I was using SyncBack to back up my files; now I need to find a new solution.

    I'm on my 5th package, because the first 4 were screwy in various ways. The default backup tool doesn't save profiles, so you have to type in the source and destination every goddamn time. (But when you do, it *does* work.)

    "BackInTime" apparently allows multiple profiles, so I created a profile and hit "close" and got the error "default profile source directory ..."

    • by Anonymous Coward

      Take a close look at 'rsync' and then write a script that uses it to do what you want.

      It even allows you to do versioned backups once you understand how to use the '--link-dest=' option properly.
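      A minimal sketch of that approach (paths are only examples; on the very first run there is nothing to link against, which rsync tolerates with a warning):

          #!/bin/sh
          # Versioned backups via hard links: files unchanged since the last run
          # are linked to it instead of being copied again.
          SRC="$HOME/"
          DEST="/mnt/backup"
          NEW="$DEST/$(date +%Y-%m-%d_%H%M)"

          rsync -aHAX --delete --link-dest="$DEST/latest" "$SRC" "$NEW" \
              && ln -sfn "$NEW" "$DEST/latest"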

  • by short ( 66530 ) on Sunday March 06, 2016 @03:42PM (#51649513) Homepage
    How does it compare to duplicity [nongnu.org]?
  • by rklrkl ( 554527 ) on Sunday March 06, 2016 @05:49PM (#51650079) Homepage

    This might be fine where you've got a single Linux machine and, say, back up to an external USB3 hard drive, but what about bigger setups than this? For example, multiple Windows/Linux client machines to back up and a central server with an autoloader/barcoded Ultrium tape drive attached? There are very few open source solutions that deal with this in a heterogeneous environment (Amanda - which is poor with Windows clients - and Bacula - which is ridiculously complex to set up - are just about the only two that spring to mind). Until BorgBackup can do something similar, it's not really useful in a multi-machine/autoloader setup (no, I don't want to install two backup systems on every client...).

  • I have been using Borg backup for a few months. I absolutely love it. Before borg, I had a nightmare backup scheme. I have a lot of data, and I cannot back up all of it every week. It would require too much storage. I got a little taste of deduplicated backup with the backup tool Microsoft includes in Windows Server 2012. I was immediately hooked. But it has severe limitations. I wanted a very flexible backup program that did deduplication well. In my opinion, there is nothing else that even comes close ...

  • BorgBackup 1.0.0, so is that Locutus? When will version 7.9 be released?
