BorgBackup 1.0.0 Released (github.com)
An anonymous reader writes: After almost a year of development, bug fixing and cleanup, BorgBackup 1.0.0 has been released. BorgBackup is a fork of the Attic-Backup project: a deduplicating, compressing, encrypting and authenticating backup program for Linux, FreeBSD, Mac OS X and other unixoid operating systems (Windows may also work using Cygwin, but that is rather experimental/unsupported).
It works on 32-bit as well as 64-bit platforms, on x86/x64 and ARM CPUs (it may work on others as well, but these are the tested ones).
For Linux, FreeBSD and Mac OS X, there are single-file binaries which can simply be copied onto a system and contain everything needed (Python, libraries, BorgBackup itself). Of course, it can also be installed from source.
BorgBackup is FOSS (BSD license) and is implemented in Python 3 (91%); speed-critical parts are written in C or Cython (9%).
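For the curious, here is a minimal sketch of what first use of the 1.0 command line looks like; the repository path and archive name are invented for illustration:

    # create an encrypted repository (key stored inside the repo, protected by a passphrase)
    borg init --encryption=repokey /mnt/backup/borg-repo

    # make a compressed, deduplicated archive of a home directory
    borg create --stats --compression lz4 /mnt/backup/borg-repo::home-2016-04-01 ~/

    # list archives, and restore one later
    borg list /mnt/backup/borg-repo
    borg extract /mnt/backup/borg-repo::home-2016-04-01

Subsequent runs of borg create only store chunks that are new relative to earlier archives, which is where the deduplication pays off.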
Python and C? (Score:1)
Couldn't all this be done with some shell scripts?
Re: (Score:1)
Couldn't all this be done with some shell scripts?
Block-level deduplication, incremental backups and FUSE mounting of the repository? Good luck with that.
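For reference, the FUSE mounting mentioned above looks roughly like this in borg 1.0 (paths are illustrative, and the llfuse Python package has to be installed):

    # browse a finished archive as a read-only filesystem
    borg mount /mnt/backup/borg-repo::home-2016-04-01 /tmp/borg-view
    ls /tmp/borg-view
    fusermount -u /tmp/borg-view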
Re: (Score:2)
I read this in the firehose yesterday.
Damn .. in my anger I missed seeing it in the firehose today. But there are still a lot of better choices of stories to post.
Re: (Score:2)
In addition "once in a while" does not seem to be very reassuring :) I have already paid a large sum for hard disk recovery service.
Since then I am a follower of the Tao of Backup [taobackup.com]/
Re: (Score:2)
A long time ago, I lost data. I lost a lot of data. At the time, I lost data that was more than ten years old. That was back around 2000. I do not lose data now. I have not lost meaningful data since. I will never lose data again.
I do not even store much data locally. Even when I am not home, it is pushed out to my home network. From there, it is mirrored and pushed out to disparate physical locations. I have hardware at friends' houses. I have hardware at other property that I own. I have hardware in my
Re: (Score:2)
Perhaps for a home user that's fine. But where you have to keep long-term archives and you're managing multiple versions of files over time, cp -R doesn't quite cut the mustard.
Because "Borg" (Score:2)
Why the fuck do I care about BorgBackup?
Because: ... Star Trek
* Saying "Borg" sounds nerdy.
* Saying "Borg" sounds funny.
" Because thinking "data" and "assimilated" together sounds funny.
* Because thinking "assimilated" gives immature nerds the giggles
* Because what's one more backup system in your collection, er, I mean, assimilation
* Because
* Other [fill in the blank]
* Because CowboyNeal wants his soul backed up and assimilated
My data... (Score:1)
Not for me, thank you. Plain old rsync works great.
Re: (Score:2)
Surely you can buy Björn Borg socks and underwear in countries other than Sweden...?
Rsync is not for backup (Score:2)
Sorry, rsync has a number of issues that make it unacceptable (IMO) for backup. Among other reasons, it doesn't preserve metadata, and the rsync people think that's the correct behavior.
Don't get me wrong: I use rsync all the time, but never for backup.
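For context, the usual attempt to make rsync keep as much metadata as it can looks something like this (assuming rsync 3.x on both ends; paths are placeholders). The disagreement above is about what such a run still misses or silently drops:

    # -a = archive mode (perms, owner, group, times, symlinks, devices)
    # -H = hard links, -A = ACLs, -X = extended attributes
    rsync -aHAX --numeric-ids --delete /home/ /mnt/backup/home/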
It definitely has its place... (Score:5, Insightful)
There is a definite place for BorgBackup, attic, bup, obnam, zbackup and other deduplicating backup utilities. The ability to just toss data at them whenever you feel like it, with only compressed deltas getting stored, is a nice thing. Same with having decent encryption.
I have personally been using zbackup for a while; it is quite usable for backups, especially via SSH, where I can pull data from my NAS and store only what has changed onto some media I rotate out for safekeeping. Zbackup has not had much Git activity, but BorgBackup has had an extreme amount of work done on it, so it is definitely a utility to watch and consider using.
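A rough sketch of that kind of pipeline, with hostnames and paths invented; zbackup deduplicates and compresses whatever stream it is fed on stdin:

    # one-time repository setup (or use --password-file for an encrypted repo)
    zbackup init --non-encrypted /mnt/rotating/zrepo

    # pull a tree from the NAS and feed it to zbackup as a single tar stream
    ssh nas tar cf - /export/data | zbackup backup /mnt/rotating/zrepo/backups/data-2016-04-01.tar

    # restore later by reversing the pipe
    zbackup restore /mnt/rotating/zrepo/backups/data-2016-04-01.tar | tar xf -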
Re: (Score:2)
I've been using BackupPC with compression and deduplication for well over 10 years now. Current pool stats show 35TB of backups compressing down to 4TB in my pool.
Traditional backup could become irrelevant (Score:2)
Since data capacity has outpaced data rate by many orders of magnitude, anyone trying to maintain terabytes of data can find himself in an awkward situation where the time to create a backup exceeds the desired backup interval. Real-time mirroring or some other fault-tolerance scheme might become the only reasonable solution to data assurance. If very large numbers of files are involved and an ongoing change log isn't maintained by the file system, then even incremental or differential backups become a time-cons
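Back-of-the-envelope numbers for the capacity-versus-rate point: a single full pass over 10 TB at a sustained 100 MB/s already takes more than a day, before any backup-side processing:

    # seconds, then hours, to read 10 TB (10,000,000 MB) at 100 MB/s
    echo $(( 10000000 / 100 ))        # 100000 seconds
    echo $(( 10000000 / 100 / 3600 )) # ~27 hours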
Re: (Score:1)
They're not reasonable solutions. An accidental "rm -rf
Re: (Score:2)
Then put in safeguards against careless, foolhardy use. What part of the chasm between capacity and rate didn't you grasp? Don't count on solid-state storage to be the messiah, either: we now have a 15TB SSD that again provides a disproportionately smaller increment in rate. Rate will never catch up to capacity.
Re: (Score:2)
And what's wrong with that? If you wanted to undo it, you would just use the backup of the filesystem as it was before you executed the fateful command.
Are we stuck in the thinking that we can only have one complete copy of something? Must suck for daily backups where
Re: (Score:2)
He was in the "drive mirror is the best backup" camp, I'm firmly in the "sync and unplug a USB HD". his version of windows deep-6'ed itself, his drives were past recovery with common tools, and had to start over. lost all his business e-mail, and a lot of important documents, thankfully, he
Re: (Score:2)
Wrong. A COW mirror with automatic snapshots protects against many other scenarios, and most (but not all) hardware failures. A COW mirror with frequent scheduled incremental snapshot replication to a remote location protects against just about everything, with no USB drives involved.
Unfortunately, COW mirrors won't win any write performance benchmarks against XFS, as the internal write path tends to be far more complex. But seriously, use the Volvo for 90%
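The post doesn't name a filesystem, but ZFS is the usual example of this pattern; a sketch of scheduled incremental snapshot replication, with pool, dataset and host names invented for illustration:

    # take a new snapshot, then send only the delta since the previous one to a remote box
    zfs snapshot tank/data@2016-04-01_1200
    zfs send -i tank/data@2016-04-01_1100 tank/data@2016-04-01_1200 | \
        ssh backuphost zfs receive -F backup/data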
Re: Traditional backup could become irrelevant (Score:2)
The only solution for that is actual block-level backups. This requires filesystem support for snapshots and/or for indicating which blocks have changed. It still takes a week or so to take the initial backup, but after that you could take a snapshot every 5 minutes and replicate just the changes across multiple volumes. If your ingress of data exceeds your capacity of egress to a backup, however, then you need to rethink your architecture.
I have a system that can take hours just to traverse and read the metadata
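One concrete way to do the every-few-minutes replication, using btrfs here purely as a stand-in for "a filesystem with snapshot support" (subvolume layout and host name are hypothetical):

    # read-only snapshot of the data subvolume
    btrfs subvolume snapshot -r /srv/data /srv/snapshots/data-now

    # send only the blocks changed since the previous snapshot
    btrfs send -p /srv/snapshots/data-prev /srv/snapshots/data-now | \
        ssh backuphost btrfs receive /backup/snapshots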
Re: (Score:2)
I suggest that you get a better filesystem then. IBM's Spectrum Scale (aka GPFS), while I admit it costs money, would take minutes to traverse the metadata of even hundreds of millions of files.
Re: (Score:2)
Traversing metadata is not my primary concern. It only happens once every few months, when someone forgot where they put their files, and it's usually deducible in other ways. There are better ways to spend my money than expensive software, though; I think a license for this alone would cost as much as the storage array itself. If I wanted to spend that much money, I'd just invest in all-flash storage.
On my 5th backup system (Score:2)
A few weeks ago I cut the cord and migrated away from Windows to Linux (Mint). I was using SyncBack to back up my files; now I need to find a new solution.
I'm on my 5th package, because the first 4 were screwy in various ways. The default backup tool doesn't save profiles, so you have to type in the source and destination every goddamn time. (But when you do, it *does* work.)
"BackInTime" apparently allows multiple profiles, so I created a profile and hit "close" and got the error "default profile source direct
Re: (Score:1)
Take a close look at 'rsync' and then write a script that uses it to do what you want.
It even allows you to do versioned backups once you understand how to use the '--link-dest=' option properly.
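A sketch of that --link-dest pattern (directory layout and dates made up): unchanged files in the new snapshot become hard links into the previous one, so each run only costs the space of what changed:

    today=$(date +%F)
    rsync -a --delete \
        --link-dest=/mnt/backup/latest \
        /home/ /mnt/backup/$today/

    # point "latest" at the snapshot we just made
    ln -sfn /mnt/backup/$today /mnt/backup/latest

On the very first run /mnt/backup/latest won't exist yet; rsync just warns and does a full copy.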
Re: (Score:2)
Try FreeFileSync at http://www.freefilesync.org/ [freefilesync.org]. It has Linux versions, but I've used it only on Windows so far, and not as a regular backup tool yet. Its interface also takes a little getting used to at first.
As it happens, that's my next choice. Already installed, will try it this evening.
duplicity comparison anyone? (Score:3)
Client/server and tape autoloaders? (Score:3)
This might be fine where you've got a single Linux machine and, say, back up to an external USB3 hard drive, but what about bigger setups than this? For example, multiple Windows/Linux client machines to back up and a central server with an autoloader/barcoded Ultrium tape drive attached? There are very few open source solutions that deal with this in a heterogeneous environment (Amanda, which is poor with Windows clients, and Bacula, which is ridiculously complex to set up, are just about the only two that spring to mind). Until BorgBackup can do something similar, it's not really useful in a multi-machine/autoloader setup (no, I don't want to install two backup systems on every client...).
Borg backup kicks butt (Score:2)
I have been using Borg backup for a few months. I absolutely love it. Before Borg, I had a nightmare backup scheme. I have a lot of data, and I cannot back up all of it every week. It would require too much storage. I got a little taste of deduplicated backup with the backup tool Microsoft includes in Windows Server 2012. I was immediately hooked. But it has severe limitations. I wanted a very flexible backup program that did deduplication well. In my opinion, there is nothing else that even comes
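For anyone in the same spot, the piece that makes "I can't keep every backup forever" workable on top of the dedup is a retention policy; a sketch with invented numbers and paths:

    # run after each borg create: keep recent history dense, older history sparse
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo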
Comment (Score:1)
BorgBackup 1.0.0, so is that Locutus? When will version 7.9 be released?