Apple Introduces New File System APFS With Tons Of 'Solid' Features (apple.com)
On the sidelines of its Worldwide Developers Conference, Apple also quietly unveiled a new file system dubbed APFS (Apple File System). Here's how the company describes it: HFS+ and its predecessor HFS are more than 30 years old. These file systems were developed in an era of floppy disks and spinning hard drives, where file sizes were calculated in kilobytes or megabytes. Today, solid-state drives store millions of files, accounting for gigabytes or terabytes of data. There is now also a greater importance placed on keeping sensitive information secure and safe from prying eyes. A new file system is needed to meet the current needs of Apple products, and support new technologies for decades to come. Ars Technica dug into the documentation to find that APFS comes with a range of "solid" features, including support for 64-bit inode numbering and improved granularity of object time-stamping. "APFS supports nanosecond time stamp granularity rather than the 1-second time stamp granularity in HFS+." It also supports a copy-on-write metadata scheme, which aims to ensure that file system commits and writes to the file system journal stay in sync even if "something happens during the write -- like if the system loses power." The new file system offers an improvement over Apple's previous full-disk encryption FileVault application. It also features Snapshots (which let you throw off a read-only instance of a file system at any given point in time) and Clones. According to the documentation, APFS can create file or directory clones -- and like a proper next-generation file system, it does so instantly, rather than having to wait for data to be copied. From the report: Also interesting is the concept of "space sharing," where multiple volumes can be created out of the same chunk of underlying physical space.
This sounds on first glance a lot like enterprise-style thin provisioning, where you can do things like create four 1TB volumes on a single 1TB disk, and each volume grows as space is added to it. You can add physical storage to keep up with the volume's growth without having to resize the logical volume. As the documentation notes, things are at an early stage, so it might take a while before APFS becomes available to general users.
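The nanosecond time-stamp claim is easy to poke at from user space on any POSIX system; here's a minimal Python sketch (the file is a throwaway temp file, purely for illustration) contrasting the two granularities:

```python
import os
import tempfile

# Create a file and read its modification time at two granularities.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

st = os.stat(path)
coarse = int(st.st_mtime)    # HFS+-style: whole seconds only
fine = st.st_mtime_ns        # APFS-style: POSIX nanosecond field

print(coarse, fine)
os.remove(path)
```

Whether the low digits of `st_mtime_ns` are meaningful depends on the filesystem underneath; HFS+ reports zeros below the second.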
If Swift is any guide... (Score:5, Funny)
Re: Swift is stable. (Score:3)
Re: Swift is stable. (Score:4, Insightful)
What a ridiculous argument.
By that logic, backwards compatibility is never an issue. Why even try to offer it? You can just keep using the old version! Compatibility solved!
Re:Swift is stable. (Score:5, Funny)
Swift 2 is stable enough that I get occasional calls from recruiters asking for five years of it as a language for dev jobs. So, if it is good enough to transcend time/space, it should be stable enough.
Re: (Score:3)
It's HR boilerplate. They know nothing about the technology, but they know that for entry level they want degree plus course work in $X, for junior they want 5 years $X, and for senior they want 10 years $X and $Y. I've read a job description for our group once and showed it to my manager asking what position it was for, and he replied "wait, that's not what I wrote!"
Re: (Score:2)
This new filesystem should become stable in about 2028.
That won't stop the fanboys. I'm just waiting for the first one to lose all his data because of this and try to spin it as a good thing!
Well, everyone on Slashdot calls me a "fanboi"; but I'm here to tell you that neither I, nor any of my other Mac-using friends rush right out and update to ANYTHING even approaching a major release, no matter how tempting it may be to try out some new feature. I have taught them well, to sit back and wait for the idiots to do the "field testing".
Re: (Score:2)
This is just a guess, but the reason most people call you a "fanboi" is because of 2 things:
1. Your username screams "fanboi"
2. Every single one of your posts promotes macs and apple
Like I said, just a guess, but, it just could be your fault.
Re:If Swift is any guide... (Score:4, Funny)
His first choice for username was "TimCooksTastyBottom" but Apple's board of directors didn't think that was an appropriate Slashdot username for their CEO, so they settled on "macs4all"..
Re:If Swift is any guide... (Score:4, Informative)
This is just a guess, but the reason most people call you a "fanboi" is because of 2 things:
1. Your username screams "fanboi"
2. Every single one of your posts promotes macs and apple
Like I said, just a guess, but, it just could be your fault.
Anyone who thinks that the gentle wish connoted by my Username is cause for the amount of ad hominem abuse I have received is sadly lacking in online etiquette.
I actually make many posts on Slashdot that have nothing to do with Apple. Depends on the Thread.
Re:If Swift is any guide... (Score:5, Insightful)
Gentle wish? Fuck your gentle wish. You're happy using unjustly overpriced SHIT because you're a dumbass. The rest of us, who have more knowledge and experience than you'll ever have in your whole life, know better.
Been designing computer hardware and software since 1976.
Fluent in dozens (literally) of Assembly-languages from 6502 to ARM7 TDMI, plus C, PHP, HTML and several BASIC variants. Never did like C++ or Java, though...
Paid Embedded Developer (hardware and software) for nearly 40 years, with a specialty in R&D of industrial Real-Time measurement and control PRODUCTS.
Currently Develop Windows ERP Applications.
Certified MS SQL Server Admin.
The list goes on...
Yeah. I'm a dumbass alright.
STFU.
I like Apple equipment and OSes precisely BECAUSE I got all that "Work ON my computer" shit out of my system 30 frickin' YEARS ago.
Apple stuff isn't overpriced; because my time (and frustration) is actually WORTH something.
Re: (Score:2)
Well, everyone on Slashdot calls me a "fanboi";
Really, macs4all, why in the world would that be? I can't imagine why...
Re: (Score:2)
Couple of years! More like a decade at minimum if you ask me.
Re: (Score:2)
Couple of years! More like a decade at minimum if you ask me.
Yep, that sounds more realistic for real-world use. At least 4 or 5 years, but yeah, it'll need some serious real-world testing before any claim of "stable" will be credible.
Re: (Score:3)
Re:If Swift is any guide... (Score:4, Informative)
The question is how long this new fs has been under development at Apple. Apple is well known for designing and testing software in the background and only announcing it after it has reached a stable point. So it could be 3-5 years in development already.
I can see it rolling out to Mac users with the next OS and being the default in just 2-3 years.
Re:If Swift is any guide... (Score:5, Interesting)
I suspect it has been in progress since Apple's decision to pull ZFS, which offered many of these features... and possibly longer. That's way more than 5 years; in fact, next year (the expected release year), it will have been the ten years that the GP says makes a filesystem trustworthy.
Compression (Score:4, Insightful)
C'mon, it's 2016. Where is compression?
Re: (Score:2, Informative)
because it's 2016 and disk compression isn't necessary for everyday use. You have inordinately cheap disk, and performance far outweighs the need for compression. Sure, you could find lots of value in compression.... and you can get it with file compression utilities. Any compression algorithm that would give anything better than "average" couldn't be stream oriented and would therefore likely kill performance.
Yes, it could be done. But is it needed? Nope.
Re: (Score:3)
I disagree; as ZFS demonstrates, >500MB/sec to a compressed filesystem is very achievable, including random I/O access.
File compression utilities don't work well with virtual machines. You can't just start the VM in a zip file...
Re: (Score:2)
No, but block-level deduplication works wonders with VMs, especially when you have multiple VMs that are all based on the same core OS image...
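The idea behind block-level dedup is simple enough to sketch: hash fixed-size blocks and store each unique block exactly once, keeping only references per file. A toy Python version (the 4 KiB block size and the "VM image" contents are made up for illustration):

```python
import hashlib
import os

BLOCK = 4096  # hypothetical dedup block size

def dedup_store(images):
    """Store images as lists of block hashes; identical blocks are kept once."""
    pool = {}       # hash -> block bytes, stored exactly once
    catalogs = []   # one list of block references per image
    for data in images:
        refs = []
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK]
            h = hashlib.sha256(block).hexdigest()
            pool.setdefault(h, block)   # only the first copy is kept
            refs.append(h)
        catalogs.append(refs)
    return pool, catalogs

# Two "VM images" sharing the same base OS blocks
base = os.urandom(2 * BLOCK)            # common core OS image (2 blocks)
vm1 = base + b"app1" * 100
vm2 = base + b"app2" * 100
pool, catalogs = dedup_store([vm1, vm2])
print(len(pool))   # 4 unique blocks serve 6 block references
```

Reconstructing an image is just concatenating the blocks its catalog points at; the shared base is stored once no matter how many VMs reference it.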
Re:Compression (Score:5, Insightful)
What kinds of files are people generating today? Pictures and video. What kinds of files are already compressed to begin with? Pictures and video. Compression doesn't make sense unless you have massive amounts of text or database files.
Transparent decompression through OSXFUSE (Score:2)
You have inordinately cheap disk
Because of Apple's tendency to solder the SSD to the mainboard in the Mac Pro and all current MacBook laptops other than the non-Retina MBP, an upgrade requires replacing the whole computer at a substantial cost. Only external storage is "inordinately cheap" on a Mac, and not all laptop use cases make external spinning rust practical.
Sure, you could find lots of value in compression.... and you can get it with file compression utilities.
That's fine, so long as these utilities can let the user mount an archive read-only as a folder and thereby let other applications see the archive's contents as files in as a f
CORRECTION (Score:2)
I was mistaken. I apologize. Thank you for clarifying. So let me correct myself:
In order to understand Apple's intent in leaving transparent compression out of a file system, we'll have to watch for what Apple chooses to solder down in its next round of hardware.
Re: (Score:2)
That's common wisdom, but I don't think it stands up in the modern world. Here are my bonnie results on a softraid-5 partition with a 10GB test file:
simon% bonnie -s 10000 -m imac
File './Bonnie.2543', size: 10485760000
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 3...Seeker 2...Seeker 1...start 'em...done...done...
Re: (Score:2)
I said lz4, not gzip. The difference is massive.
Re: (Score:2)
Yep, you're right - I misread your comment. The LZ4 variant does indeed do it in time:
simon%
Compressed 555745280 bytes into 555745827 bytes ==> 100.00%
1.05 real 0.26 user 0.69 sys
Re: (Score:2)
And don't forget that's single-threaded, whereas a filesystem (which will split files up into blocks) can use all available cores for compression.
Re: (Score:2)
Gzip is completely the wrong algorithm. Modern processors can handle lz4 compression faster than a single SSD can write (around 400MB/s) and decompression a lot faster than one can read (1-2GB/s). lz4 also has quick-fail, so will skip compression when entropy is low (e.g. files that are already compressed or random data), so in your example would add so little latency that it would be hard to measure.
Your experiment is also putting the compression in completely the wrong part of the pipeline. When rea
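The quick-fail behaviour described above is easy to model. lz4 isn't in the Python standard library, so zlib at its fastest level stands in here, and the bail-out threshold is made up; the shape of the logic is the point:

```python
import os
import zlib

def maybe_compress(block, threshold=0.95):
    """Compress a block only if it shrinks meaningfully; otherwise store raw.
    Mimics lz4-style early bail-out on high-entropy data (zlib stands in)."""
    packed = zlib.compress(block, 1)           # fast, low-effort pass
    if len(packed) < threshold * len(block):
        return ("z", packed)
    return ("raw", block)                      # incompressible: store as-is

text = b"the quick brown fox " * 1000   # low entropy, compresses well
noise = os.urandom(20000)               # high entropy, will not compress

t = maybe_compress(text)
n = maybe_compress(noise)
print(t[0], n[0])   # → z raw
```

Already-compressed files (JPEG, MP4, zip) look like `noise` here, so they pass through with almost no added latency.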
Re: (Score:2)
> That's common wisdom, but I don't think it stands up in the modern world. Here's my bonnie results on a softraid-5 partition with a 10GB test file:
Below is proof that RLaager is accurate.
Re: (Score:2)
Random reads and writes work fine, assuming it's sanely implemented. ZFS only compresses per-block (normally up to 128KiB). Reading and writing those blocks doesn't depend on other blocks in the same file. It's not even difficult to have a file on ZFS whose blocks have different types of compression applied to them. Works fine.
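That per-block independence is the whole trick: random access never needs neighbouring blocks. A sketch in Python, with zlib standing in for the real codec and the 128 KiB record size from the comment above:

```python
import zlib

BLOCK = 128 * 1024  # ZFS-style 128 KiB record size

def write_blocks(data):
    """Compress each block independently, as ZFS does per record."""
    return [zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

def read_block(blocks, n):
    """Random access: decompress only block n, no neighbours needed."""
    return zlib.decompress(blocks[n])

data = (b"A" * BLOCK) + (b"B" * BLOCK) + (b"C" * 1000)
blocks = write_blocks(data)
middle = read_block(blocks, 1)   # touches exactly one compressed block
```

Rewriting one block likewise means recompressing only that block, which is why stream-oriented compressors (gzip over a whole file) are the wrong mental model here.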
Re: Compression (Score:4, Insightful)
Regarding the "performance penalty", it's generally going to be positive and improve performance. We are talking lz4 here, not gzip/bzip2/xz. It's fast, trading a lower compression ratio for performance. It can compress and decompress blocks in parallel. It can do this much faster than it can read data from disk, so you'll actually improve read and write speeds. And this is on top of ZFS being able to pull data off multiple spindles as the data is distributed over multiple vdevs, with redundant copies of data, etc. It's likely not the penalty you think it is. It does a lightweight lz4 pass and bails out early if a block is poorly compressible.
% zfs get refcompressratio red/home/rleigh system/usr/ports system/usr/src system/var/log system/var/mail
NAME PROPERTY VALUE SOURCE
red/home/rleigh refcompressratio 1.33x -
system/usr/ports refcompressratio 1.60x -
system/usr/src refcompressratio 2.16x -
system/var/log refcompressratio 6.54x -
system/var/mail refcompressratio 4.99x -
With compression like this, you no longer need to bother compressing rotated logs. And while the homedir compression is small in comparison, it's gained me an extra 100GiB just for this single dataset, which is not to be sneezed at.
Re: (Score:3)
I wouldn't mind deduplication either.
Re: (Score:2)
Dedupe is more valuable than compression because you can usually find duplication even among unrelated compressed data. I have dedupe enabled on a volume with DVD ISOs and see ~20% compression.
We had a laugh at work believing that the dedupe was due to plot overlap in the movies.
Re: (Score:2)
Deduplication makes sense when you've got multiple copies of large amounts of data (e.g. a f
Re: (Score:2)
Re: (Score:2)
C'mon, it's 2016. Where is compression?
Not there, because Apple is not going to copy DoubleSpace from MS-DOS.
TimesTwo and DiskDoubler (Score:2)
Long ago, there was a Stacker-like tool for classic Mac OS called TimesTwo that installed itself as a SCSI driver. There were also file-level tools called DiskDoubler, Now Compress, and StuffIt SpaceSaver that intercepted file open calls and decompressed files in the background. Files were written uncompressed, to be compressed later by the "AutoDoubler" background task.
Re:2016? crypto-ransom protection !! (Score:2)
Compression? Why -- to compress all of those MP3 and MP4 video files? Or your TXT docs?
I was thinking that a current issue is the crypto-ransom stuff and that a FS needs to version on-demand. Sure everyone is *supposed* to have backups. I don't know what the Mac world is like but most PC folks I know do a file-copy to a USB drive (if they do anything at all). I'm not talking about what smart IT folks do - referring instead to general users.
How many people have a Time Machine? (and is that good enough?)
Re: (Score:2)
With ransomware on the rise, having a filesystem that can take snapshots, perhaps coupled with a version of Time Machine that works on snapshots, will help provide some mitigation. If the ransomware doesn't have root, it can't purge snapshots, although it can do mayhem in other places.
I would say Time Machine is OK for an "oh shit" backup for bare metal restores, but I wouldn't really rely on it as my sole way to retrieve data, because I've had instances where TM backups got hopelessly corrupted. I would p
Re: (Score:2)
I back up my MacBook Pro to a FreeBSD box using ZFS with compression and deduplication (and snapshot it periodically, because if TimeMachine detects that your backups are corrupted then the only option is to delete them and redo from start, and it's nice to be able to revert just one backup if the last backup broke something). With lz4 compression, the compression ratio for the ZFS filesystem that I use as a backup target is 2.08x - that's a fairly hefty saving. It's harder to measure how much dedup is sav
Re: (Score:2)
Re: (Score:2)
Putting it in the file system is standardized and transparent. Doing things at a "per application" level runs the risk of making your solution completely incompatible with anything else on your own platform.
Re: (Score:3, Informative)
C'mon, it's 2016. Where is compression?
Well, it has been part of HFS+ since Snow Leopard [arstechnica.com] (2009). Where have you been?
So, I would imagine that the new FS will support it as well.
Re: (Score:2)
I've been using a Mac... including being an early adopter of Clusters.
HFS+ compression isn't designed for user files, which is why there are no native tools to use it *for end users*.
There are some hacky command line things you can do, but it's messy, can break, and is totally useless for anything that modifies the file (so, VMs, databases, and the like).
If you're going to use that, you may as well just zip the file and unzip it before you use it.
Re: (Score:2)
I've been using a Mac... including being an early adopter of Clusters.
HFS+ compression isn't designed for user files, which is why there are no native tools to use it *for end users*. There are some hacky command line things you can do, but it's messy, can break, and is totally useless for anything that modifies the file (so, VMs, databases, and the like).
If you're going to use that, you may as well just zip the file and unzip it before you use it.
Thanks! I wondered what happened to HFS+ Compression. I remember hearing about it in a WWDC Keynote, and then just forgot it existed.
Guess I now know why...
Re: (Score:2)
Likely it is some rebranded implementation of ZFS
Re: (Score:2)
a) Implication that it's complex to add compression.
Other common filesystems have it, NTFS, ZFS, BTRFS. Given the amount of money Apple have, I doubt it's that complex.
b) Disks are cheap.
Looking at apple.com/uk pricing, the difference between the 512GB SSD and 1TB SSD is £400. And if you choose incorrectly, you can't just open the case up and change it.
You could then use an external USB, as I do, but sleep/wake doesn't work properly, and endurance on USB keys isn't exactly ideal.
Or you could hang a wh
Good Luck (Score:4, Insightful)
It's a hard job. We're into year fifteen of ZFS and it's just starting to gain some features that make administration of it manageable by non-experts. Give it another five before you want to make it your default on a desktop for grandma. BTRFS will be along five years after that.
If Apple can pull off something similar in a couple years, it will be a major triumph. It's too bad for everybody that Steve got bitchy at Jonathan and the community hasn't had Apple's help as a contributor for the past decade.
Re: (Score:2)
Re: (Score:2)
It's too bad for everybody that Steve got bitchy at Jonathan and the community hasn't had Apple's help as a contributor for the past decade.
I'm pretty sure it was ZFS' assimilation by Oracle that put the brakes on that deal.
Re: (Score:2)
Rumors say the only reason OSX didn't go ZFS was because Jonathan Schwartz spilled the beans. Either Steve Jobs gets to make the announcement about the 'next big thing' in his big Apple presentation or there is no 'next big thing'.
Here is the Slashdot story from 2007:
https://apple.slashdot.org/sto... [slashdot.org]
Re: (Score:2)
They didn't go with ZFS because it was case-sensitive, and at the time that was a major problem, not just for the OS but for all of the apps written for it. They had just migrated to Intel from PPC and didn't want to impose that on developers yet. There were also some murky licensing issues they didn't want to gamble with.
Now APFS has the same case-sensitive problem (for them) but they finally realize HFS+ just can't scale and continue as it is. Apple has finally realized their stubbornness is holding them back.
W
Re: (Score:2)
You can actually set ZFS to be case-insensitive, on a per-dataset basis, with the casesensitivity=sensitive|insensitive option. Support for that was added in 2007, so I guess it was pretty new at the time.
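What a per-dataset case-sensitivity switch means in practice can be shown with a toy model (the class and names here are made up; this is not ZFS code, just the lookup semantics the option controls):

```python
# Toy model of a per-dataset case-sensitivity switch for name lookups.
class Directory:
    def __init__(self, case_sensitive=True):
        self.case_sensitive = case_sensitive
        self.entries = {}

    def _key(self, name):
        # Insensitive datasets index names by a folded form.
        return name if self.case_sensitive else name.casefold()

    def create(self, name):
        k = self._key(name)
        if k in self.entries:
            raise FileExistsError(name)
        self.entries[k] = name   # remember the name as originally spelled

    def lookup(self, name):
        return self.entries.get(self._key(name))

ci = Directory(case_sensitive=False)
ci.create("Makefile")
assert ci.lookup("makefile") == "Makefile"   # insensitive: matches

cs = Directory(case_sensitive=True)
cs.create("Makefile")
assert cs.lookup("makefile") is None         # sensitive: distinct names
```

Note the insensitive directory still preserves the original spelling on disk, which is the case-preserving behaviour Mac apps historically expect.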
This has been my biggest gripe about OS X/macOS... (Score:5, Informative)
I'm glad Apple has introduced this. As of now, the snapshot API and others are not present, but now Apple is at parity with everyone else in the industry.
APFS isn't like ZFS or btrfs, but more like ReFS, in that it still requires a logical volume manager. It would be nice if it had RAID, but that is a minor item compared to just getting rid of HFS+, which just had to be killed.
Some features I like:
The ability to encrypt volumes with multiple volume keys. It looks like it will be similar to Oracle's ZFS on Solaris, but the implementation can be completely different.
Snapshots. Something like zfs send and zfs send -i will be quite useful for backups.
Copy-on-write capability, which is useful for VMs.
Of course, it appears that Apple will be documenting and publishing the FS's specs in 2017, which will be even more useful for compatibility.
All in all, even though there is no RAID 5/RAID-Z or LVM replacement, this is a heck of a lot better than what OS X/macOS has now.
Re: (Score:2)
Re: (Score:2)
I've got a crazy idea (Score:4, Insightful)
How about letting users unplug removable media without having to eject it first like every other OS has had for about a decade.
Re: (Score:2)
Don't know what you're on about, I still have to do that shit on Windows 10 and Linux....
Re:I've got a crazy idea (Score:5, Informative)
Having occasionally yanked out removable media on OS X without properly ejecting it, I can tell you that you can do so now. But you run the same risks as on every other OS and commonly-used filesystem: things may be corrupted in the process and have to be fixed the next time you insert the drive.
What are these "other OS" you speak of? Windows? No. It will happily corrupt files depending upon what you are doing with the drive in question at the time you yank it out. Likewise Linux and most of its filesystems. Modern journaled filesystems are likely to be able to put things back into some semblance of order in the aftermath, but if you think it is routine to be able to do this without special setup you are mistaken.
The only thing I've noticed is that Windows will complain less frequently when you yank out a device, whereas OS X will reliably and correctly warn you that doing so is dangerous and not recommended unless you eject it in software first. In fact, OS X is better at informing you which program has files open on the device when you attempt to eject it, whereas Windows will just vaguely tell you that something is still holding up the process. Oh, and Windows "helpfully" disables write caching to slow down your pluggable devices in an attempt to diminish the likelihood you'll corrupt something. Whether you consider that truly helpful or not is debatable. It's a significant tradeoff.
Re: (Score:2)
The eject command forces the filesystem to flush any read/write buffers. It completes only when anything that's being written to the removable media or read from it has finished. So if you remove the media without first ejecting it, there's a risk that some data never finished writing and you have a corrupt file(s) on the media instead of the files you think you had, or something else
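What eject does for pending writes can be shown in miniature: drain the userspace buffer, then ask the kernel to commit its cache before the device goes away. (The path is a throwaway temp file, purely for illustration.)

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "report.txt")
    with open(path, "w") as f:
        f.write("important data")
        f.flush()              # drain Python's userspace buffer
        os.fsync(f.fileno())   # ask the OS to push its page cache to the device
    with open(path) as f:
        durable = f.read()
```

Eject does this for every open file plus the filesystem's own metadata, which is why it can take a moment to complete on a busy drive.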
Re: (Score:3)
The parent is right.
But not only that. The flash controller could be running a background process, such as offline deduplication or data block movement for static wear levelling. These processes are *not* triggered by reads or writes from the OS, so even when you are not actively writing to the disk, simply removing it without ejecting *might* cause data corruption and data loss.
Treat disconnection like power loss (Score:2)
In theory, any journaled file system would tolerate surprise disconnection to the same extent it tolerates surprise power loss. The problem is that journaled file systems tend to be either proprietary or copylefted, hindering their wide adoption for removable media across all major desktop operating systems.
Re: (Score:2)
How about letting users unplug removable media without having to eject it first like no other OS has.
There is no OS in existence that allows that.
As a Mac user, I've noticed that Windows 8 and up seem to handle that particular situation just fine.
Re: (Score:3)
How about letting users unplug removable media without having to eject it first like no other OS has.
There is no OS in existence that allows that.
As a Mac user, I've noticed that Windows 8 and up seem to handle that particular situation just fine.
Linux and MacOS (AFAIK) use write caching, which makes it a good idea to eject USB drives on these operating systems (caveat: it's been about a year since I last checked this, things may have changed). Windows on the other hand does not "handle USB drive yanking just fine"; it just disables the write cache for external drives, which reduces the chances of file system corruption but does not eliminate it 100%. Disabling the write cache also slows performance (according to the Windows device properties menu, policies
Re: (Score:2)
Generally it is always a good idea to eject your drive unless you have a journaled file system on it which should theoretically be able to recover from yanking related corruption.
When I'm on a Windows computer (which fortunately isn't that often), I've always ejected external media just to be safe. Thing is, I was recently on a Windows 10 machine and didn't see that option even available to me.
Re: (Score:2)
Yeah, a journaling file system should protect you from file corruption. But if you yank the drive out in the middle of a write, it should roll the file back to the previous version. You can still lose the data that's in flight unless you properly eject your media.
Re: (Score:2)
We used to have a faculty member here who would keep all her work in her email - papers she was writing, proposals she was working on, etc. So we'd get these panicked phone calls to our computing group phone # "I have a proposal due today and can't get to my email!" (no, we weren't staffed to cover the night shift - so she'd then whine to the Chair about how many hours she'd had to wait when she had this big proposal due and we were "obstructing" her)
Fortunately she moved on to bigger and better things seve
Re: (Score:2)
You're wrong:
http://windows.microsoft.com/en-us/windows7/safely-remove-devices-from-your-computer [microsoft.com]
Windows (7 at least) allows it and won't complain that you've unplugged your device before you've ejected it. I hate this so much on my Mac though.
Precursor to virtualization? (Score:2)
Will it finally have .. (Score:2)
.. case sensitive filenames by default? :D
Just wondering. I know HFS+ can have case-sensitivity, but not sure if it is on by default. And some people seem to be discouraging that, based on quick googling.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Great! Another cross-platform headache! (Score:2)
HFS may be thirty years old but we still have major headaches transferring files between Macs and other machines. I truly believe that Apple would be better served if they invested in a open filesystem format.
Terabyte SSDs? (Score:2)
Really, terabyte SSDs? Today's SSDs, in terms of storage capabilities, are more like mechanical drives of 20 years ago. Yes, data centers may have large SSDs, but not users. Will average users benefit from this new file system, or will things like 64bit pointers on a drive less than a gigabyte simply consume more of the drive for little benefit?
Finally, are not there already file systems available that meet whatever this new need of Apple's is that would not require the recreation of the wheel (or disk)? If
Holy shazbot (Score:2)
"APFS supports nanosecond time stamp granularity rather than the 1-second time stamp granularity in HFS+."
Damn, 1-nanosecond time stamp granularity? A factor of one billion improvement in resolution, that's fairly impressive. I'm not sure it'll be of much use to a lot of people, but I'm all for greater precision/resolution in stuff like this.
Re: (Score:2)
I was hoping Apple would license ZFS or even Veritas Volume Manager/Veritas FS from Symantec. Heck, even ReFS from MS. However, with all the cash they have, I am happy they are putting out something. I wouldn't expect it to be a default filesystem until 2017, perhaps 2018, as filesystems are something never to be undertaken lightly, but long term, it is crucial to macOS's usefulness, especially as SSDs get larger, and TRIM support is more critical to performance.
Not Invented Here Syndrome? (Score:5, Informative)
ZFS [wikipedia.org] is under CDDL [wikipedia.org] and would not even need to be "licensed" in the usual sense — it is free for anybody to take. "Too free" [sfconservancy.org] for certain zealots, in fact, which is why it was not part of Linux kernel for a while — until the supposed "license incompatibility" myths [warpmech.com] got debunked.
Even Linux [arstechnica.com] now offers ZFS [zfsonlinux.org] — Apple would've had a much easier time porting it, because MacOS is already FreeBSD-based and the FreeBSD-project had ZFS available "out of the box" [freebsd.org] for several major releases spanning many years.
What Apple found lacking in ZFS that would justify creating their own is, indeed, a mystery. Probably a case of the Not Invented Here [wikipedia.org] Syndrome. Sad...
Re:Not Invented Here Syndrome? (Score:4, Informative)
Re:Not Invented Here Syndrome? (Score:5, Informative)
They actually had ZFS working for 10.6, but scrapped it because they couldn't come to terms with Sun. The package was on MacOS Forge back in the day, and the lead developer of it left Apple shortly afterward and created his own 3rd party implementation.
This was before ZFS was licensed under CDDL.
Re:Not Invented Here Syndrome? (Score:5, Interesting)
Re: (Score:3)
> Apple has money that's vulnerable to baseless patent infringement lawsuits.
Creating your own separate implementation doesn't stop that.
Re: (Score:2)
It is, literally, for everything. Some of the features only make sense if you have multiple physical drives — devices, that are unlikely to fail at the same time. But compression [pthree.org], deduplication [oracle.com], snapshots [thegeekdiary.com], encryption [oracle.com] — these are all useful on anything.
Bring on OJFS (Score:2)
I was hoping Apple would license ZFS or even Veritas Volume Manager/Veritas FS from Symantec.
I thought Veritas [wikipedia.org] was also called Online Journaled File System (OnlineJFS or OJFS). What else is OJFS [wikipedia.org]?
Re:Bring on OJFS (Score:5, Funny)
I was hoping Apple would license ZFS or even Veritas Volume Manager/Veritas FS from Symantec.
I thought Veritas [wikipedia.org] was also called Online Journaled File System (OnlineJFS or OJFS). What else is OJFS [wikipedia.org]?
OJFS? Why do you computer types insist on naming your filesystems after murderers?
Re: (Score:2, Funny)
Well there's ReiserFS. At least there's precedent.
Re: (Score:2)
I was hoping Apple would license ZFS or even Veritas Volume Manager/Veritas FS from Symantec. Heck, even ReFS from MS. However, with all the cash they have, I am happy they are putting out something. I wouldn't expect it to be a default filesystem until 2017, perhaps 2018, as filesystems are something never to be undertaken lightly, but long term, it is crucial to macOS's usefulness, especially as SSDs get larger, and TRIM support is more critical to performance.
I don't know anything about Veritas FS, but I have looked long and hard at ZFS; the continued major issues with ZFS on macOS, with Finder integration and more, while getting SLOWLY better (and which would no doubt get better with Apple's access and engineering), still signal that ZFS just isn't ready for Prime-Time on OS X, much to my chagrin.
Re: (Score:2)
> but the continued major issues with ZFS on macOS, with Finder integration and more,
You wouldn't happen to have more detailed information by chance please?
> much to my chagrin.
I too lament that fact that ZFS wasn't chosen. I guess this is the typical NIH here by Apple. :-/
Re: (Score:2)
> but the continued major issues with ZFS on macOS, with Finder integration and more,
You wouldn't happen to have more detailed information by chance please?
> much to my chagrin.
I too lament that fact that ZFS wasn't chosen. I guess this is the typical NIH here by Apple. :-/
Got nothin' to do with NIH. Apple uses LOTS of industry standards and Open Source projects that ALL fall under the "Not Invented Here" category.
Dig down into the Forums on the OpenZFSOnOSX Site [openzfsonosx.org] to see what issues people are still having these days with ZFS (OpenZFS) under OS X. Last I looked was early this year.
Re: (Score:2)
ZFS is not recommended for non-ECC RAM (Score:5, Informative)
http://research.cs.wisc.edu/ad... [wisc.edu]
Re:ZFS is not recommended for non-ECC RAM (Score:4, Insightful)
When checksums fail ZFS will assume the problem is on disk and attempt to "repair" the data on disk. This automatic repair is a great feature, when your RAM can be trusted.
Repair by attempting to correct the data from a redundant location, if one exists, and if its checksum passes. The bit flips required to make such a process actually damage your data seem quite convoluted - it'd have to be multiple errors in different locations happening at just the right times: one in the read before the checksum is checked, one in the data to repair it after the checksum has been verified but before it's written back.
"By default, access time updates are enabled in ZFS; therefore, a read-only workload will update the access time of any file accessed. Consequently, when the structure containing the access time (znode) goes inactive (or when there is another workload that updates the znode), ZFS writes the block holding the znode to disk and updates and writes all its parental blocks. Therefore, any corruption to these blocks will become permanent after the flush caused by the access time update"
In-memory filesystem metadata can get damaged and end up in on-disk structures regardless of which one you use, and it's far from the only fs with atime updates. Is ZFS really significantly more vulnerable to this by comparison, or is it just that ZFS won't defend you against it?
My quick skim of the paper suggests the latter. They don't seem to condemn ZFS for being worse; rather, they show it suffers the same sort of problems they find ext2 suffers from in the face of memory errors, while demonstrating it's great at picking up errors from the disk/IO controller/etc.
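For what it's worth, the verify-then-repair loop described above fits in a few lines. This is a toy in-memory two-way mirror (real ZFS does this per record against redundant vdev copies; the names here are made up):

```python
import hashlib

def store(data):
    """Keep two mirrored copies of a block, each paired with its checksum."""
    h = hashlib.sha256(data).digest()
    return [[h, bytearray(data)], [h, bytearray(data)]]

def read(mirror):
    """Return the first copy whose checksum verifies; heal any bad sibling."""
    for i, (h, data) in enumerate(mirror):
        if hashlib.sha256(bytes(data)).digest() == h:
            for j, (h2, d2) in enumerate(mirror):
                if j != i and hashlib.sha256(bytes(d2)).digest() != h2:
                    mirror[j][1] = bytearray(data)   # self-heal from good copy
            return bytes(data)
    raise IOError("all copies corrupt")

m = store(b"payload")
m[0][1][0] ^= 0xFF            # flip a bit in copy 0 ("disk error")
assert read(m) == b"payload"  # read succeeds from copy 1 and repairs copy 0
```

The failure mode the paper worries about is a bit flip in RAM *after* verification, which this scheme (like any on-disk checksum scheme) cannot see.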
Re: (Score:3)
For you it may be a low risk. For Apple its not. Apple will be shipping millions of machines.
And these machines are already vulnerable just to single bit errors anywhere both in the IO path and in memory.
The repair-of-death you describe involves multiple errors in the memory path occurring in a specific order and in relatively specific places, that are already dangerous to existing filesystems.
The atime update metadata corruption you quote is similarly already a problem with existing filesystems. In fact it's more of a problem for these filesystems because they're overwriting existing metadata, no
Re:NIH? (Score:5, Insightful)
Licensing. Apple did flirt with ZFS, but for some reason, and I would guess it was license issues, they decided not to go that route. Using btrfs would bring GPL/BSD licensing issues. So, Apple either had to license something like ReFS from MS, or roll their own.
Re:NIH? (Score:5, Insightful)
Re: (Score:2)
Licensing. Apple did flirt with ZFS, but for some reason, and I would guess it was license issues, they decided not to go that route. Using btrfs would bring GPL/BSD licensing issues. So, Apple either had to license something like ReFS from MS, or roll their own.
Exactly. And it WAS Licensing in the case of ZFS. They didn't want to have their Filesystem beholden to the likes of Oracle (and I for one, don't blame them a bit!).
Re: (Score:2)
Re: (Score:3)
It means you can guarantee that each file has a unique 64-bit timestamp by simply assigning a sequential nanosecond timestamp if two files get written at once. It also gives an opportunity to work around UNIX's year 2038 problem (2^31 UTC seconds since 1970) and Apple's year 2040 problem (2^32 UTC seconds since 1904), pushing it out to at least 2262 (2^63 UTC nanoseconds since 1970).
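That 2262 figure is a quick arithmetic check away:

```python
from datetime import datetime, timedelta, timezone

# A signed 64-bit count of nanoseconds since the 1970 epoch overflows in 2262.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
limit = epoch + timedelta(microseconds=(2**63) // 1000)
print(limit.year)  # → 2262
```

So a 64-bit nanosecond timestamp trades range for resolution but still clears the 2038/2040 problems by more than two centuries.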
Re: (Score:2)
Re: (Score:2)
It will be opensourced in 2017, when it's ready for the world and settled.