Apple Introduces New File System APFS With Tons Of 'Solid' Features (apple.com) 295

On the sidelines of its Worldwide Developers Conference, Apple also quietly unveiled a new file system dubbed APFS (Apple File System). Here's how the company describes it: HFS+ and its predecessor HFS are more than 30 years old. These file systems were developed in an era of floppy disks and spinning hard drives, where file sizes were calculated in kilobytes or megabytes. Today, solid-state drives store millions of files, accounting for gigabytes or terabytes of data. There is now also a greater importance placed on keeping sensitive information secure and safe from prying eyes. A new file system is needed to meet the current needs of Apple products, and support new technologies for decades to come.

Ars Technica dived into the documentation and found that APFS comes with a range of "solid" features, including support for 64-bit inode numbering and improved granularity of object time-stamping: "APFS supports nanosecond time stamp granularity rather than the 1-second time stamp granularity in HFS+." It also supports a copy-on-write metadata scheme, which aims to ensure that file system commits and writes to the file system journal stay in sync even if "something happens during the write -- like if the system loses power." The new file system offers an improvement over Apple's previous full-disk encryption application, FileVault. It also features Snapshots (which let you throw off a read-only instance of the file system at any given point in time) and Clones. According to the documentation, APFS can create file or directory clones -- and like a proper next-generation file system, it does so instantly, rather than having to wait for data to be copied.

From the report: Also interesting is the concept of "space sharing," where multiple volumes can be created out of the same chunk of underlying physical space. This sounds at first glance a lot like enterprise-style thin provisioning, where you can do things like create four 1TB volumes on a single 1TB disk, and each volume grows as space is added to it. You can add physical storage to keep up with the volume's growth without having to resize the logical volume.

As the documentation notes, things are at an early stage, so it might take a while before APFS becomes available to general users.
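The "space sharing" idea is loosely analogous to sparse files, where a file's logical size and its physically allocated space are tracked separately. A minimal Python sketch of that idea (the path is hypothetical, and this assumes a filesystem with sparse-file support, such as ext4 or APFS; HFS+ notably lacks it):

    import os

    # A "thinly provisioned" file: 1 GiB of logical size, but (almost) no
    # physical blocks are allocated until data is actually written.
    path = "/tmp/sparse_demo.img"   # hypothetical path
    with open(path, "wb") as f:
        f.truncate(1 << 30)         # logical size: 1 GiB

    st = os.stat(path)
    print("logical size :", st.st_size)          # 1073741824 bytes
    print("physical size:", st.st_blocks * 512)  # near zero until data lands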
  • by JoeyRox ( 2711699 ) on Tuesday June 14, 2016 @09:02AM (#52314605)
    This new filesystem should become stable in about 2028.
  • Compression (Score:4, Insightful)

    by paulhar ( 652995 ) on Tuesday June 14, 2016 @09:07AM (#52314621)

    C'mon, it's 2016. Where is compression?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Because it's 2016 and disk compression isn't necessary for everyday use. You have inordinately cheap disk, and performance far outweighs the need for compression. Sure, you could find lots of value in compression... and you can get it with file compression utilities. Any compression algorithm that would give anything better than "average" couldn't be stream-oriented and would therefore likely kill performance.
      Yes, it could be done. But is it needed? Nope.

      • by paulhar ( 652995 )

        I disagree: as ZFS demonstrates, >500MB/sec to a compressed filesystem is very achievable, including random I/O access.

        File compression utilities don't work well with virtual machines. You can't just start the VM in a zip file...
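        For what it's worth, stream-oriented compression is straightforward; even Python's stdlib zlib has a streaming interface. A rough, illustrative throughput sketch (zlib level 1 here; ZFS's lz4 is considerably faster, and the numbers depend entirely on your data and CPU):

            import time, zlib

            # Feed data through a *streaming* compressor and measure throughput.
            chunk = b"some moderately repetitive log line\n" * 1000  # ~36 KB
            compressor = zlib.compressobj(1)  # level 1: favor speed

            start = time.perf_counter()
            total = 0
            for _ in range(10000):            # ~360 MB of input, streamed
                compressor.compress(chunk)
                total += len(chunk)
            compressor.flush()
            elapsed = time.perf_counter() - start
            print(f"{total / elapsed / 1e6:.0f} MB/s through a streaming compressor")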

        • No, but block-level deduplication works wonders with VMs, especially when you have multiple VMs that are all based on the same core OS image...

        • Re:Compression (Score:5, Insightful)

          by ArchieBunker ( 132337 ) on Tuesday June 14, 2016 @11:28AM (#52315875)

          What kinds of files are people generating today? Pictures and video. What kinds of files are already compressed to begin with? Pictures and video. Compression doesn't make sense unless you have massive amounts of text or database files.
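          That's easy to demonstrate: already-compressed media is statistically close to random bytes, and random bytes don't compress. A quick Python illustration (random data as a stand-in for JPEG/H.264 payloads):

              import os, zlib

              text = b"the quick brown fox jumps over the lazy dog\n" * 10000
              media = os.urandom(len(text))  # stand-in for already-compressed data

              for name, data in (("text", text), ("media-like", media)):
                  ratio = len(zlib.compress(data)) / len(data)
                  print(f"{name}: compresses to {ratio:.0%} of original size")
              # The text shrinks to a tiny fraction; the "media" barely shrinks at all.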

      • You have inordinately cheap disk

        Because of Apple's tendency to solder the SSD to the mainboard in the Mac Pro and all current MacBook laptops other than the non-Retina MBP, an upgrade requires replacing the whole computer at a substantial cost. Only external storage is "inordinately cheap" on a Mac, and not all laptop use cases make external spinning rust practical.

        Sure, you could find lots of value in compression.... and you can get it with file compression utilities.

        That's fine, so long as these utilities can let the user mount an archive read-only as a folder and thereby let other applications see the archive's contents as files in as a f

    • by mlts ( 1038732 )

      I wouldn't mind deduplication either.

      • by swb ( 14022 )

        Dedupe is more valuable than compression because you can usually find duplication even among unrelated compressed data. I have dedupe enabled on a volume with DVD ISOs and see ~20% compression.

        We had a laugh at work believing that the dedupe was due to plot overlap in the movies.

      • Deduplication is hugely expensive in memory (you have to keep a hash of every block on the drive in memory, since dedupe lookups that go to disk are painfully slow) and in CPU time (every new block written has to be hashed and checked against the table). I played with it on my FreeNAS box, and the memory bloat and performance hit (80-100 MB/s writes turned into 5-25 MB/s writes) were completely unacceptable.

        Deduplication makes sense when you've got multiple copies of large amounts of data (e.g. a f
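        The RAM cost is the dedup table itself; every unique block needs an entry. A toy sketch of block-level dedup in Python (illustrative only; not ZFS's actual table format, which also keeps reference counts on disk):

            import hashlib

            BLOCK_SIZE = 128 * 1024   # ZFS's default recordsize
            dedup_table = {}          # digest -> block id; this dict *is* the RAM cost

            def write_block(store: list, data: bytes) -> int:
                """Store a block, or reuse an identical block already stored."""
                digest = hashlib.sha256(data).digest()
                if digest in dedup_table:
                    return dedup_table[digest]   # duplicate: just add a reference
                store.append(data)               # new block: pay the storage cost
                dedup_table[digest] = len(store) - 1
                return dedup_table[digest]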
    • They probably concluded that SSDs are large enough already, that large files on Macs are usually compressed at the application level (video), and that adding another level of compression would only needlessly drain batteries.
    • by OzPeter ( 195038 )

      C'mon, it's 2016. Where is compression?

      Not there, because Apple is not going to copy DoubleSpace from MS-DOS.

    • Compression? Why - to compress all of those MP3 and MP4 video files? Or your TXT docs?

      I was thinking that a current issue is the crypto-ransomware stuff, and that a FS needs to version on demand. Sure, everyone is *supposed* to have backups. I don't know what the Mac world is like, but most PC folks I know do a file copy to a USB drive (if they do anything at all). I'm not talking about what smart IT folks do - referring instead to general users.

      How many people have a Time Machine? (And is that good enough?)

      • by mlts ( 1038732 )

        With ransomware on the rise, having a filesystem that can take snapshots, perhaps coupled with a version of Time Machine that works on snapshots, will help provide some mitigation. If the ransomware doesn't have root, it can't purge snapshots, although it can do mayhem in other places.

        I would say Time Machine is OK for an "oh shit" backup for bare metal restores, but I wouldn't really rely on it as my sole way to retrieve data, because I've had instances where TM backups got hopelessly corrupted. I would p

      • I back up my MacBook Pro to a FreeBSD box using ZFS with compression and deduplication (and snapshot it periodically, because if Time Machine detects that your backups are corrupted then the only option is to delete them and redo from start, and it's nice to be able to revert just one backup if the last backup broke something). With lz4 compression, the compression ratio for the ZFS filesystem that I use as a backup target is 2.08x - that's a fairly hefty saving. It's harder to measure how much dedup is sav

    • Comment removed based on user account deletion
      • by jedidiah ( 1196 )

        Putting it in the file system is standardized and transparent. Doing things at a "per application" level runs the risk of making your solution completely incompatible with anything else on your own platform.

    • Re: (Score:3, Informative)

      by macs4all ( 973270 )

      C'mon, it's 2016. Where is compression?

      Well, it has been part of HFS+ since Snow Leopard [arstechnica.com] (2009). Where have you been?

      So, I would imagine that the new FS will support it as well.

      • by paulhar ( 652995 )

        I've been using a Mac... including being an early adopter of Clusters.

        HFS+ compression isn't designed for user files, which is why there are no native tools to use it *for end users*.
        There are some hacky command line things you can do, but it's messy, can break, and is totally useless for anything that modifies the file (so, VMs, databases, and the like).

        If you're going to use that, you may as well just zip the file and unzip it before you use it.

        • I've been using a Mac... including being an early adopter of Clusters.

          HFS+ compression isn't designed for user files, which is why there are no native tools to use it *for end users*. There are some hacky command line things you can do, but it's messy, can break, and is totally useless for anything that modifies the file (so, VMs, databases, and the like).

          If you're going to use that, you may as well just zip the file and unzip it before you use it.

          Thanks! I wondered what happened to HFS+ Compression. I remember hearing about it in a WWDC Keynote, and then just forgot it existed.

          Guess I now know why...

  • Good Luck (Score:4, Insightful)

    by bill_mcgonigle ( 4333 ) * on Tuesday June 14, 2016 @09:08AM (#52314629) Homepage Journal

    It's a hard job. We're into year fifteen of ZFS and it's just starting to gain some features that make administration of it manageable by non-experts. Give it another five before you want to make it your default on a desktop for grandma. BTRFS will be along five years after that.

    If Apple can pull off something similar in a couple years, it will be a major triumph. It's too bad for everybody that Steve got bitchy at Jonathan and the community hasn't had Apple's help as a contributor for the past decade.

    • Comment removed based on user account deletion
    • It's too bad for everybody that Steve got bitchy at Jonathan and the community hasn't had Apple's help as a contributor for the past decade.

      I'm pretty sure it was ZFS' assimilation by Oracle that put the brakes on that deal.

    • by Britz ( 170620 )

      Rumors say the only reason OS X didn't go ZFS was that Jonathan Schwartz spilled the beans. Either Steve Jobs gets to make the announcement about the 'next big thing' in his big Apple presentation, or there is no 'next big thing'.

      Here is the Slashdot story from 2007:

      https://apple.slashdot.org/sto... [slashdot.org]

      • by geek ( 5680 )

        They didn't go with ZFS because it was case-sensitive, and at the time that was a major problem, not just for the OS but for all of the apps written for it. They had just migrated to Intel from PPC and didn't want to impose that on developers yet. There were also some murky licensing issues they didn't want to gamble with.

        Now APFS has the same case-sensitivity problem (for them), but they finally realized HFS+ just can't scale and continue as it is. Apple has finally realized their stubbornness is holding them back.

        W

        • You can actually set ZFS to be case-insensitive, on a per-dataset basis, with the casesensitivity=sensitive|insensitive option. Support for that was added in 2007, so I guess it was pretty new at the time.

  • by mlts ( 1038732 ) on Tuesday June 14, 2016 @09:11AM (#52314637)

    I'm glad Apple has introduced this. The snapshot API and others are not present as of yet, but Apple is now at parity with everyone else in the industry.

    APFS isn't like ZFS or btrfs; it's more like ReFS in that it still requires a logical volume manager. It would be nice if it had RAID, but that is a minor item compared to just getting rid of HFS+, which just had to be killed.

    Some features I like:

    The ability to encrypt volumes with multiple volume keys. It looks like it will be similar to Oracle's ZFS on Solaris, though the implementation may be completely different.

    Snapshots. Something like zfs send and zfs send -i will be quite useful for backups.
    Copy-on-write capability, which is useful for VMs (a sketch of this follows at the end of this comment).

    Of course, it appears that Apple will be documenting and publishing the FS's specs in 2017, which will be even more useful for compatibility.

    All in all, even though there is no RAID 5/RAID-Z or LVM replacement, this is a heck of a lot better than what OS X/macOS has now.
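    On the copy-on-write clones mentioned above: macOS exposes APFS clones through the clonefile(2) syscall. A sketch of invoking it from Python via ctypes (the file names are hypothetical, and the call fails with ENOTSUP on non-APFS volumes):

        import ctypes, ctypes.util, os

        libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

        def clone(src: str, dst: str) -> None:
            # int clonefile(const char *src, const char *dst, int flags);
            # dst appears instantly; blocks are shared until either copy changes.
            if libc.clonefile(src.encode(), dst.encode(), 0) != 0:
                err = ctypes.get_errno()
                raise OSError(err, os.strerror(err), src)

        clone("big_vm_image.vmdk", "big_vm_image.clone.vmdk")  # hypothetical files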

    • Aye. Whereas almost all of Microsoft's filesystem advances are hidden in shadow copies and inaccessible system folders, or in enterprise-only RAID-like features. We can't even freaking tag files and folders unless they are "media" files.
    • I don't understand why innovations like those found in BeFS (like rich metadata support) go ignored whenever someone creates a new file system. If you are going to break compatibility, you might as well add in some useful features.
  • by cyber-vandal ( 148830 ) on Tuesday June 14, 2016 @09:39AM (#52314861) Homepage

    How about letting users unplug removable media without having to eject it first, like every other OS has allowed for about a decade?

    • by iCEBaLM ( 34905 )

      Don't know what you're on about, I still have to do that shit on Windows 10 and Linux....

    • by Anonymous Coward on Tuesday June 14, 2016 @11:29AM (#52315887)

      Having occasionally yanked out removable media on OS X without properly ejecting it, I can tell you that you can do so now. But you run the same risks as on every other OS and commonly-used filesystem: things may be corrupted in the process and have to be fixed the next time you insert the drive.

      What are these "other OS" you speak of? Windows? No. It will happily corrupt files depending upon what you are doing with the drive in question at the time you yank it out. Likewise Linux and most of its filesystems. Modern journaled filesystems are likely to be able to put things back into some semblance of order in the aftermath, but if you think it is routine to be able to do this without special setup you are mistaken.

      The only thing I've noticed is that Windows will complain less frequently when you yank out a device, whereas OS X will reliably and correctly warn you that doing so is dangerous and not recommended unless you eject it in software first. In fact, OS X is better at informing you which program has files open on the device when you attempt to eject it, whereas Windows will just vaguely tell you that something is still holding up the process. Oh, and Windows "helpfully" disables write caching to slow down your pluggable devices in an attempt to diminish the likelihood you'll corrupt something. Whether you consider that truly helpful or not is debatable. It's a significant tradeoff.

    • How about letting people drive away from the gas station without first having to remove the pump nozzle from their car?

      The eject command forces the filesystem to flush any read/write buffers. It completes only when anything that's being written to the removable media or read from it has finished. So if you remove the media without first ejecting it, there's a risk that some data never finished writing and you have corrupt file(s) on the media instead of the files you think you had, or something else
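      An application can do its share of this by hand for its own files; eject covers the rest (filesystem metadata and other programs' buffers). A minimal Python sketch (the mount point is hypothetical):

          import os

          # Force one file's data out of user-space and kernel buffers before
          # the media is pulled -- the per-file portion of what "eject" does.
          with open("/Volumes/USBSTICK/report.txt", "w") as f:  # hypothetical path
              f.write("important data\n")
              f.flush()              # drain Python's user-space buffer
              os.fsync(f.fileno())   # ask the kernel to push it to the device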
      • The parent is right.

        But not only that. The flash controller could be running a background process, such as offline deduplication or data block movement for static wear levelling. These processes are *not* triggered by reads or writes from the OS, so even when you are not actively writing to the disk, simply removing it without ejecting *might* cause data corruption and data loss.

  • If I didn't know any better, it sounds like Apple might be gearing up to offer some sort of in-house virtualization, with this new filesystem laying the foundation for it.
  • .. case sensitive filenames by default? :D

    Just wondering. I know HFS+ can have case-sensitivity, but not sure if it is on by default. And some people seem to be discouraging that, based on quick googling.
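    Rather than googling, you can probe the default empirically. A small Python sketch that works on whatever filesystem holds the given directory:

        import os, tempfile, uuid

        def is_case_sensitive(directory: str) -> bool:
            """Create a lowercase file, then look for its uppercase twin."""
            name = "case_probe_" + uuid.uuid4().hex
            lower = os.path.join(directory, name)
            upper = os.path.join(directory, name.upper())
            open(lower, "w").close()
            try:
                # On a case-insensitive FS the uppercase name finds the same file.
                return not os.path.exists(upper)
            finally:
                os.remove(lower)

        print(is_case_sensitive(tempfile.gettempdir()))  # False on default HFS+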

    • Case sensitive file systems are great! Change those lower-case "L"s to upper case "i"s and watch the hilarity ensue!
    • Case sensitivity in OS X works just fine now, if you don't install any third-party software... And most software just works, especially if it's a Mac First program... But a lot of stuff that's developed cross-platform has weird inconsistent file referencing that "works just fine" in Windows and case-insensitive HFS+ but breaks once you start caring about case.
  • HFS may be thirty years old, but we still have major headaches transferring files between Macs and other machines. I truly believe that Apple would be better served if they invested in an open filesystem format.

  • Really, terabyte SSDs? Today's SSDs, in terms of storage capacity, are more like the mechanical drives of 20 years ago. Yes, data centers may have large SSDs, but users don't. Will average users benefit from this new file system, or will things like 64-bit pointers on a drive smaller than a gigabyte simply consume more of the drive for little benefit?

    Finally, aren't there already file systems available that meet whatever this new need of Apple's is, without requiring the reinvention of the wheel (or disk)? If

  • "APFS supports nanosecond time stamp granularity rather than the 1-second time stamp granularity in HFS+.

    Damn, 1-nanosecond time stamp granularity? A factor of one billion improvement in resolution; that's fairly impressive. I'm not sure it'll be of much use to a lot of people, but I'm all for greater precision/resolution in stuff like this.
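    The granularity is visible from userspace, too; Python's os.stat has exposed nanosecond-resolution fields since 3.3. A quick sketch (hypothetical path):

        import os

        path = "/tmp/ts_demo"             # hypothetical path
        open(path, "w").close()
        st = os.stat(path)
        print(st.st_mtime)      # float seconds; loses precision at the low end
        print(st.st_mtime_ns)   # integer nanoseconds; on HFS+ everything past
                                # the whole second is zero, on APFS it isn't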
