
ZFS Shows Up in New Leopard Build

Udo Schmitz writes "As a follow-up to rumours from May this year, World of Apple has a screenshot showing Sun's Zettabyte File System in 'the most recent build of Mac OS X 10.5 Leopard'. Though I still wonder: if it is not meant to replace HFS+, could there be any other reasons to support ZFS?"
  • Exciting! (Score:4, Interesting)

    by statusbar ( 314703 ) on Monday December 18, 2006 @10:06AM (#17285288) Homepage Journal
    Now that Vista is finalized, expect Apple to show more and more of the 'secret' features of leopard!

  • by ShyGuy91284 ( 701108 ) on Monday December 18, 2006 @10:24AM (#17285474)
    I will soon be converting my Linux server to Solaris just for ZFS. Although ZFS may not be terribly useful on a normal desktop, on a server it's very powerful.... The idea of parity data actually being used actively to ensure data isn't corrupted is brilliant imho. So is the idea of on-the-fly recovery (I remember a video of some guy writing 30 megs of junk to a partition using dd, ZFS detecting it, and repairing it). *ends rant since all this can be read up about online*
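    The dd demo described above can be sketched with the standard ZFS admin commands (pool and device names here are purely illustrative, and this obviously needs a ZFS-capable system with spare disks):

    ```shell
    # Create a mirrored pool from two devices (names are hypothetical)
    zpool create tank mirror c1t0d0 c1t1d0

    # Simulate corruption: overwrite ~30 MB of one side with junk
    dd if=/dev/urandom of=/dev/rdsk/c1t1d0 bs=1M count=30 conv=notrunc

    # Ask ZFS to verify every block against its checksum; bad blocks
    # on the damaged half are rewritten from the good mirror copy
    zpool scrub tank
    zpool status -v tank   # reports checksum errors found and repaired
    ```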
  • For some reason... (Score:2, Interesting)

    by uohcicds ( 472888 ) on Monday December 18, 2006 @10:24AM (#17285478) Homepage
    ...the words "Time Machine" are jumping up and down in front of my face trying to attract my attention. I can't think why that might be.
  • by TheRaven64 ( 641858 ) on Monday December 18, 2006 @10:25AM (#17285504) Journal
    A few years ago, I sat down and worked out exactly what I thought a filesystem should do, and how I would implement it. At the time no filesystem came close. Then Sun released ZFS. Real documentation on it is hard to find (behind the marketing hype), but when I did track it down I discovered two things:
    1. They had implemented everything I thought they should, and
    2. That only accounted for about 40% of the features of ZFS.
    Calling it the last word in filesystems might be hyperbole, but I expect ZFS to last a good 10-20 years, which is quite respectable for a filesystem, and I wouldn't be surprised if it lasted longer. Is it a replacement for HFS+? Not yet.

    HFS+ is a very nice filesystem for single-user systems with a single disk. It implements journalling, has reasonable performance, and has good metadata support. For the average user at the moment, the only real advantage of ZFS would be snapshots, and those are not too difficult to implement for other filesystems.

    ZFS, however, is much better when you have multiple physical disks. At the moment, only the top-end Macs have more than one disk. This is likely to change in two ways:

    1. Cheap flash,
    2. Network storage
    For a home user, ZFS could handle backups trivially by plugging in a large flash drive and adding it to the pool. I suspect this will be one mechanism Time Machine will use. Due to the way ZFS works, you can just mirror a part of the directory tree (e.g. /Users/aUser) onto the external disk. With a big external drive, you could mirror the entire disk onto it and also save snapshots (another Time Machine feature...). The same could be done with network storage. With the current price of hard drives, I wouldn't be surprised if .Mac started offering 10-20GB of storage space for remote backups using this mechanism (take a look at the NFS4 integration in ZFS to see how this could be done).

    ZFS is not needed as a replacement for HFS+ in 2007, but it probably will be in 2008-9. ZFS is a 128-bit filesystem, which means it is designed to last for a long time. We will probably never need a 128-bit filesystem (unless we actually want to build hard drives the size of planets with single-atom sectors), but we will need a 65-bit filesystem once we get to around 10 Exabytes. This won't happen with single drives for a while, but it will with RAID arrays.
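    The "plug in a drive and add it to the pool" scenario the parent describes maps onto a couple of standard zpool commands (device and pool names are made up for illustration):

    ```shell
    # Attach a second device to an existing single-disk pool,
    # turning it into a mirror that ZFS resilvers automatically
    zpool attach tank c1t0d0 c2t0d0

    # Or grow total capacity by striping in another mirrored pair
    zpool add tank mirror c3t0d0 c3t1d0
    ```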

  • by FunWithHeadlines ( 644929 ) on Monday December 18, 2006 @10:26AM (#17285522) Homepage
    See this Ars Technica article where John Siracusa said back in August:
    "For Mac geeks of a certain persuasion, the first mention of a soon-to-be-revealed feature of Leopard during the WWDC keynote set off a mental chain-reaction. That feature was Time Machine, and the name alone was enough to cause one particular phrase to hammer in the mind of many people, including me: "New file system in Leopard!" It was even a bingo square. In fact, it was my personal favorite bingo square, and the one that I most looked forward to marking.

    But let's back up a bit. Why should the mere name "Time Machine" scream "new file system" to anyone? And why the excitement about a new file system in the first place? What's wrong with HFS+, Mac OS X's current file system? It's got journaling. It supports arbitrarily extensible metadata. It can even be case-sensitive to satisfy the Unix geeks. Does Mac OS X really need a new file system?

    In a word, yes. HFS was a state-of-the-art personal computer file system when it was first released...twenty-one years ago. HFS+ is only eight years old, but it's built on many of the design decisions of HFS. Progress marches on. Today, there are new capabilities that the best modern file systems have, but that HFS+, even with all of its recent additions, does not. Here's a short list.

    • Efficient storage and handling of very small files.
    • Logical volume management through a pooled storage model.
    • Improved data integrity using checksums on all data.
    • Snapshots.

    So it's about the snapshot ability of ZFS, and that's exactly what will be needed for Time Machine.
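    For the curious, the snapshot workflow a Time Machine-like feature could build on looks roughly like this (dataset and snapshot names invented for the example):

    ```shell
    # Take a read-only, nearly-free snapshot of a user's home dataset
    zfs snapshot tank/Users/aUser@monday

    # Old file versions stay browsable under the hidden .zfs directory
    ls /tank/Users/aUser/.zfs/snapshot/monday/

    # Roll the live filesystem back to that point in time if needed
    zfs rollback tank/Users/aUser@monday
    ```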

  • Re:copy-on-write (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Monday December 18, 2006 @10:29AM (#17285566) Journal
    No. Mmap lives above the filesystem layer, unless you are doing mmap on the block device, in which case you should realise that not everyone works for Oracle...

    Mmap simply maps pages of a disk file into memory. If the disk file changes its physical location, then the mapping is updated. When you call mmap, you give it a disk file, an offset, and an extent. It is up to the VFS layer to translate this into physical mappings. LFS has the same issues, and these were solved well over a decade ago.

    If you invoke mmap with MAP_PRIVATE, this actually makes it easier; if someone else updates the file then you just keep the existing mapping.
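    A self-contained C sketch (not from the thread) of the MAP_PRIVATE behaviour the parent describes: writes through a private mapping land in your own copy-on-write pages and never reach the underlying file.

    ```c
    /* Demonstrates that a MAP_PRIVATE mapping is copy-on-write:
     * modifying the mapping does not modify the file, which is why
     * a private mapping can simply keep its existing pages even if
     * the file changes underneath it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        char path[] = "/tmp/mmapXXXXXX";
        int fd = mkstemp(path);
        write(fd, "hello", 5);

        char *map = mmap(NULL, 5, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
        map[0] = 'H';             /* COW: changes our private copy only */

        char buf[6] = {0};
        pread(fd, buf, 5, 0);     /* re-read the file itself */
        printf("mapping: %.5s, file: %s\n", map, buf);

        munmap(map, 5);
        close(fd);
        unlink(path);
        return 0;
    }
    ```

    The file still reads back "hello" while the mapping shows "Hello".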

  • ZFS + Timemachine (Score:3, Interesting)

    by jbolden ( 176878 ) on Monday December 18, 2006 @10:31AM (#17285588) Homepage
    With ZFS we might be able to get some very powerful backup features into OS X. Most binary files don't change most of their content, and ZFS makes it possible to do meaningful differential backups on large binary files. So, for example, 200 versions of a Word doc with sounds and pictures that got revised over 6 months get stored in maybe 3x the space of the last revision. Emails with the same attachments get stored in just a few k rather than taking a meg each.... If Apple has this all working together by 10.5, then Time Machine will work far, far better than people currently expect it to. A 50 GB drive could be backing up a terabyte's worth of files.
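    The space math above falls out of snapshots sharing unchanged blocks; a rough sketch of how you would see that in practice (dataset names are hypothetical):

    ```shell
    # Snapshot before and after editing a few pages of a large document
    zfs snapshot tank/docs@rev1
    # ...edit the file; only the changed blocks get new storage...
    zfs snapshot tank/docs@rev2

    # USED for each snapshot is only the data unique to that revision,
    # not a full copy of the file
    zfs list -t snapshot -o name,used
    ```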

  • by masklinn ( 823351 ) on Monday December 18, 2006 @10:33AM (#17285598)

    if it is not meant to replace HFS+, could there be any other reasons to support ZFS?

    The answer is that it probably is meant to replace HFS+, but since ZFS is not bootable yet (including on Solaris 10), Apple can take the time to introduce ZFS, build tools for easier management, and let people get familiar with the FS before they have to drop HFS+.

    HFS's lifetime has already stretched far beyond what it should have; it's time for Apple to think about its next-generation FS, and ZFS is an extremely promising FS with heaps of amazing features that Apple has already started to integrate into its UIs with Leopard (Time Machine + ZFS snapshots, anyone?).

    ZFS also shows strong promises as both a home and a server FS.

  • They already have UFS and don't make it really usable, even after making a big deal about it being updated to the latest version from FreeBSD in Panther. It's a shame, too, because while HFS+ has a lot of nifty features, all of them could be emulated on top of UFS or ZFS or any other file system (by putting the hooks for applications like Spotlight in the vnode layer rather than the file system; the vnode layer already has most of the hooks Spotlight needs), and it falls far behind UFS in terms of reliability.

    In fact HFS+ is *so* bad that if it wasn't for a couple of apps that absolutely freak out if they don't have their pet un-emulated feature, I would have gone to UFS long since... even if I lost Spotlight completely. Until my Mac I had never run into a file system that wasn't so badly damaged as to be unbootable that couldn't be repaired by fsck... but apparently with HFS+ just running it "too full" can trash it, and I lost my system disk on my old G4 three months running because of that!

    So I wouldn't hold out any expectations of ZFS being implemented in any useful way. They already have better file systems than HFS+ and they're not using them.
  • by orgchartleafnode ( 665294 ) on Monday December 18, 2006 @11:27AM (#17286294)
    What I'd really like to see is both that kind of functionality along with NTFS's really excellent ACL permission system implemented.

    Mac OS X Server 10.4 (Tiger) already has this.

    There is a "File Services" white paper linked off of the above page, but here is the relevant marketing:

    New in Mac OS X Server v10.4 are access control lists (ACLs), providing flexible file system permissions that are fully compatible with Windows Server 2003 Active Directory environments and Windows XP clients.
  • Re:copy-on-write (Score:5, Interesting)

    by Midnight Thunder ( 17205 ) on Monday December 18, 2006 @11:29AM (#17286328) Homepage Journal

    Makes use of copy-on-write; rather than overwriting old data with new data, it writes new data to a new location and then overwrites the pointer to the old data

    Wouldn't that pose a problem for mmap?

    It may do, but like many things there are alternative approaches.

    From working on embedded hardware with flash memory, this makes me wonder whether the possible addition of ZFS is meant for flash storage. Let me explain: flash memory has a fairly limited write count relative to hard disks, so to compensate, memory is written in a circular fashion to ensure that any given sector is written as rarely as possible. In addition, from what I can tell, Apple's main selling points are low-profile computers and portables. The latter would benefit from flash storage as a means of extending battery life, even if only for certain elements, such as the OS, which is accessed far more frequently than anything else on disk. Given this, I wouldn't be surprised to see flash memory in future models of Apple portables, using ZFS, while HFS+ is still used for the hard disks.

    This is pure speculation, but I feel that it has a high probability of being near the mark.
  • by Anonymous Coward on Monday December 18, 2006 @11:35AM (#17286410)
    The X11 stuff is BS. What is Apple's X server missing? Hardware acceleration, Aqua integration, exporting? It works a heck of a lot better than Xming or other X servers on desktop systems. In fact, there was just an update about a month ago to improve stereoscopic applications.

    Apple needs X11 to get people building scientific apps on Linux and Solaris. It's actually one of the best X implementations I've used (XFree86, Irix X Server)
  • by cyclomedia ( 882859 ) on Monday December 18, 2006 @11:46AM (#17286582) Homepage Journal
    I've been wondering about this and am insanely curious: if ZFS really does intelligently copy on write, how far can it take it?

    For starters, does the FS "know" that I've just clicked "Save As" in my word processor? What about copying and pasting a file back into the same directory to make a local copy? Also, is it just within variations on the same file? If I have a particular setup exe on my system but forget, and download it again to the desktop, surely the FS has no initial way of knowing that they are one and the same; does some funky heuristic happen?

    Basically: does the OS's read/write/copy/delete functionality have to invoke copy-on-write via a FS API, or is it built in for every single sector-sized chunk that gets stuffed into the FS?

    The next question is the one in my subject: how, therefore, do you define "capacity"? If I've got a bunch of files that take up 700 MB on a ZFS device and try to back up to a (Joliet) CD, will I get a message telling me that the CD doesn't have room? I can imagine this scenario being unlikely with optimised binary data (JPEGs and MPEGs), but if I'm backing up a dev environment with autobackups (main.c, main.c.bak.001, main.c.bak.002, etc.) and manually created and dated directory tree "snapshots" (dev, dev_backup_2006-12-18, dev_backup_2006-12-01, etc.) then this could probably happen quite easily.

  • by csoto ( 220540 ) on Monday December 18, 2006 @11:47AM (#17286604)
    I can imagine that using zfs send/receive to export/import pools would be an extremely efficient/safe method of replicating data. Perhaps some sort of ".mac mirror" could work. This would make Time Machine exceptionally useful, and I'd definitely commit extra $ for .mac services (if reasonable) for this.
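    The ".mac mirror" idea above could be sketched with send/receive over ssh (hostnames, pools, and dataset names here are all hypothetical):

    ```shell
    # Replicate a snapshot to a pool on a remote backup host
    zfs snapshot tank/Users@today
    zfs send tank/Users@today | ssh backup.example.com zfs receive backup/Users

    # Later runs send only the blocks changed since the last snapshot
    zfs send -i @today tank/Users@tomorrow | \
        ssh backup.example.com zfs receive backup/Users
    ```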
  • by Anonymous Coward on Monday December 18, 2006 @11:54AM (#17286716)
    ZFS is a very nice and promising filesystem but it still lacks one very important feature: decent support for offline backups. While it is possible to make an offline backup ('zfs send ...') restoring such a thing is extremely tedious since your only option is to import the entire backup back into your ZFS pool after which you can restore the snapshot (or parts of it).

    Not sure about you but I really wouldn't like to try and restore data from the /export/home slice (where all the userdata resides) this way on my company server!
  • by Bralkein ( 685733 ) on Monday December 18, 2006 @12:01PM (#17286828)
    Yes, that's the typical Apple solution: you can sort of use it, but if you really want to use it, you have to commit to using OS X. It's not a good proposition.
    Well, I would think that if you were going to move from Linux to an OS which supports ZFS, you would move to Solaris.

    I seriously doubt there will be an independent implementation of ZFS; that work would probably go into ext5. Even if there were (or if ZFS becomes GPL compatible), I doubt it will get much traction: Linux has had more powerful file systems than ext3 for many years, and people choose not to use them. Impressive feature lists don't make a better file system.
    I agree that ext3 isn't the best thing out there for Linux, and I don't even use it myself. However, I would suggest that the reason so many people still use ext3 is that most other filesystems aren't enough better to encourage people to move away from it. Look at some of the posts under this story: you will find stories of people moving from Linux to Solaris, really just because they want the features of ZFS. There is demand for ZFS in the Linux kernel, and if it becomes a common filesystem on OS X, I predict the demand will only increase. I don't expect ext4 will satisfy this demand, either.
  • by xehonk ( 930376 ) on Monday December 18, 2006 @12:54PM (#17287760)
    I wish that was the case. I really don't want to set up my linux root partition on fuse.
    "Porting ZFS to Linux is complicated by incompatibilities between CDDL, the license its source is released under, and GPL, the license which governs the Linux kernel. To work around this problem the Google Summer of Code program is sponsoring a port of ZFS to Linux's FUSE system[10] so the filesystem will run in userspace instead. However, running a file system outside the kernel on Linux has significant perfomance impact." (from d=95110224#Platforms [])
  • Re:Secure Delete? (Score:1, Interesting)

    by Anonymous Coward on Monday December 18, 2006 @05:00PM (#17291626)
    Use encryption of the volume. Drop so-called 'secure' delete.
  • Re:ZFS would be cool (Score:1, Interesting)

    by Anonymous Coward on Monday December 18, 2006 @06:28PM (#17293140)
    From what I've heard, future versions of Sun Cluster 2.x will support ZFS. I don't know in what fashion, exactly.
  • Re:Secure Delete? (Score:2, Interesting)

    by blank axolotl ( 917736 ) on Tuesday December 19, 2006 @03:58AM (#17297594)
    Well, according to the manual page for 'shred', you can't reliably do that on many filesystems, such as:
    • log-structured or journaled filesystems, such as those supplied with AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)
    • filesystems that write redundant data and carry on even if some writes fail, such as RAID-based filesystems
    • filesystems that make snapshots, such as Network Appliance's NFS server
    • filesystems that cache in temporary locations, such as NFS version 3 clients
    • compressed filesystems

    So in other words you can't reliably delete a file on many modern filesystems anyway (unless there are more advanced programs than shred?), and ZFS is no different. I think that melting your hard drive is the suggested solution.
