
ZFS Shows Up in New Leopard Build 351

Udo Schmitz writes "As a follow-up to rumours from May this year, World of Apple has a screenshot showing Sun's Zettabyte File System in 'the most recent build of Mac OS X 10.5 Leopard'. Though I still wonder: If it is not meant to replace HFS+, could there be any other reasons to support ZFS?"
  • Zettabyte? (Score:3, Informative)

    by bigtomrodney ( 993427 ) * on Monday December 18, 2006 @10:04AM (#17285270)
    Isn't the term 'Zettabyte File System' actually inaccurate now? I thought they dropped that and ZFS now only remains as a pseudo initialism [wikipedia.org]
  • Re:ZFS would be cool (Score:2, Informative)

    by Anonymous Coward on Monday December 18, 2006 @10:17AM (#17285388)
    That would be nice, but since ZFS can't be used as a boot partition even in Solaris (they'll fix it), it's better to let it stabilize for a couple of releases (ZFS is a young FS even in Solaris, after all) and then switch.
  • by Boghog ( 910236 ) on Monday December 18, 2006 @10:23AM (#17285466)
    ZFS = snapshots = Time Machine
    http://arstechnica.com/staff/fatbits.ars/2006/8/15/4995/ [arstechnica.com]
  • Obligatory (Score:3, Informative)

    by value_added ( 719364 ) on Monday December 18, 2006 @10:25AM (#17285498)
    A clicky to the Wiki article on ZFS [wikipedia.org].
  • Re:copy-on-write (Score:3, Informative)

    by MotownAvi ( 204916 ) <.avi. .at. .drissman.com.> on Monday December 18, 2006 @10:27AM (#17285540) Homepage
    Why? In today's world, writing to an mmap-ed file most certainly doesn't hit the disk for each write. Instead, a block of memory from the buffer cache is used to hold the changes. The only difference is that instead of being backed (VM-wise) by the swap file, the block is backed by the mmap-ed file.

    There's no real change here for ZFS, and it's unlikely that anything at the memory cache level even knows about the copy-on-write-ness of ZFS (or even cares).
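The buffer-cache behaviour described above can be sketched with Python's mmap module (a toy illustration of writes landing in cached pages, not a claim about any particular kernel's implementation):

```python
import mmap
import os
import tempfile

# Create a small file to map.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello, filesystem")

# Writes to the mapping modify page-cache-backed memory first; the OS
# (or an explicit flush) writes the dirty pages out to the file later.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        m[0:5] = b"HELLO"  # hits cached pages, not the disk directly
        m.flush()          # ask the OS to write the dirty pages out

with open(path, "rb") as f:
    data = f.read()
print(data)  # b'HELLO, filesystem'
os.unlink(path)
```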

  • by 0racle ( 667029 ) on Monday December 18, 2006 @10:28AM (#17285554)
    Since we're nitpicking:
    The letter "Z" is properly pronounced "Zee" in the USA and Iraq (after 2003)
    That would correctly read "The letter "Z" is improperly pronounced "Zee" in the USA and Iraq (after 2003)"
  • Re:ZFS would be cool (Score:2, Informative)

    by furry_wookie ( 8361 ) on Monday December 18, 2006 @10:46AM (#17285768)
    Actually, you CAN use ZFS for everything except boot...so all you need is a tiny little grub boot partition and you're golden. This is a tried and true method of booting NIXen with other filesystem formats.
  • by pesc ( 147035 ) on Monday December 18, 2006 @11:09AM (#17286080)
    I've never found plain-Jane posix permissions to be all that useful on anything other than the most basic of server environments. ...
    What I'd really like to see is both that kind of functionality along with NTFS's really excellent ACL permission system implemented.

    I wish you would read more about ZFS before suggesting how you could improve it by adding ACLs. It already supports them!

    http://blogs.sun.com/marks/entry/zfs_acls [sun.com]
  • by clarkcox3 ( 194009 ) <slashdot@clarkcox.com> on Monday December 18, 2006 @11:28AM (#17286306) Homepage
    Actually 'Zed' is probably closer to the source from which it comes - the Greek letter 'zeta'
    ... and "Bed" is closer than "Bee" to "Beta", yet everyone says "Bee". At least the American pronunciation of the alphabet is internally consistent. ;)
  • by Anonymous Coward on Monday December 18, 2006 @11:43AM (#17286538)
    Umm, last time I looked checksumming merely told you if there WAS data corruption, it doesn't prevent it.

    If you have a mirror (RAID 1), you have two copies of the data. When a process requests a read, it gets a copy of the data and does a checksum. If the checksum fails ZFS is smart enough to know that since there's a mirror there's a second copy of the data. It then goes to the second copy and checksums that. If it passes the data is passed to the process, and the bad data is updated with the good data. If the second copy fails its checksum then ZFS returns an error to the process and you've lost your data (which is no worse than what most file systems do today since they don't have checksumming).

    If your data is on a ZFS RAIDZ volume (~ RAID 5), then if the checksum fails you can rebuild the data from the parity information.

    Most (all?) file systems available today don't have built-in checksumming, so when you request/get data you actually don't know if it's valid. You simply assume that everything read from the disk is okay.
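The self-healing read described in this comment can be sketched in a few lines of Python (a toy model using SHA-256 for the checksum; in real ZFS the checksum lives in the parent block pointer, not alongside the data):

```python
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

# Two mirrored copies of one block, plus the checksum recorded at write time.
good = b"important data"
stored_sum = checksum(good)
mirror = [bytearray(good), bytearray(good)]

# Simulate silent corruption on the first copy.
mirror[0][0] ^= 0xFF

def read_with_self_heal(mirror, stored_sum):
    """Return the first copy that verifies, repairing the ones that don't."""
    for copy in mirror:
        if checksum(bytes(copy)) == stored_sum:
            for other in mirror:
                other[:] = copy  # rewrite bad copies with the good data
            return bytes(copy)
    raise IOError("all copies failed checksum")

data = read_with_self_heal(mirror, stored_sum)
print(data)                    # b'important data'
print(mirror[0] == mirror[1])  # True: the corrupted copy was repaired
```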
  • Re:copy-on-write (Score:3, Informative)

    by multipartmixed ( 163409 ) on Monday December 18, 2006 @11:44AM (#17286548) Homepage

    Similar issues exist, and are handled without problems, when mmap()ing files over NFS. You cannot update just a few bytes with NFS; you have to write the whole disk block out.

    I'm fairly confident that the current "standard" way to implement mmap at the moment is to update the pages, mark them dirty, and let the VM subsystem write them to disk.

    I haven't had to look at mmap's implementation in a long time, though... but IIRC Rich Teer and/or Adrian Cockcroft had good articles about it a few years back.

    Obviously, I arrive at this problem space with a huge Solaris influence. But this is a well-understood problem, and I don't think Sun's implementation is particularly revolutionary.
  • by Circuit Breaker ( 114482 ) on Monday December 18, 2006 @12:00PM (#17286814)
    A deadline scheduler (à la ZFS) is wonderful when multitasking disk-heavy apps. That does not happen too often on a laptop (or a desktop, for that matter), but I've had Windows (and on rare occasions, even Linux) work horribly under such load. ZFS's "worst case behaviour" is supposed to be significantly better than any other system in use today.

    NTFS's ACL system is horrible. While it has a lot of descriptive power, it's a pain to manage, the result being that it is almost never used. The old Unix model, while simple, is easy to manage, and as a result is often set up reasonably. Novell's "Trustees" model works much better than either, but for some reason it wasn't adopted by others.

    NTFS is slow and inefficient, fragments horribly, and lacks fundamental features such as proper symlinks (its hardlinks are limited to files, with only junction points for directories). It has a reasonable journal implementation, and it supported large files before other systems did, but it's very outdated and does not compare favourably with any of the modern high performance file systems.
  • Re:What a moron (Score:5, Informative)

    by iPaul ( 559200 ) on Monday December 18, 2006 @12:33PM (#17287370) Homepage
    You both miss the point of HFS+ and ZFS. In Solaris ZFS has not replaced UFS. ZFS is an elegant way to manage large amounts of storage tied together with inexpensive and simple SATA drives. If you have one disk in your Mac, ZFS probably will not be your choice. HFS+ will work very well and be very easy to manage. A file server with 3 or 4 750 GB drives, however, might be cut up so that part of the storage is mirrored for safety, limited for certain uses, and spanned over drives for size. For example, 3 750's could be divided into 1 TB unmirrored storage, 250 GB mirrored, a temp area of up to 100 GB and the rest (650+ GB depending on temp area usage) held in reserve. In addition ZFS does quite a bit of error checking on the data to avoid any possible corruption during reads. However, it will never replace HFS+ on an iMac for your average user.
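The capacity split in that example checks out as back-of-the-envelope arithmetic (ignoring metadata overhead and real allocation granularity, and treating the mirrored portion as consuming double its usable size in raw disk):

```python
# Rough capacity arithmetic for the 3 x 750 GB example above (sizes in GB).
raw = 3 * 750        # 2250 GB of raw disk

unmirrored = 1000    # 1 TB plain storage: consumes 1000 GB raw
mirrored = 250       # 250 GB mirrored: consumes 2 x 250 = 500 GB raw
temp_quota = 100     # temp area capped at 100 GB

reserve = raw - unmirrored - 2 * mirrored - temp_quota
print(reserve)  # 650 -- matching the "650+ GB ... held in reserve" above
```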
  • by shawnce ( 146129 ) on Monday December 18, 2006 @12:39PM (#17287478) Homepage
    I doubt Apple is going to require ZFS for Time Machine anytime soon. I fully expect HFS+ to continue to be the default file system for Mac OS X for a while (years). I believe ZFS support in Mac OS X is solely for the high-end / server space of the customer spectrum.
  • Re:Secure Delete? (Score:1, Informative)

    by Anonymous Coward on Monday December 18, 2006 @01:01PM (#17287906)
    think it would pose a problem for secure deletes.

    Secure deletes are already a 'future version' feature.

    http://www.serverwatch.com/tutorials/article.php/3612066 [serverwatch.com]

    Not only secure delete but more general crypto support is planned

    http://opensolaris.org/os/community/os_user_groups/nlosug/nlosug-zfs-lofi.pdf [opensolaris.org] [ a pdf presentation on crypto features]
    http://www.opensolaris.org/os/community/os_user_groups/sgosug/10-zfs.pdf [opensolaris.org] [ more general pdf presentation on ZFS ]

    ZFS is relatively new (in comparison to most of the commonly used file systems). It isn't really "done" yet by any means.

  • Re:copy-on-write (Score:2, Informative)

    by Anonymous Coward on Monday December 18, 2006 @02:06PM (#17289050)
    Er.. a copy of the modified data is saved to a new location, as opposed to overwriting the original data, which is exactly what happens with COW memory pages in VMMs too. Sounds more like NetApp just likes using a slightly different term to make themselves sound different :P
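The write-to-a-new-location behaviour being argued about here can be shown with a toy copy-on-write store in Python (an illustration of the principle, not of ZFS's actual on-disk layout):

```python
# A toy copy-on-write store: writes never overwrite a block in place.
blocks = {}     # block address -> data
next_addr = 0

def write_block(data: bytes) -> int:
    """Allocate a fresh address for every write (copy-on-write)."""
    global next_addr
    addr = next_addr
    next_addr += 1
    blocks[addr] = data
    return addr

# A "file" is just a pointer to its current block; a snapshot keeps
# the old pointer, which is why COW makes snapshots nearly free.
file_ptr = write_block(b"version 1")
snapshot = file_ptr

file_ptr = write_block(b"version 2")  # the modification goes to a new location

print(blocks[file_ptr])  # b'version 2'
print(blocks[snapshot])  # b'version 1' -- the old data was never touched
```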
  • Re:copy-on-write (Score:3, Informative)

    by caseih ( 160668 ) on Monday December 18, 2006 @02:11PM (#17289126)
    ZFS currently wouldn't work very well for flash storage systems under a certain size because of initial overhead. ZFS requires each device to be at least 64 MB in order to be added to a pool. Also the minimal overhead of ZFS is 32 MB. In other words if you take a 64 MB disk, format it to ZFS, you'll only have 32 MB of space available. As you add devices to the pool, this overhead grows, but at a pretty small rate.
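A back-of-the-envelope sketch of the numbers in that comment (treating the overhead as a flat 32 MB per device, which is only an approximation of what the poster describes):

```python
# The per-device overhead described above (sizes in MB).
MIN_DEVICE = 64  # smallest device ZFS will accept into a pool
OVERHEAD = 32    # per-device overhead claimed in the comment

def usable(device_mb: int) -> int:
    """Usable space left on one device after the claimed ZFS overhead."""
    if device_mb < MIN_DEVICE:
        raise ValueError("device too small for a ZFS pool")
    return device_mb - OVERHEAD

print(usable(64))    # 32 -- half of a minimum-size device is overhead
print(usable(1024))  # 992 -- the overhead fraction shrinks as devices grow
```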
  • by Sparohok ( 318277 ) on Monday December 18, 2006 @02:18PM (#17289234)
    Hard drives silently losing data is a problem solved by RAID.

    That is profoundly wrong. Vanilla RAID will not discover and cannot automatically correct silent data loss. The reason is that RAID has no way of knowing which data is correct. For example, if two mirrored copies disagree on the contents of a block, the data is unrecoverable without manual intervention or external knowledge. Furthermore, in normal operation your RAID subsystem will simply read data from whichever drive is idle at the time the read request comes in; it does not ordinarily compare the two mirrors. The data will remain corrupted until the user notices a problem, at which point they have no practical recourse. Essentially the same problem occurs with parity RAID.

    There is no dedicated hardware in your system that provides the end to end data integrity that ZFS does. I honestly suggest you learn more about it before airing your opinions. Here is a start:

    http://blogs.sun.com/bonwick/entry/zfs_end_to_end_data [sun.com]
  • by profplump ( 309017 ) <zach-slashjunk@kotlarek.com> on Monday December 18, 2006 @02:39PM (#17289572)
    If two mirrored copies disagree on the contents of a block, the data is unrecoverable without manual intervention or external knowledge.

    Or, you know, a checksum. Or more than one level of redundancy.

    I agree that RAID-1 cannot, by itself, correctly recover from error-free reads of mis-matched data. But RAID 5 and 6 are both capable of verifying the primary data source against the parity data and transparently correcting errors that occur on less than the critical number of disks. In the common configuration this is only done when a hardware-level error is detected to keep things fast, but it's quite possible to verify every read if your system is so configured. Multi-layer RAID also provides this same sort of protection.
  • by tbuskey ( 135499 ) on Monday December 18, 2006 @03:34PM (#17290340) Journal

    [ZFS] will be implemented for Linux pretty quickly.

    *sigh*. I wish. ZFS is being implemented on FUSE. This automatically creates limitations in performance and function (no root ZFS). IMO ZFS on FUSE will be a non-starter in production.

    I don't think we'll see ZFS in the kernel proper either, given the history of incorporating XFS and Reiser4. Along the same lines, DTrace will probably never make it in. It's being cloned in the form of SystemTap.

    Meanwhile, FreeBSD has been porting ZFS and DTrace. Mac OS X is (partly) based on FreeBSD, and DTrace has shown up in Mac OS X.

    I agree that ZFS is a good reason to convert a file server to Solaris from Linux. FreeBSD may become a good candidate also. I'm a Solaris admin and haven't done much with FreeBSD so I'll lean that way. I'd love to see ZFS in the Linux kernel, but I'm not waiting for it.

    Perhaps the way to go is Solaris x86 with ZFS file server then a BrandZ zone running Linux to provide other functions?

  • by jbolden ( 176878 ) on Monday December 18, 2006 @03:44PM (#17290480) Homepage
    does some funky heuristic happen?

    Yes, I used something similar to ZFS for mass document storage a few years back. You do a complex checksum on the block level. Any two blocks with the same sum are the same. Each unique block is only stored once, though multiple files might link to it. You're right the file system doesn't know why you are using the same blocks over and over, but it doesn't care.

    if i've got a bunch of files that take up 700mb on a ZFS device and try to back up to a (Joliet) CD will i get a message telling me that the CD doesn't have room?

    Assuming you have repetitive blocks, yes.
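The content-addressed storage scheme this poster describes can be sketched as a toy block store in Python (hash collisions are ignored, as the poster's "any two blocks with the same sum are the same" assumption does; the 4-byte block size is purely illustrative):

```python
import hashlib

# A toy block store that keeps one copy of each unique block.
store = {}  # checksum -> block data

def store_file(data: bytes, block_size: int = 4):
    """Split into blocks, store each unique block once, return checksum refs."""
    refs = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks are stored once
        refs.append(digest)
    return refs

file1 = store_file(b"AAAABBBBAAAA")  # the AAAA block appears twice
file2 = store_file(b"AAAACCCC")      # shares the AAAA block with file1

print(len(file1) + len(file2))  # 5 block references in total...
print(len(store))               # ...but only 3 unique blocks stored
```

Which also illustrates the CD answer above: the same 700 MB of references can expand well past 700 MB once written to a filesystem that stores every block verbatim.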
  • by toby ( 759 ) * on Monday December 18, 2006 @04:24PM (#17291104) Homepage Journal

    Over past months, I've read a lot of people commenting on ZFS who have no idea what it is. What it is, is the next generation of filesystems, not a "tweak" of current fs technology. It just happens to "look like" an ordinary POSIX fs, from a distance (if you ignore the administration/pool stuff...) But inside, it's something new under the Sun, folks.

    RAID experts don't grok it, because it does things RAID can't do (end-to-end).

    Devotees of ext2fs, reiserfs (yay!), NTFS (LOL!), or HFS+ don't grok it, because none of those filesystems do what ZFS does.

    Read about it before you write it off as old wine in a new bottle. To ask the question, "Does OS X need a new filesystem?" is a perfect example of missing the point. Once you've looked at what ZFS really brings to the table, you'll see why it's an inevitable future, sooner or later, and you'll stop looking foolish.

    Some links I posted this week: [google.com]

    - http://www.osnews.com/story.php/16739/Screenshot-ZFS-in-Leopard [osnews.com] - http://mac4ever.com/news/27485/zettabyte_sur_leopard/ [mac4ever.com] (older rumour http://www.osnews.com/story.php?news_id=14473 [osnews.com])

    For OS X people wondering why the fuss about ZFS - summaries include: - http://www.sun.com/2004-0914/feature/ [sun.com] - http://www.sun.com/bigadmin/features/articles/zfs_part1.scalable.html [sun.com]

    "Why ZFS for home": - http://uadmin.blogspot.com/2006/05/why-zfs-for-home.html [blogspot.com]

    "Here are ten reasons why you'll want to reformat all of your systems and use ZFS.": http://www.tech-recipes.com/rx/1446/zfs_ten_reasons_to_reformat_your_ [tech-recipes.com]...

    And some more technical explanations from the Chief Engineer: - http://blogs.sun.com/bonwick/entry/zfs_end_to_end_data [sun.com] - http://blogs.sun.com/bonwick/entry/smokin_mirrors [sun.com]
