
Sun CEO Says ZFS Will Be 'the File System' for OSX

Fjan11 writes "Sun's Jonathan Schwartz has announced that Apple will be making ZFS 'the file system' in Mac OS X 10.5 Leopard. It's possible that Leopard's Time Machine feature will require ZFS to run, because ZFS has backup and snapshot support built right into the filesystem, as well as a host of other features. 'Rumors of Apple's interest in ZFS began in April 2006, when an OpenSolaris mailing list revealed that Apple had contacted Sun regarding porting ZFS to OS 10. The file system later began making appearances in Leopard builds. ZFS has a long list of improvements over Apple's current file system, Journaled HFS+.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • I'm giving odds... (Score:5, Informative)

    by Telephone Sanitizer ( 989116 ) on Thursday June 07, 2007 @11:46AM (#19424375)
    Well, not in THIS forum. But elsewhere.

    5:1 that it's not the default root file system in Leopard.

    The first bootable release of ZFS (not "BUILD," but "RELEASE") isn't even due until the Fall.

    I'm not alone in this skepticism. See this Ars story, for example.
    http://arstechnica.com/journals/apple.ars/2007/06/06/sun-ceo-jonathan-schwartz-zfs-to-be-the-file-system-in-leopard [arstechnica.com]
  • by dancingmad ( 128588 ) on Thursday June 07, 2007 @11:48AM (#19424393)
    He's already taken it back [sun.com], more or less:

    "I don't know Apple's product plans for Leopard so it certainly wouldn't be appropriate for me to confirm anything. [...] There certainly have been plenty of published reports from various sources that ZFS is in Leopard, I guess we will all have to wait until it is released to see if ZFS made it as the default, or if they simply announce that it will become the default in a future release."
  • No no no (Score:5, Informative)

    by Guanine ( 883175 ) on Thursday June 07, 2007 @11:50AM (#19424449)
    Then he retracted his statement, saying he didn't know if it was the _default_ or not. Here's his quote, from a link on Daring Fireball [daringfireball.net]:

    I don't know Apple's product plans for Leopard so it certainly wouldn't be appropriate for me to confirm anything. [...] There certainly have been plenty of published reports from various sources that ZFS is in Leopard, I guess we will all have to wait until it is released to see if ZFS made it as the default, or if they simply announce that it will become the default in a future release.


  • by target562 ( 623649 ) on Thursday June 07, 2007 @12:03PM (#19424665) Homepage
    Solaris having ZFS "soon"? Looks like an old link, as it's been part of Solaris 10 since last summer... My servers running it in production would be sad to hear if it wasn't...
  • by brunascle ( 994197 ) on Thursday June 07, 2007 @12:09PM (#19424767)

    Correct me if I'm wrong, but some sort of ZFS driver is in the Linux kernel
    I don't think there is (could be wrong); something about a licensing problem. But apparently some people have gotten it to work in Linux using FUSE [wikipedia.org]. (more info [blogspot.com])
  • Re:Booting from ZFS? (Score:5, Informative)

    by AKAImBatman ( 238306 ) * <akaimbatman@gmaYEATSil.com minus poet> on Thursday June 07, 2007 @12:11PM (#19424797) Homepage Journal

    When was this functionality added?

    March 28th, 2007 at 19 hundred and 50 hours Zulu time [opensolaris.org]

  • Re:oblig... (Score:5, Informative)

    by kildurin ( 938538 ) on Thursday June 07, 2007 @12:12PM (#19424821)
    It's worth noting that most Sun instructors do not work for Sun. As someone who has implemented and is using ZFS, it really is as good as they say. I use it at home for storing video files and have not suffered any data loss.
  • Re:It WAS... (Score:3, Informative)

    by CthulhuDreamer ( 844223 ) on Thursday June 07, 2007 @12:12PM (#19424841)
    Microsoft crippled FAT32 in Windows 2000 and Windows XP. Both can only format a FAT32 volume up to 32GB; anything bigger requires NTFS (or a third-party formatter). FAT32 also has a 4GB file size limit, which is an issue when dealing with large AVI files and DVD rips.
  • by Anonymous Coward on Thursday June 07, 2007 @12:15PM (#19424891)
    (Correct me if I'm wrong, but some sort of ZFS driver is in the Linux kernel, and Sun is open sourcing Solaris.)


    The ZFS driver is being written to use FUSE - GPL and CDDL code can't be mixed due to the GPL's restrictive nature. Sun open-sourced Solaris almost two years ago. Everything is at opensolaris.org.
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday June 07, 2007 @12:15PM (#19424893) Homepage Journal

    (Yesyesyes, I know, ZFS is so reliable that disk-recovery tools are not needed. And if you believe that, then you probably believed Microsoft when they said NTFS volumes never needed defragmentation).

    Well, it's true! NTFS volumes never need defragmentation! On the other hand, Microsoft provides you with a defragmenter service (at least in 2k and later) and allows you to defragment files on NTFS volumes... :D

  • Re:oblig... (Score:3, Informative)

    by kildurin ( 938538 ) on Thursday June 07, 2007 @12:21PM (#19424985)
    ZFS came out in Solaris 10 Update 2. (Sun is days away from releasing update 4.) It is currently bootable in OpenSolaris.
  • Re:oblig... (Score:4, Informative)

    by Anarke_Incarnate ( 733529 ) on Thursday June 07, 2007 @12:25PM (#19425057)
    It still has memory hogging issues as well as performance issues in certain areas. More kernel tuning will be needed to tame the beast that is ZFS. It is good for many things but it does not replace EVERYTHING just yet.
  • by larkost ( 79011 ) on Thursday June 07, 2007 @12:28PM (#19425113)
    Time Machine is a backup tool, not really a live versioning tool. That makes having a second volume a requirement. If you don't understand that, then you don't understand what backups are for.

    I already know how Time Machine is going to work (it was part of the filesystem presentation at last year's WWDC... so I know it, but can't reveal it), and unless they have completely redone that entire system (which was quite elegant), then ZFS will not bring a single thing to it. I do know how ZFS could make that all really elegant, but Apple already has it covered on HFS+.
  • Re:Is that all? (Score:3, Informative)

    by kildurin ( 938538 ) on Thursday June 07, 2007 @12:32PM (#19425159)
    A couple. 1) ZFS does not require a format. It takes me around 2 seconds to create a pool on a RAID disk. Also true on a 500GB hard drive. 2) ZFS pools (or directories/partitions, if you will) can span multiple drives. 3) ZFS pools can have volumes (or drives) added to them at will. This means that if I run out of space in my music folder, I can add storage to it by adding another drive to the pool where my music folder is. Hope that helps.
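    For anyone curious, a rough sketch of what points 1-3 look like in practice (the pool, dataset, and device names here are just placeholders):

      zpool create mediapool c1t0d0     # pool is created and mounted in seconds, no newfs/format step
      zfs create mediapool/music        # carve out a dataset for music
      zpool add mediapool c2t0d0        # out of space? add another drive to the same pool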
  • Re:It WAS... (Score:3, Informative)

    by Vancorps ( 746090 ) on Thursday June 07, 2007 @12:34PM (#19425189)

    In Windows 2000 and XP you can format FAT32 up to 160GB, assuming you have the correct driver. With SP2 for XP you can format it up to 250GB, I believe. Most removable drives, from Maxtor for instance, were and are formatted FAT32.

    Technically you're right, though, since most Linux distros can format FAT32 up to 2TB. NTFS is vastly superior, though, so the issue has never really affected me personally. Of course, on a Windows machine you don't have to use Microsoft's formatting or partitioning tools; you can always format FAT32 up to 2TB on your Linux box, then put the disk into a Windows box and it will read it just fine. I can't imagine why you would want to do that, but the option exists.
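    For reference, formatting a large FAT32 volume on Linux is a one-liner with dosfstools (the device name is a placeholder, and I'm assuming the partition already exists):

      mkfs.vfat -F 32 /dev/sdb1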

  • by 0xABADC0DA ( 867955 ) on Thursday June 07, 2007 @12:44PM (#19425369)
    Are you kidding? This is ZFS we're talking about.

    ZFS is several orders of magnitude better at streaming large files like those used in video editing, which is already a huge draw for Macs. Since it is copy-on-write, writes are done without seeking, so they are very fast and can be spread out across multiple drives in parallel. IIRC, within a ZFS pool (a collection of drives) you can make different 'filesystems' mirrored or striped, so you can have a /video that is striped and ultra-fast whereas /home is mirrored and fault-tolerant.

    You can take your 100GB video and instantly say 'snapshot this', then make any number of changes to it, and if you don't like them, just revert back again. Contrast that with every other filesystem (besides spirolog), where you have to make a 100GB copy as a backup -- which takes forever, so nobody does it unless they have to.

    You can drop in a new drive and say 'use this drive', and your existing filesystem instantly has more space available and is more fault-tolerant, or faster, or both. If you want to remove a drive, you say 'don't use this drive' and you can still use the OS normally while it moves data off to other drives.

    Something like ZFS, that "touches so many other applications and parts of the OS" has to be the default. Otherwise you have to support two completely different ways of using the system. And that bloat and complication costs a lot more than just getting it right through extensive testing. If you are really worried about it, don't upgrade the OS for a while.
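    To make the 'snapshot this' / 'use this drive' bits concrete, it looks roughly like this (the pool, dataset, and device names are made up):

      zfs snapshot tank/video@before-edit     # instant, no 100GB copy
      # ...make whatever changes you like to the files in tank/video...
      zfs rollback tank/video@before-edit     # didn't pan out? revert to the snapshot
      zpool add tank c3t0d0                   # drop in a new drive; the extra space shows up immediately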
  • by kithrup ( 778358 ) on Thursday June 07, 2007 @12:52PM (#19425491)
    Sorry, but your description of the lookup process isn't right.

    First: lookup depends on the way directory entries are stored. On UFS, it's an unsorted array; in order to do a lookup, you need to (worst case) scan the entire directory. On VxFS, they use a hash, so first you hash the input, then run through the entries that have a matching hash. On HFS+, the catalog is stored as a B-tree, so you do compares to get to the right node, then look through the node until you either find it or reach the end of the node.

    Second: none of those is affected by case-insensitivity. You simply do a case-insensitive compare each time. This is the difference between HFS+ and HFSX on Mac OS X: in the former, the key-compare function is a Unicode case-insensitive comparator; in the latter, it's just a memcmp.

    Third: your comment about "i" is a glyphing issue, not a character issue. Apple has a pretty good technote up on their HFS+ implementation, and it describes the way the case insensitivity works. I recommend reading it.
  • Re:I doubt it (Score:1, Informative)

    by Anonymous Coward on Thursday June 07, 2007 @01:02PM (#19425643)
    That's easy, where to begin?

    ZFS is better than NTFS because of:
    Transactional operation (always consistent on disk, even in a power outage)
    Checksums (and the ability to recover data in flight using RAID info)
    Dynamic striping
    Dynamic file block sizes
    Limitless size and quantity of pools/FS/snapshots
    Automatic parallelism

    Should I keep going?
  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Thursday June 07, 2007 @01:45PM (#19426191) Homepage
    Yesyesyes, I know, ZFS is so reliable that disk-recovery tools are not needed.

    A common misconception. The "zpool scrub" command will scan the filesystem and try to correct any errors that are found (or panic the kernel); the difference is that ZFS can do this while the filesystem is mounted.
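    For example, roughly (the pool name is a placeholder):

      zpool scrub tank        # runs online, while the filesystem stays mounted
      zpool status -v tank    # shows scrub progress and any errors it found or repaired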
  • by Anonymous Coward on Thursday June 07, 2007 @01:53PM (#19426331)
    ZFS has been bootable on x86 Solaris using GRUB for over a month now: zfs boot [sun.com]
  • Re:oblig... (Score:3, Informative)

    by BosstonesOwn ( 794949 ) on Thursday June 07, 2007 @01:59PM (#19426411)
    On Linux, FUSE is all we can use because of the license restrictions. I use it on a Solaris box with a VM'd Red Hat install; it's not quite ready for prime time. And the licensing sucks, because it won't get put into the kernel.

    Which I find really deplorable; I would love to have it in the kernel.

    And yeah, I'd like an IFS driver, but none exists yet ;) and I don't see any intent internally to even attempt it. As it stands, I think I am maybe 1 of 8 people using ZFS on Linux here. And that is only in the VM.
  • ZFS for FUSE (Score:3, Informative)

    by piojo ( 995934 ) on Thursday June 07, 2007 @02:07PM (#19426573)
    I am using ZFS-FUSE right now. On my Gentoo system, many partitions are ZFS, including /home, /var/tmp, /usr/share, /usr/portage, and /opt.

    Because I have suffered some random corruptions in the past, even with ext3 ("This mp3 didn't used to have a skip there!"), I wanted the checksumming so that I can tell when I need to restore something from a backup.

    As a filesystem, it works completely, including creation of new filesystems, compression, checksums, etc. However, I've noticed a decrease in my system's general performance since installing ZFS (probably because it holds my home directory). Memory usage and mysterious CPU usage (I don't think it's checksumming) are the current disadvantages, but the author says it's still completely unoptimized.

    Should you try zfs-fuse? Definitely. But right now don't expect a performance gain.
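    If you want to try it, the setup is roughly this (assuming the zfs-fuse daemon and tools are already installed; the device name is a placeholder):

      zfs-fuse &                         # start the userspace daemon (normally run from an init script)
      zpool create tank /dev/sdb2        # create a pool on a spare partition
      zfs create tank/home               # datasets for /home, /opt, etc.
      zfs set compression=on tank/home   # compression and checksums work as on Solaris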
  • by Tuzanor ( 125152 ) on Thursday June 07, 2007 @02:24PM (#19426887) Homepage

    You are right to a certain extent, but you have to realize that current file systems are old and clunky. For a desktop or a few non-critical servers, moving to the new tech is a great idea. Down the road, when ZFS is more mature and understood, it's going to be a welcome addition to most production setups. If you ran real-world, mission-critical prod setups needing high availability, you'd understand.

    Imagine you have a huge medical database on several servers and are running out of disk space. To expand, you need to plug in new hard drives, create RAID setups, create partitions, move data over, restart the database, verify again and again, take downtime, etc. You can't easily and efficiently grow file systems (unless you're using an expensive piece of software like Veritas Volume Manager). With ZFS, all I need to do to expand disk space in currently WORKING filesystems is:

    zpool add oraclefs mirror c1t1d0 c2t1d0

    No LUNs to deal with. No other filesystem bullshit. You have no idea how excited this makes me for services that require large amounts of growing storage.

    Read up on ZFS here: http://www.opensolaris.org/os/community/zfs/ [opensolaris.org] It is the best thing to come out of Sun in a long time.
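    And verifying the growth is equally painless; something like this (device names are placeholders):

      zpool list oraclefs                        # check capacity before
      zpool add oraclefs mirror c3t1d0 c4t1d0    # add another mirrored pair
      zpool list oraclefs                        # capacity is larger immediately, with no downtime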

  • by Tsunayoshi ( 789351 ) <tsunayoshi@g m a i l . com> on Thursday June 07, 2007 @02:58PM (#19427439) Journal
    You linked to an article from 2005...ZFS has been in Solaris 10 since update 2 (06/06). New features, enhancements, and optimizations appeared in update 3 (11/06). It just will not be available as a booting FS until sometime in 2007.

    The OpenSolaris project is currently working on getting bootable ZFS support (available in the current release as experimental).
  • Re:oblig... (Score:5, Informative)

    by Cyberax ( 705495 ) on Thursday June 07, 2007 @03:12PM (#19427665)
    The Windows filesystem kernel API (it's called IFS - Installable File Systems) is fairly well documented, and you can get free GPL2 headers for it (http://www.acc.umu.se/~bosse/ntifs.h) or buy the IFS kit from Microsoft for about $109 (http://www.microsoft.com/whdc/DevTools/IFSKit/default.mspx). Unfortunately, IFS is a very complex API and there's only ONE good book about it.

    You definitely can port a FS to Windows using only the documented API, but it's a long and tedious process. I'm currently porting FUSE to Windows, so I know :)
  • Re:ZFS for FUSE (Score:3, Informative)

    by morcego ( 260031 ) on Thursday June 07, 2007 @03:37PM (#19428093)

    I've noticed a decrease in my system's general performance since installing zfs


    A user space filesystem is not something I would expect performance from.
  • ZFS still has bugs (Score:5, Informative)

    by mlheur ( 212082 ) on Thursday June 07, 2007 @03:44PM (#19428213)
    For something that's only a year or so old (production-wise), I don't trust it worth shit.
    We run NetBackup Enterprise on Solaris 10 - during our last round of upgrades we installed ZFS on our disk staging storage units. It replaced VxFS. The way disk staging storage units (DSSUs) work in NetBackup, the disk is always near 100% full from a Unix perspective. Basically, any time more disk is needed, the oldest image that has been copied to tape is expired from disk, thus freeing up more room. However, ZFS's most prominent bug from our perspective is that during periods of high activity, if all blocks become allocated, it becomes impossible to unlink(2) a file. This causes the application to no longer be able to make space for new backup images.

    Going down to the shell, try to rm a file and it comes back: rm failed, disk is full.

    Well, if the disk is full, and you can't rm because the disk is full, how do you free up space?

    Sun's response: truncate an unnecessary file using 'cat /dev/null > /path/to/file'; then, once you have some blocks free, rm works (so does unlink).

    OK - so how do you tell a compiled application to truncate an unnecessary file before unlinking it? You can't! How can you determine what an unnecessary file is? If you delete the image before expiring it from the catalog, you get errors when you try to expire it, so you end up with catalog corruption.

    All in all, this is a problem that should never have been introduced, let alone still exist after months of sending trace outputs and reproducing it in multiple environments. ZFS isn't ready for the real world.
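    In shell terms, the workaround described above looks roughly like this (the path is just an example):

      cat /dev/null > /dssu/pool1/oldest-image.img   # truncate in place to free some blocks
      rm /dssu/pool1/oldest-image.img                # only now does the unlink succeed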
  • by Anonymous Coward on Thursday June 07, 2007 @04:02PM (#19428553)
    Here's a link to it: HFS Plus Volume Format [apple.com]

    I likewise highly recommend it.
  • by SirNAOF ( 142265 ) on Thursday June 07, 2007 @04:03PM (#19428577)
    ZFS is not ready for prime time - at least not on Solaris.

    I set up ZFS on some SAN storage in a new system. The internal boot disks were mirrored UFS. When one of the HBAs fried, the SAN storage disappeared - and the system panic'd.

    Every reboot thereafter stopped in a panic. The ZFS subsystem panic'd the system at every boot when it couldn't find all its volumes. After calling Sun support, I found out that they need to do a massive code redesign to catch that issue, and it wouldn't be out for at least 6 months.

    I'm sure ZFS will be great - once they clean up these type of showstopper bugs.
  • by TheNetAvenger ( 624455 ) on Thursday June 07, 2007 @04:05PM (#19428605)
    Contrast to every other filesystem (besides spirolog) where you have to make a 100gb copy as a backup -- which takes forever, so nobody does it unless they have to.


    Except of course NTFS which has been doing crap like this for several years now. So I'm not sure how you can seriously say 'every other filesystem'.

    This announcement isn't as exciting to the Windows world as it is to the OS X and *nix world where the features being offered by ZFS have not been available or consolidated into one FS model.

    NTFS, on the other hand, has been doing this stuff for quite some time, although ZFS does raise the bar even beyond NTFS, as it makes the jump from terabytes to exabytes or effectively unlimited storage, even though there is not much contrast in 'features' offered by the FS beyond storage limits.

    I actually hope OS X does get ZFS in place and makes it the default, as the Time Machine features would benefit from it greatly, just as Windows 2003 and Vista use features of NTFS to make 'Previous Versions' quite painless in terms of performance. On other FSes that are not ZFS or NTFS, implementing a feature like this would have a serious performance impact.

  • Re:oblig... (Score:2, Informative)

    by Anonymous Coward on Thursday June 07, 2007 @05:37PM (#19429923)
    There are no licensing issues with Linux/ZFS. The "issue" is that Alan Cox and Linus Torvalds have a hot iron up their asses about compartmentalizing everything. ZFS is vertical -- it's in the FS layer, the driver layer, and the VFS layer.
  • by dbIII ( 701233 ) on Thursday June 07, 2007 @05:57PM (#19430197)
    True - NTFS still has a major role in the enterprise of making NFS from a 10-year-old Sun over 10Mb/s look fast.
  • by abhi_beckert ( 785219 ) on Thursday June 07, 2007 @06:17PM (#19430509)
    Time Machine is already fully functional (apart from a few GUI glitches) in the current Leopard developer builds, but ZFS isn't even available in Disk Utility (yet?). This doesn't mean ZFS won't be added at the last minute, but it certainly isn't required for Time Machine.
  • by JimDaGeek ( 983925 ) on Thursday June 07, 2007 @06:38PM (#19430789)

    The Move from Classic (OS 9) to OS X forced people to Recompile/Port or Die from obsolescence
    Not completely true. You can still run Classic code, if you really want. I think what Apple did was make Cocoa [apple.com] so much better that developers wanted it and users demanded it.

    Next it was the move from PowerPC to Intel.
    Again, you can run PPC code under Intel via Rosetta [apple.com], though a native Intel build always performs better. Some say they don't notice any difference; I disagree. For example, a PPC build of Photoshop is much slower than a Universal build of the new Photoshop.
    So Apple does leave the backwards-compatibility stuff there; however, they make the new stuff so much better that developers and users want to get to it ASAP.

    Now compare this to Microsoft. While .NET with C# is an improvement in productivity for developers, there is really no gain or difference for users. For example, I had to port a legacy VB 6 app to C#/.NET. The end users didn't notice anything different about the app from their point of view. Just switching to .NET didn't make the app inherit any default functionality. Contrast this with Cocoa, where an app gets spell checking via NSSpellChecker.
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Thursday June 07, 2007 @07:08PM (#19431127)

    if all blocks become allocated, it becomes impossible to unlink(2) a file.

    The price of copy-on-write. Other systems with a copy-on-write file system [netapp.com] can exhibit this behavior.

  • by Phat_Tony ( 661117 ) on Thursday June 07, 2007 @09:44PM (#19432545)
    "PPC build of Photoshop is much slower than a Universal build of the new Photoshop"

    The old PPC builds of Photoshop are also much slower on PPC than the only Universal version, CS3. They moved Photoshop from CodeWarrior to Xcode between CS2 and CS3, and it's the most massive rewrite they've ever done. So you can't distinguish how much of the speed advantage of CS3 over CS2 on Intel is due to it being Intel-native, and how much is simply due to CS3 being a faster build overall.
  • Re:oblig... (Score:3, Informative)

    by salimma ( 115327 ) on Thursday June 07, 2007 @10:04PM (#19432737) Homepage Journal
    Yes, there is. ZFS is licensed under the CDDL, which is not GPLv2-compatible. Linus has so far refused to move to GPLv3 when it comes out, so until that happens, there is a licensing issue.
  • by LKM ( 227954 ) on Friday June 08, 2007 @03:07AM (#19434499)

    No, no and no.

    • ZFS will have case-insensitive support [opensolaris.org] if it is used in Mac OS X (closed approved fast-track 05/09/2007).
    • HFS+ has case-sensitivity as an option, so if you want to, you can have a case-sensitive Mac.
    • Case-sensitivity is a stupid idea for regular users and will never be on by default on Macs (there's no reason why "My Letter to Aunt Emma" should be a different file from "My letter to Aunt Emma")
  • by TheNetAvenger ( 624455 ) on Friday June 08, 2007 @10:01AM (#19436543)
    Not only am I surprised you wouldn't have looked this up before asking such stupid questions, but I am even more surprised that people are dumb enough to mark it as Informative.

    In reference to your questions:

    NTFS has data checksums to detect and repair corruption caused by any component?
    You can add and remove disk space from an NTFS volume dynamically?
    NTFS does data-level journaling not to mention without the overhead of multiple writes of the data?
    NTFS can use compression without getting horrible fragmented or other negative side effects?
    NTFS snapshots do not affect performance of the normal system?
    NTFS has variable block sizes?
    NTFS is open source and took less than a decade to get support on multiple systems?


    Yes & No - All FS models implement checksuming features. Although, no it is not to the same checksum level as you are going for, although it is far less impressive or important than you seem to think it is.

    Yes - Dynamic adding and removing has been with NTFS for a long time. Vista even adds a newbie interface for everything from partition resizing to the old school featuers of dynamic volume spanning, RAID, etc.

    Yes - Go look up the original NT journal features from 1991, and the expanded features used in Vista.

    Yes - Compression offers no more fragmentation than normal NTFS writes. This is insane.

    Yes - Snapshots, oh yes, have you not heard of 'previous versions' or System restore, they are built on the NTFS's various snapshot abilities.

    No - NTFS does not support variable block sizes beyond the intial selection when formatting the volume.

    Yes - No, not open source, but I don't see MS suing anyone using it. *wink. I also see it being used on OS X, Linux, and other OSes without much trouble. It also was developed over the course of 1990-1992, and even the current versions in use in Vista only slightly vary because of the robust and extensible model NTFS was built upon.

    For a 1992 FS(NTFS), that STILL is ahead of MOST other FS avaiable, with the exception of a few features you can pick and chooses, you are making are really stupid argument here.

    I NEVER said NTFS was still superior, but was rather making a point that MANY of the features that make ZFS so attractive are features that have been in NTFS for a LONG TIME.

    If I was arguing NTFS was superior, I would have done a smart reply like yours comparing ZFS to NTFS:

    Does ZFS support encryption?
    Does ZFS have minimal CPU usage on small file writes?
    Does ZFS compression support multi-threading?
    Does ZFS accurately report in-use disk space, or does it have problems because of its reliance on snapshots?
    Does ZFS support a high compression ratio?
    Does ZFS support Quotas?
    Does ZFS support 'online' pool reconfiguring?

    As you notice, NTFS still has 'features' even ZFS doesn't if you want to pick each of them to death.

    Again I will state that ZFS is a good set of ideas and does move FS concepts forward by pulling more of the storage model into the FS itself. It is also 128-bit and allows for almost unlimited storage.

    ZFS has a lot of good things, but that doesn't mean that NTFS is an old dog or hasn't already been doing some of these features, even if they are not implemented in the same storage pool metaphor.

    So once again, for the Mac world, ZFS is an awesome way to go if they can get the performance in line with their needs. However, it is STILL just catching up with NTFS, which is very feature-rich, very solid, and won't be hitting any walls for storage sizes in the next 10-15 years.

    Am I not allowed to believe both ZFS and NTFS are good technologies?
