
ZFS Set To Eventually Play Larger Role in OSX

BlueMerle writes with the news that Sun's ZFS filesystem is going to see 'rudimentary support' under OSX Leopard. That's a stepping stone to bigger and better things, as the filesystem will eventually play a much larger role in Apple OS versions. AppleInsider reports: "The developer release, those people familiar with the matter say, is a telltale sign that Apple plans further adoption of ZFS under Mac OS X as the operating system matures. It's further believed that ZFS is a candidate to eventually succeed HFS+ as the default operating system for Mac OS X -- an unfulfilled claim already made in regard to Leopard by Sun's chief executive Jonathan Schwartz back in June. Unlike Apple's progression from HFS to HFS+, ZFS is not an incremental improvement to existing technology, but rather a fundamentally new approach to data management. It aims to provide simple administration, transactional semantics, end-to-end data integrity, and immense scalability."
This discussion has been archived. No new comments can be posted.


  • by tomRakewell ( 412572 ) on Friday October 05, 2007 @09:52AM (#20867777)

    It's further believed that ZFS is a candidate to eventually succeed HFS+ as the default operating system for Mac OS X


    Macs are really going to stink if Apple changes their default operating system to ZFS. ZFS is a file system.
  • Buzz compliant (Score:3, Insightful)

    by suv4x4 ( 956391 ) on Friday October 05, 2007 @09:53AM (#20867795)
    end-to-end data integrity

    You can't talk about end-to-end data integrity when this is just a filesystem. The filesystem is only one tiny place where the data you store can lose its integrity. Are there memory-bus or in-memory checks for the integrity of data read from ZFS? What about applications?

    Also, stop talking about ZFS. Very secret internal sources told me ZFS was supposed to be a bigger event in Leopard, but Steve killed it because Sun scooped him. It has happened before, folks!

    Don't scoop the Steve. You scoop the Steve and business is over.
    • Re:Buzz compliant (Score:4, Informative)

      by Anonymous Coward on Friday October 05, 2007 @10:21AM (#20868201)
      There is an in-memory checksum check for all data that is read, yes. If the checksum doesn't match, ZFS tries to read the same data from another disk in a mirror/RAID-Z setup.
    • Re: (Score:3, Insightful)

      by caseih ( 160668 )
      That's Steve's loss then. Too bad his own ego often gets in the way of things that could benefit the customer. Honestly, why should Sun really care what Jobs does with ZFS in the long run? Sure, it'd be good for Sun in terms of publicity, and maybe even some royalties. But in the long run, I can't see it being that big of a deal for Sun.
    • Re:Buzz compliant (Score:5, Informative)

      by jadavis ( 473492 ) on Friday October 05, 2007 @10:35AM (#20868399)
      Are there memory-bus or in-memory checks for the integrity of data read from ZFS? What about applications?

      They have defined what they mean by that claim already: they have a checksum (256-bit, I think) on every block, and that checksum is checked by the OS when the block is read.

      This will catch some errors that might otherwise go uncaught, which is important for servers that move a lot of data around.

      It will not catch a memory error at the wrong time, or a processor error that stores the wrong value, or an error in the brain of the person who reads the data from the screen.
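
      A minimal sketch of the verify-on-read behaviour described above, in Python. It is illustrative only (not ZFS code): the expected checksum lives outside the data in a block table, every read is verified against it, and a bad copy is repaired from a good mirror. The names block_table and mirrors are invented for the example.

          import hashlib

          def checksum(data: bytes) -> str:
              return hashlib.sha256(data).hexdigest()

          # two "disks" holding mirror copies of a block
          mirrors = [
              {"block-0": b"important payload"},   # disk 0
              {"block-0": b"important payload"},   # disk 1 (mirror copy)
          ]
          # block id -> expected checksum (the role a block pointer's checksum plays)
          block_table = {"block-0": checksum(mirrors[0]["block-0"])}

          def read_verified(block_id: str) -> bytes:
              expected = block_table[block_id]
              for i, disk in enumerate(mirrors):
                  data = disk[block_id]
                  if checksum(data) == expected:
                      # self-heal: rewrite any stale copies from the good one
                      for other in mirrors:
                          if checksum(other[block_id]) != expected:
                              other[block_id] = data
                      return data
                  print(f"checksum mismatch on mirror {i}, trying next copy")
              raise IOError(f"all copies of {block_id} failed verification")

          # simulate silent corruption on one mirror, then read
          mirrors[0]["block-0"] = b"imp0rtant payload"
          assert read_verified("block-0") == b"important payload"
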
      • Re:Buzz compliant (Score:5, Informative)

        by mikeee ( 137160 ) on Friday October 05, 2007 @11:05AM (#20868897)
        IIRC, the block checksums are stored in the inode, not with the individual blocks. It turns out that one of the main failure modes of modern disks isn't reading a few bits wrong, but missing slightly on a seek and actually returning the wrong block! Block-included checksums won't find this, since it's still a valid block...
        • Re: (Score:2, Informative)

          by kithrup ( 778358 )

          Not quite... ZFS stores a checksum with each block pointer. So wherever you have a structure that indicates where the data is, there's also a checksum of that data. This also means that the block pointers themselves are checksummed with their pointers. And so forth. The only one that doesn't have a checksum with the pointer is the top-level root pointer, and they have multiple copies of that for redundant checksumming.

          And yes, for true integrity, you need ECC memory, and ECC CPUs. I don't know if the

        • And that is why the block checksum is a function not only of the contents of the block but also of its address on disk. If the disk picks up the wrong block, the address you feed into the checksum won't match and the verification will fail.
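
          A toy Python illustration of the technique the parent describes: salt the per-block checksum with the block's on-disk address, so a block that is internally valid but came from the wrong place still fails verification. This sketches the idea only; it is not a claim about ZFS's actual checksum layout.

              import hashlib

              def addressed_checksum(address: int, data: bytes) -> str:
                  h = hashlib.sha256()
                  h.update(address.to_bytes(8, "little"))  # where the block is supposed to live
                  h.update(data)                           # what the block contains
                  return h.hexdigest()

              # each block stores a checksum computed against its own address
              blocks = {
                  0x1000: {"data": b"block A payload"},
                  0x2000: {"data": b"block B payload"},
              }
              for addr, blk in blocks.items():
                  blk["csum"] = addressed_checksum(addr, blk["data"])

              def verify_read(requested_addr: int, returned_block: dict) -> bool:
                  # recompute with the address we *asked for*, not where the data came from
                  return addressed_checksum(requested_addr, returned_block["data"]) == returned_block["csum"]

              assert verify_read(0x1000, blocks[0x1000])       # correct block: passes
              assert not verify_read(0x1000, blocks[0x2000])   # drive seeked to the wrong block: caught
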
    • by DAldredge ( 2353 )
      Why should business trust a platform from a company that lets petty crap like that drive product development?
      • by Jeremi ( 14640 )
        Why should business trust a platform from a company that lets petty crap like that drive product development?


        They shouldn't, if that is indeed the case. OTOH, you are only asking that question because you are trusting the words of some pseudonymous poster on Slashdot. Why do you trust that what he said happened, actually happened?

        • by DAldredge ( 2353 )
          Because such rumors are not new. It has been reported by multiple real-world sources that Apple did the exact same thing with regard to ATI.
  • Time Machine (Score:2, Interesting)

    by JayPee ( 4090 )
    This is awesome and I knew there had to be something more interesting behind Time Machine. While I'm not that impressed with how it appears it's going to work in 10.5, later versions of OS X, with full ZFS support, will make Time Machine damned near magical.
    • I notice ZFS has read-only support in Leopard--does this mean only that it is able to read a drive formatted in ZFS, if you happened to plug in such a drive? Or does this mean that the Time Machine data is being stored in a read-only ZFS format? I would think the former (which would definitely be no big deal), because I don't see how the OS would be storing data in ZFS format unless it writes to it. Only having read support for ZFS, I would think that there is no way for the OS to record information in Z
      • Re: (Score:3, Interesting)

        by aliquis ( 678370 )
        No, Time Machine probably doesn't use ZFS atm and is implemented in some other way. Read-only ZFS support is rather useless but probably easier to implement, and if Apple had got it all working (and Sun got it bootable; maybe they have by now, it's been a long time since I read about it), I guess they might have switched filesystems, or offered it as an option, or used it in Time Machine.

        Anyway, we will hopefully see it in a minor release update. I just hope they don't call it beta just to remove it later and not releas
    • Re:Time Machine (Score:4, Informative)

      by LKM ( 227954 ) on Friday October 05, 2007 @01:27PM (#20871305)
      Except Time Machine does not use ZFS.
  • Damnit! (Score:2, Funny)

    by pi_rules ( 123171 )
    Alright, who broke the comments? Seriously, I'm stuck in this "new" version and it doesn't make fuck-all of any sense to me.
    • Oddly enough, it was your comment here that made me finally curious enough to try the new comment system. Maybe I'm just sadistic.
  • a true end (Score:4, Informative)

    by gEvil (beta) ( 945888 ) on Friday October 05, 2007 @10:12AM (#20868071)
    Unless I'm mistaken, this will mean the true end to resource forks on the MacOS. For those of you who aren't familiar with them, resource forks were a part of a file under the Classic MacOS (OS 9 and before) that contained icon information, filetype and creator codes, etc. This part of the file was only supported under the HFS and HFS+ filesystems, meaning the resource fork would get lost if you copied a file to a non-HFS/HFS+ filesystem (this is why files copied to FAT filesystems in the old days often wouldn't reopen on a Mac. It also explains the "__MACOSX" folder with underscore-dot ("._") files in archives created with OS X's built-in zip utility). With OS X, Apple rolled the resource fork into the "data fork" portion of the file, meaning the information was still there for legacy purposes. However, this is only supported under apps that know where to find the information. This change has the potential to cause some headaches for shops that have legacy files spanning several decades. OTOH, I'll be glad to see it finally go...
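
    A quick way to see that residue for yourself is to list the entries of a zip created with OS X's built-in archiver; the filename below is just an example, and any Mac-made archive carrying Finder or resource-fork metadata will show the __MACOSX/._ entries.

        import zipfile

        # list data entries vs. AppleDouble metadata entries in a Mac-made zip
        with zipfile.ZipFile("example-made-on-a-mac.zip") as zf:
            for name in zf.namelist():
                basename = name.rsplit("/", 1)[-1]
                if name.startswith("__MACOSX/") or basename.startswith("._"):
                    print("metadata entry:", name)
                else:
                    print("data entry:   ", name)
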
    • Re:a true end (Score:4, Interesting)

      by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Friday October 05, 2007 @10:32AM (#20868359) Homepage Journal

      For those of you who aren't familiar with them, resource forks were a part of a file under the Classic MacOS (OS 9 and before) that contained icon information, filetype and creator codes, etc.

      I'll be happy to see them kill that obsolete feature. It's hard to implement everything-is-a-file semantics when some things are files, and others are combinations of random amounts of metadata.

    • by Henriok ( 6762 ) on Friday October 05, 2007 @10:40AM (#20868465)
      Hardly! ZFS has provisions for any number of "forks" in the file system, called "extended attributes" in ZFS. If Apple migrates to ZFS they have every chance to use these attributes to provide quite a seamless integration with previous filesystems. The file system is open source and Apple can pretty much do what they like or need. Even NTFS has these features, but MS seems to ignore them due to backwards compatibility issues with FAT filesystems and Windows APIs.

      You know.. Wikipedia is very handy to look these things up. Please do. http://en.wikipedia.org/wiki/ZFS [wikipedia.org]
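
      For illustration, this is roughly what "fork data as an extended attribute" looks like at the API level, sketched with Python's os.setxattr/os.getxattr (Linux-only in the standard library, and the filesystem must have user xattrs enabled). The attribute names and file are hypothetical examples, not Apple's or Sun's actual scheme.

          import os

          path = "document.qxd"
          open(path, "a").close()  # make sure the example file exists

          # stash what HFS+ would keep in the resource fork / Finder info
          os.setxattr(path, b"user.example.resource-fork", b"\x00\x01legacy resource data")
          os.setxattr(path, b"user.example.type-creator", b"XPR3XPR3")

          print(os.listxattr(path))
          print(os.getxattr(path, b"user.example.resource-fork"))
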
      • by Trailer Trash ( 60756 ) on Friday October 05, 2007 @12:16PM (#20870187) Homepage

        You know.. Wikipedia is very handy to look these things up.

        Dude, we're still trying to get people to read the linked article. Let's not get too crazy.

      • by EvanED ( 569694 ) <evaned@gm3.14159ail.com minus pi> on Friday October 05, 2007 @01:24PM (#20871253)
        Even NTFS has these features, but MS seems to ignore them due to backwards compatibility issues with FAT filesystems and Windows APIs
        They are starting to do some stuff with them. The first major use I know of was with XP SP2. With that, when you downloaded a file from the internet, IE would mark it as such in an alternate stream. When the program was run, someone (I don't know who) would check for the presence of the stream and if it was there, would display a "this program came from an untrusted source, would you like to run it?" dialog.

        I would expect more uses as we move into the future, as Vista is pushing even harder for NTFS (for instance, IIRC the installer didn't ask which file system I wanted to use and just formatted NTFS), and MS doesn't have to worry about, for instance, some 98 or ME user who upgraded to XP but is still running FAT so he didn't have to reformat. For my large partitions (~100 GB), I can't format as anything but NTFS. (I don't know about smaller ones; I have a 24 GB system partition, but if I try to bring up the format dialog there it complains that I'm trying to reformat the drive with the OS, and I don't want to do that.)

        Personally, I think there's a lot of awesome stuff you could use extended attributes and alternate streams for (WHY are these separate concepts on some file systems?!) if only they would be preserved when you move stuff around between systems, upload it, etc. I'm somewhat resentful at Unix and POSIX for the fact that for ages they didn't do this stuff, and hence it's really hard to move to using them, because no one supports them because there's no demand because people haven't thought of what to do with them because they haven't seen what can be done with them because no one uses them because no one supports them because... :-) I've often wondered what operating systems would be like if we kept the knowledge of the last decades but threw out everything that we have now and started from scratch without worrying about backwards compatibility, and this is one of the things I would like to see change.
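
        For reference, an alternate data stream can be read and written from Python on Windows/NTFS just by addressing it as "filename:streamname". The Zone.Identifier payload below follows the commonly documented "[ZoneTransfer] / ZoneId=3" form, but treat the exact contents as illustrative rather than authoritative.

            # requires Windows and an NTFS volume; the filename is an example
            main = "downloaded.exe"
            open(main, "a").close()

            # write the marker stream the way the downloaded-file check expects
            with open(main + ":Zone.Identifier", "w") as ads:
                ads.write("[ZoneTransfer]\r\nZoneId=3\r\n")   # 3 = Internet zone

            # read it back; the main file's visible size is unchanged
            with open(main + ":Zone.Identifier") as ads:
                print(ads.read())
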
    • Re: (Score:3, Informative)

      by nine-times ( 778537 )

      With OS X, Apple rolled the resource fork into the "data fork" portion of the file, meaning the information was still there for legacy purposes.

      That doesn't sound right to me-- or at least I'm not sure what you mean by that. OSX still has resource forks, but Apple basically told developers not to put important information in them anymore because they get lost so easily. They can't just push the resource fork into the data fork of the file, because in many formats there's essentially no space for that in

    • by SuperBanana ( 662181 ) on Friday October 05, 2007 @11:17AM (#20869131)
      Resource forks are far better than the idiotic "everything is a folder" model.

      Want to upload that Keynote project to your friendly CMS via a web browser? Can't, because it's not a file, it's a #@$!ing FOLDER. You have to zip it first. Words cannot accurately describe how tiresome this becomes.

      It also makes data recovery (should the file get accidentally deleted) nearly impossible- the files inside the folder are not named uniquely or in any identifiable manner.

      ZFS isn't nearly all it is cracked up to be- among other things, you can't expand RAID-Z...absolutely moronic. I'm not even sure you can expand a simple mirrored pool. Users have been repeatedly asking for growing abilities, and the developer reaction was "just create a larger pool and move it over". That's hilariously stupid advice given that you usually don't have that kind of storage hanging around- not even in enterprise environments.

      There's simply no comprehension amongst the ZFS developers that virtually EVERY RAID card on the market supports such an operation. Even more shocking was when one developer said (paraphrasing) "gosh, how would one even go about doing that sort of thing?"

      Don't get me wrong- checksumming and automatic disk scrubbing are features long overdue, but ZFS is not a magic bullet.

      • by kithrup ( 778358 ) on Friday October 05, 2007 @01:30PM (#20871341)

        It's true that you can't expand a RAID-Z set (I think, anyway -- if you replace all of the drives, one at a time, does that work?), but you can add another RAID-Z set, and expand the pool.

        That's the big thing in ZFS: combining all of the resources into a pool, rather than treating disks (or groups of disks) as part of a volume. The other part of this was making filesystems nearly as lightweight as directories.

        My plan is to use twinned drives, adding them as a mirror to the pool. I can replace each drive individually, let it re-silver, and then do the same with the other, to expand it, or I can simply add another pair of drives to the pool, and get more space that way. There are advantages and disadvantages to each.
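
        Sketched as the corresponding zpool commands, wrapped in Python for consistency with the other examples here (pool and device names are made up, syntax may vary by ZFS release, and both operations assume an existing pool and root privileges):

            import subprocess

            def zpool(*args: str) -> None:
                subprocess.run(["zpool", *args], check=True)

            # path 1: grow the pool by adding another mirrored pair alongside the old one
            zpool("add", "tank", "mirror", "c2t0d0", "c2t1d0")

            # path 2: swap each half of an existing mirror for a bigger drive and let it
            # resilver; once both halves are replaced the vdev can use the new capacity
            zpool("replace", "tank", "c1t0d0", "c3t0d0")
            zpool("status", "tank")   # watch the resilver progress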

        Oh, as for resource forks -- the model that Sun is choosing (as are some others) is that the extended attributes are treated as sub-files in a directory attached to the file. I'm not sure that simply going with a real directory wouldn't be a better idea, but that has a whole slew of its own problems. It's a bit ironic, really -- Apple had an idea from the beginning, and every application was prepared to deal with it, but nobody else did the same thing. Then, when Apple went with the flow, everyone else started trying to do what Apple did... and none of the applications are prepared for it.

        I'm not sure how it'll all turn out.

      • by g0at ( 135364 )
        Resource forks are far better than the idiotic "everything is a folder" model.
        Want to upload that Keynote project to your friendly CMS via a web browser? Can't, because it's not a file, it's a #@$!ing FOLDER. You have to zip it first. Words cannot accurately describe how tiresome this becomes.


        True, but you'd have had the same problem with a multi-fork file, since the web process would upload the data fork... HTTP forms don't know anything about multi-fork files.

        Better would be for the web browser to auto-zi
    • by rabtech ( 223758 )
      Not only can you use ZFS extended attributes for this, but NTFS also supports true alternate data streams, and has since its inception; this is how NT fileservers supported Mac clients natively (Services for Mac). You could copy a file to the NT share and back without losing any of the resource fork data.

      I'm sure there are other filesystems that have done the same thing in the past as well.
  • They made a big deal about the import of the latest UFS from FreeBSD in Panther, and their support for UFS was actually reduced in Tiger because they put the Spotlight hooks into HFS+ instead of using the hooks already in the vnode layer in Darwin.

    So don't do anything that would depend on them supporting ZFS.
    • by Ilgaz ( 86384 ) * on Friday October 05, 2007 @11:08AM (#20868963) Homepage
      I don't think the UFS-using community would be happy about Spotlight anyway.

      Spotlight in its current form tries to index every single source file and huge framework headers, and there is no practical way to stop it. I have tried the Privacy pane as suggested and no, it doesn't explain my 130 MB of Spotlight metadata after installing the Developer Tools and a couple of GNU libraries.

      If they had checked NeXT history, they would know that UFS was the default, supported filesystem on NeXT. As OS X is a mix of NeXT with FreeBSD and Cocoa/Carbon, it is pretty natural that UFS finally got into it, even if a bit late.

      I can imagine what Apple needs in order to support ZFS on startup volumes: complete metadata and resource support. They could be happy with a plain filesystem like ext3, but professionals using Apple machines REALLY label their files, sometimes change their icons, sometimes have to FORCE the OS to open a file with a different version of a suite (e.g. Quark 7 vs 6), and add comments to them, and professional software developers like Adobe still store critical data in resource forks.

      If there is a way to make ZFS support all those features without huge hacks (like the ZIP "._" resource stuff), they would give up their HFS+. Another thing is, it must stay backwards compatible with every serious (non-hack) piece of software. You may find yourself using an application from 2001 written in Carbon under OS X, and only it can provide the tool you require.

      I am saying this because some elitists think Apple is backwards and stupid for still supporting resource forks and implementing special features in OS X just to give minimum compatibility with old applications.

      Before criticising HFS+ and suggesting Apple use plain Unix filesystems, they should sit around in a professional environment such as a DTP house or movie studio and see how all those "childish", "backwards" features are used by professionals on the job.

      This is not a post against ZFS; I am just trying to explain why Apple can't magically move to another filesystem just because it has better features. That's not even mentioning the "overhead" required by ZFS, and the fact that there are some 2K/4K (cinema) editing environments in which you can't even enable journaling, let alone add another layer of overhead.

      Also, for what it's worth: if I only used plain Unix tools without any "native Mac" applications, e.g. used OS X as Darwin with X11, UFS would be my choice of filesystem.
      • by kelnos ( 564113 )
        Pardon my potential ignorance, but I was under the impression that there was nothing you could store in a resource fork that you couldn't also store in a 'normal' extended attribute, which file systems like ext3, xfs, reiser, etc. have supported for some time. Is this not the case? Obviously there would need to be some 'conversion mechanism' in OSX to preserve your resfork/extattrs when moving between file systems, but that's just a detail.
      • by argent ( 18001 )
        I don't think the UFS-using community would be happy about Spotlight anyway.

        I'm an old-school UNIX guy who qualifies as part of "the UFS-using community" and Spotlight is Tiger's "killer app" for me.

        Apple already implemented support for resource forks and almost all the other HFS+ metadata on arbitrary file systems. Yes, if you go down to the command line you can see some of that metadata exposed at that level. But that's the way it should be... if you're working at that level you need all the data to be enu
    • by flaming-opus ( 8186 ) on Friday October 05, 2007 @01:06PM (#20871033)
      True, but the capabilities of UFS don't really exceed those of HFS+. ZFS, on the other hand, is a thoroughly modern filesystem. UFS is just as rusty as HFS+.
      • True, but the capabilities of UFS don't really exceed those of HFS+

        In one way, at least, UFS is far better than HFS+.

        The internal redundancy in UFS means that so long as the basic file system structures (directories, inodes, and indirect blocks) are intact, it can be repaired. The idea of having file system damage in a bootable file system that can't be repaired by FSCK is all but inconceivable for UFS or any of its precursor file systems. In nearly 30 years working with UNIX, once FSCK was introduced I *never* had
        • Re: (Score:2, Insightful)

          by Anonymous Coward

          The internal redundancy in UFS means that so long as the basic file system structures (directories, inodes, and indirect blocks) are intact, it can be repaired.

          This has nothing to do with 'internal redundancy'; it has to do with filesystem metadata not being as easy to damage. UFS maintains a kind of free list to allocate new blocks, whereas HFS+ and JFS and XFS use bitmap allocation. If you stomp on part of a bitmap it's way worse than stomping on part of a list. ZFS, on the other hand, has a tree of blocks and can keep a configurable number of redundant copies on each drive, and there are sometimes older copies that exist depending on how full the filesystem an

  • The AppleInsider article is largely vacuous...

    Please do not bother with this debunking (via Macjournals) unless you are truly interested. Thanks.

    http://www.macjournals.com/news/2007/10/04.html#a79 [macjournals.com]
    • This wasn't a debunking, it was more of a whiny "AppleInsider gets way more traffic than we do, so we'll dump on what they have to say, even though we don't have anything to add to it."

      Despite the Macjournals piece, ZFS is cool, and it is better than journaled HFS+ at some things. Deal with it.

    • by IvyKing ( 732111 )
      I thought the Macjournals article was at least as vacuous as the AppleInsider article. Furthermore, the author of the Macjournals article is obviously not very aware of how file systems interact with hard drives (e.g. soft errors) and not very up on the innards of ZFS.


      If HFS or HFS+ are so great, then why isn't there more interest in porting HFS or HFS+ to other OSes such as Linux or the BSDs?

  • I maintain: (Score:5, Interesting)

    by teknopurge ( 199509 ) on Friday October 05, 2007 @10:22AM (#20868221) Homepage
    Sun is the new Bell Labs.

    Watch for the robotics coming out, very quietly, from Sun in the next 10 years.
  • "We don't have an LVM layer to speak of, so we're going to build it into the file system."

    There are a lot of things to like about ZFS. The built-in LVM isn't one of them IMHO, but I can see where it might be attractive if either you don't already have an LVM subsystem or your existing LVM subsystem is complete crap.

    • Re: (Score:2, Informative)

      by lauwersw ( 727284 )

      Way easier to manage: only two commands! Whereas with an LVM you have to place your disks in the desired topology inside your LVM (RAID 0, 1, 5, ...), format them, put a filesystem on, mount, file-check, repair, whatever. With ZFS you place disks in your pool and kinda mount part of it, and that's it.
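
      The two-command workflow, sketched below (pool, device, and filesystem names are examples; this assumes a system with ZFS installed and root privileges):

          import subprocess

          def sh(*args: str) -> None:
              subprocess.run(list(args), check=True)

          # 1. create the pool: topology, "formatting" and mounting all happen here
          sh("zpool", "create", "tank", "mirror", "c1t0d0", "c1t1d0")

          # 2. carve out a filesystem; it is mounted (e.g. at /tank/home) immediately,
          #    with no separate mkfs/fstab/fsck steps
          sh("zfs", "create", "tank/home")

          # contrast: the LVM route is pvcreate + vgcreate + lvcreate + mkfs + mount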

      There are some other things you could complain about: it makes less sense on hardware RAIDs with good management tools. They missed a chance to make it a distributed or clusterable file system (though they bought

    • by mikeee ( 137160 )
      Actually, a built-in LVM makes a lot of sense if you stop to think about it; many of the things an LVM does could benefit from information only the filesystem has.
  • My question is, why ZFS for the Mac? I mean, for 99% of people's uses, it seems like the most enticing features of ZFS are overkill, unless implementing it imposes no extra load on the system when all the features are not being used, and they want to sync up FS development between all of their products, from iPod to Xserve.

    That being said, they may have something up their sleeves, and forgive me if the connection between ZFS and my idea is tenuous. If it seems like a silly idea, I blame the overdose of
