Apple

Apple Discontinues ZFS Project 329

Zaurus writes "Apple has replaced its ZFS project page with a notice that 'The ZFS project has been discontinued. The mailing list and repository will also be removed shortly.' Apple originally touted ZFS as a feature that would be available in Snow Leopard Server. A few months before release, all mention of ZFS was removed from the Apple web site and literature, and ZFS was notably absent from Snow Leopard Server at launch. Despite repeated attempts to get clarification from Apple about its plans for ZFS, Apple has not made any official statement regarding the matter. A zfs-macos Google group has been set up for members of Apple's zfs-discuss mailing list to migrate to, as many people had started using the unfinished ZFS port already. The call is out for developers who can continue the forked project." Daring Fireball suggests that Apple's decision could have been motivated by NetApp's patent lawsuit over ZFS.
This discussion has been archived. No new comments can be posted.

  • by Lemming Mark ( 849014 ) on Friday October 23, 2009 @08:30PM (#29853221) Homepage

    Interesting - we're chugging happily along in Linux / Windows / Mac / Unix land having a load of competing filesystems where all the popular ones have *roughly* similar capabilities. Then ZFS appears in OpenSolaris and filesystem design becomes cool again. Everyone starts either porting ZFS or making filesystems with similar features ... Now a major player that actually *had* ported ZFS (somewhat) is seemingly deciding to go it alone. It seems as though the next-gen filesystem space is also going to have a variety of competing filesystems.

    I generally think this is a good thing; let's just hope that a reasonable degree of interoperability becomes possible anyway.

  • by BoneFlower ( 107640 ) <anniethebruce AT gmail DOT com> on Friday October 23, 2009 @08:32PM (#29853233) Journal

    When SSDs come down A LOT in price, and up in size, maybe.

    Go do a search on Newegg. The biggest they've got is 256GB, and of those, the cheapest is $595. You can get several terabytes of magnetic hard drives for that price.

    SSDs have a place, but as a general replacement for magnetic hard drives they are too expensive with too little capacity.

    There is also more to the file system than access speed.
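A quick back-of-the-envelope check of the price gap (the $595/256GB SSD figure is from the comment above; the $90/1TB magnetic-drive price is an illustrative assumption for the era, not from the comment):

```python
# Back-of-envelope $/GB comparison using the SSD price cited above.
# The $90/1TB magnetic-drive price is an illustrative assumption.
ssd_price, ssd_gb = 595.0, 256   # cheapest 256GB SSD on Newegg, per the comment
hdd_price, hdd_gb = 90.0, 1000   # assumed contemporary 1TB magnetic drive

ssd_per_gb = ssd_price / ssd_gb  # ~$2.32/GB
hdd_per_gb = hdd_price / hdd_gb  # ~$0.09/GB

print(f"SSD: ${ssd_per_gb:.2f}/GB, HDD: ${hdd_per_gb:.2f}/GB, "
      f"ratio: {ssd_per_gb / hdd_per_gb:.0f}x")
```

At these assumed prices the SSD costs roughly 26 times more per gigabyte, which is the gap the comment is pointing at.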

  • by mysidia ( 191772 ) on Friday October 23, 2009 @08:33PM (#29853239)

    However, they still desperately need a next generation filesystem and according to the linked article they're hiring filesystem engineers.

    That doesn't make any sense.

    It only makes sense to engineer a new filesystem if the other options are inadequate or unusable.

    Engineering a new filesystem is hard and expensive.

    For them to seek to do that, they must have rejected the effort to integrate ZFS for some technical reason.

    The complexity of integrating ZFS pales in comparison to the massive cost of engineering and implementing a new filesystem from the ground up.

    Let alone getting the new filesystem to a level of maturity where you can (safely) trust it with your data.

    I think the chances of Apple wanting to engineer a new FS so lightly are pretty slim.

    More likely, they would add new features to HFS+ or make an incremental update.

  • by Culture20 ( 968837 ) on Friday October 23, 2009 @08:38PM (#29853269)

    Why would they need different licensing terms?

    They probably wanted to rename it without changing it. Apple likes renaming things. Microsoft OTOH, loves using the same name as everyone else, and changing stuff to break interop.

  • by BoneFlower ( 107640 ) <anniethebruce AT gmail DOT com> on Friday October 23, 2009 @08:39PM (#29853277) Journal

    Well, technically, they do; I don't think WinFS has been nuked yet.

    It might as well be. Better odds of seeing Duke Nukem Forever.

  • by adolf ( 21054 ) <flodadolf@gmail.com> on Friday October 23, 2009 @08:43PM (#29853289) Journal

    Dear AC,

    Why should we place any higher value on your particular commentary than on all of the other rampant speculation that will be posted below by additional ACs?

    Best regards.

  • by Anonymous Coward on Friday October 23, 2009 @08:43PM (#29853293)

    Significant use (desktop systems) + the need for OSes to write a lot to a drive = a short life for an SSD, regardless of wear leveling.

    SSDs are only good for low-use computers (netbooks) and as a replacement for storage media like optical media where you do much more reading than writing.

    Until we get a device that doesn't have this issue, SSDs will NOT replace HDs. Please, PLEASE do not even consider killing plain HDs like this goddamn industry did to CRTs which are vastly superior to LCDs (even the best of which still have lag and motion tearing issues, no matter what people might say).

    If it's not broke, DON'T REPLACE IT.

  • by saleenS281 ( 859657 ) on Friday October 23, 2009 @08:55PM (#29853365) Homepage
    Why would Apple need different terms? CDDL and BSD are compatible, hence FreeBSD integrating ZFS. Furthermore, Apple already integrated DTrace under the CDDL. Claiming a licensing issue doesn't make sense... at all. The only thing that does make sense is that Apple was trying to add a bunch of proprietary code to ZFS and didn't want to release their changes. Boo hoo.
  • by dangitman ( 862676 ) on Friday October 23, 2009 @09:04PM (#29853421)

    Microsoft obviously wasn't on board for any of this, and without the momentum behind ZFS it never will be.

    Microsoft is never on board for anything useful, so I'm not sure it really makes any difference.

  • by Grishnakh ( 216268 ) on Friday October 23, 2009 @09:04PM (#29853423)

    How did Microsoft get into this? If MS ever adopted ZFS, they'd change a bunch of things just to make it intentionally incompatible.

    What's going to happen is the status quo will remain unchanged: every major vendor will use a different standard filesystem, and only Linux users will be able to read them all (though there may be a little time before they've fully developed the drivers to do so). After all, Linux users can already use btrfs natively, and ZFS via FUSE. Anything new that Apple makes will likely be open source since it'll be at the Darwin level, so Linux can use that too (though perhaps only through FUSE because of licensing again), and anything MS makes will of course be reverse-engineered, so it'll take the longest to support.

  • by Grishnakh ( 216268 ) on Friday October 23, 2009 @09:13PM (#29853467)

    Don't forget the power savings. I just got a new 24" LG panel with LED backlighting that, I believe, uses only 26W at full power. Compared to the old 19" CRT it replaced, it's a huge savings (that one used over 140W I think). Not only does this affect my power bill, but also the temperature in my poorly-ventilated office. Now it doesn't get so darn hot in there. (Yes, I could turn up the A/C, but then the rest of the house would be freezing, so that doesn't make much sense.)

    The picture is nice and bright, and as you'd expect with an LCD, not distorted at all, as I've found most large CRTs to be. Yes, CRTs definitely have better color (at least better than the 6-bit TN panels which are common for LCDs), but ones supporting 1920x1080 or better resolution are downright gigantic, and use a ton of power, and the picture seems to have distortion problems too. I really don't miss having to muck around with all the image adjustment (centering, size, moire, trapezoid, etc. etc.) controls that I had to on every CRT I used.
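The savings are easy to ballpark (the wattages are from the comment above; 8 hours/day of use and $0.12/kWh are illustrative assumptions):

```python
# Annual savings from swapping a ~140W CRT for a ~26W LED-backlit LCD.
# 8 hours/day of use and $0.12/kWh are illustrative assumptions.
crt_watts, lcd_watts = 140, 26
hours_per_day, days_per_year = 8, 365
rate_per_kwh = 0.12

saved_kwh = (crt_watts - lcd_watts) * hours_per_day * days_per_year / 1000
saved_dollars = saved_kwh * rate_per_kwh
print(f"{saved_kwh:.0f} kWh/year saved, about ${saved_dollars:.0f}/year")
```

Roughly 333 kWh a year, on top of the reduced heat load in a small office.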

  • by fnj ( 64210 ) on Friday October 23, 2009 @09:27PM (#29853535)

    Too bad for Apple, not for ZFS. OpenSolaris and FreeBSD support ZFS just fine. I do think it's best suited to servers, and OpenSolaris and FreeBSD are greatly superior server operating systems anyway.

  • by EdIII ( 1114411 ) * on Friday October 23, 2009 @10:20PM (#29853777)

    Must not be as good as what you are smoking. You are being simplistic.

    Total IOPS on SSDs are higher, but that is mostly READ IOPS, not WRITE IOPS. SSDs have typically performed very poorly with random writes. Obviously you don't understand that vendors mostly quote TOTAL IOPS because it sounds good. What about the write IOPS? You are conveniently leaving that out. That, and the fact that in some cases read IOPS and write IOPS on the same SSD are nowhere near each other. This creates problems in a server environment, depending on what you want to do. Since we are talking about ZFS here, I doubt we are talking in the context of a home user.

    Quite recently, write endurance and wear leveling were major concerns. There is no way that a generation 1 SSD could ever compete against SATA in the datacenter when you take everything into consideration. AFAIK, they still have not fixed the write performance in commodity SSDs on the market. Fusion-io and OCZ have taken steps, expensive steps, to bring offerings with much better write performance to market.

    Theoretically, if SSDs were that much better we would see people making RAID setups with them and seeing stellar results. Yeah.... Quite a few people report that the performance was really slow when logic told them it was going to be really fast.... I wonder why.

    RAID can benefit you depending on what RAID you are using and WHY you are using it. You need to consider what you need first. Read performance? Fault tolerance? Write performance? Application requirements? You just can't throw an SSD at the problem and say, "Uhhh.. Solid State will solve the problem dawg".

    If you are trying to create a database server that is going to be doing hundreds of thousands of updates a day... you are NOT going to succeed doing it with SSD drives.

    Bottom line, if all you want is really high READ IOPS, and don't need fast write capability, then yeah... go with an SSD. Just don't blow smoke up people's butts by claiming its performance alone can negate the need for newer and higher performance file systems. Puhleeeze.

  • by seanadams.com ( 463190 ) * on Friday October 23, 2009 @10:38PM (#29853835) Homepage

    You have gravely underestimated the capacity for lawyers and bean counters to fuck up a great idea.

  • Re:Correction (Score:3, Insightful)

    by melikamp ( 631205 ) on Friday October 23, 2009 @10:44PM (#29853865) Homepage Journal

    Merits close study, as the concepts of ZFS overtake current best practices

    That is, assuming that having the file system and the volume manager tied to each other is a good thing. I think I can come up with a bunch of reasons for why this is a pretty terrible idea and why modularity here is a good thing.

  • by Lemming Mark ( 849014 ) on Friday October 23, 2009 @10:47PM (#29853877) Homepage

    OMG I fail so hard!

  • by FiloEleven ( 602040 ) on Saturday October 24, 2009 @12:15AM (#29854189)

    Apple's CoreOS team includes several of the lead engineers from the ZFS project (who fled the remnants of Sun in the Schwartz melt-down), and the architect of the BeFS.

    If this (potentially) verifiable information is accurate, then it, along with the claim of sources, supplies two things missing from most if not all of the other AC speculation. The scenario is plausible and credible, so it is reasonable in the absence of contrary evidence to lend more weight to the AC above.

    How much more weight will of course vary amongst individuals, but it seems the mods have deemed it worthy. Wisdom of crowds and all that.

  • Re:Correction (Score:5, Insightful)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Saturday October 24, 2009 @12:45AM (#29854323) Journal

    Modularity is a good thing, if you draw the modules in the correct place. And ZFS is indeed modular...

    Take all of this with a grain of salt, as I haven't actually used ZFS, only designed something similar, then found ZFS did it already, then decided I didn't care and used Linux anyway.

    My understanding is that there's a storage layer, where you can simply request storage for some purpose, and it gives you that storage with an id -- kind of like an inode. You could build a filesystem on top of that, managing all the metadata yourself. Or you could build something else -- a database, for example. And the filesystem layer still handles all kinds of things, like permissions, directories, etc, it's just that the allocation has been separated out.

    Thus, the allocation layer can make smart decisions about things like which disk to allocate something on, or what actually needs to be replicated, etc.

    For example: Think about any kind of RAID, even striping. If RAID can only work with whole volumes, that means the entire discs have to be synced (in RAID1) or checksummed/paritied (in RAID5), including free space. In ZFS, not only can you avoid replicating free space, but I believe you could also specify which files are important and which ones aren't -- and only the important ones are replicated, thus saving space on the ones which aren't.
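A toy space-accounting model of that difference (illustrative numbers only; this is not ZFS's actual allocation logic, though ZFS exposes the per-file idea through its per-dataset 'copies' property):

```python
# Toy model contrasting whole-volume RAID1 with pooled, per-file replication.
# All figures are illustrative, not ZFS's real on-disk accounting.
disk_gb = 1000       # size of each disk in the mirror
data_gb = 400        # space actually occupied by data
important_gb = 100   # subset of that data flagged for redundancy

# Whole-volume RAID1: the second disk mirrors everything, free space included.
raid1_total = 2 * disk_gb

# Pooled allocation: store all data once, plus a second copy of only
# the important files.
pooled_total = data_gb + important_gb

print(f"RAID1 consumes {raid1_total} GB of raw capacity; "
      f"pooled replication consumes {pooled_total} GB")
```

The point of the toy numbers: mirroring at the volume layer pays for redundancy on free space and unimportant data alike, while allocation-aware replication pays only for the copies you asked for.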

    Another, more theoretical example: SSDs are a bunch of hacks on top of hacks. See, erasing and then writing to the same flash cell over and over wears it out, and there's no seek time. Largely because of Windows, I would guess, SSDs these days implement wear-leveling in the firmware, so that the OS sees only a logical disk that pretends to be a hard drive. But this means they always have to keep a number of cells unallocated, and it slows down writes to have to erase each cell before writing.

    So, someone came up with the ATA TRIM command, where the OS could tell the SSD which blocks are no longer in use, and the SSD can actually erase them.

    Compare this to the old solution -- implement wear-leveling in software. There were a few filesystems written to run directly on the flash, and it was actually the filesystem doing the wear-leveling. This meant the filesystem could intelligently spread new writes over free space, instead of having to keep some arbitrary number of blocks in reserve...
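A minimal sketch of that idea (a toy allocator only, nothing like a real flash translation layer or a real flash filesystem):

```python
# Toy software wear-leveling: always write to the least-worn free block,
# so erase counts stay nearly uniform across the whole device.
def leveled_writes(num_blocks: int, num_writes: int) -> list[int]:
    wear = [0] * num_blocks             # erase count per block
    for _ in range(num_writes):
        target = wear.index(min(wear))  # least-worn block takes the write
        wear[target] += 1               # model: each write erases that block
    return wear

wear = leveled_writes(num_blocks=8, num_writes=100)
print(wear, "spread:", max(wear) - min(wear))
```

Because the filesystem can see all free space, the spread between the most- and least-worn blocks never exceeds one erase in this model; a firmware FTL hiding behind a hard-drive interface has to achieve something similar with far less information.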

    This is getting fairly technical, and also boring, as ultimately, it's not that much of a difference. But this just shows the potential modularity of a system like ZFS. See, if ZFS separates out the allocation, that means you could replace that part without touching the filesystem, database, or anything else. And you could probably replace it with something that knows how to deal directly with a flash device -- something which, for example, erases blocks as ZFS snapshots are deleted.

  • by jone1941 ( 516270 ) <jone1941NO@SPAMgmail.com> on Saturday October 24, 2009 @01:10AM (#29854421)

    I'm sorry, perhaps I'm just a bit dense, but what is the benefit of a "ubiquitous file system" that is largely targeted at server infrastructure? Generally speaking, I agree that parallel efforts to accomplish a similar / identical task can be deemed wasted effort on some level. However, that trend is pretty much the standard for all open source projects: Linux vs BSD, WebKit vs Gecko, MySQL vs Postgres, PHP/Perl/Python/Ruby; the list goes on and on. There are a multitude of reasons projects (corporate backed or otherwise) choose to go their own way. In some cases I'm sure there are benefits to universalizing implementations of certain technologies (perhaps huge projects like gcc), but specifically server-grade file systems? I just don't see what the big deal is. Yes, ZFS has a large feature list, but clearly if there are patent concerns it's probably for the best that it didn't end up in the Linux kernel (or in OS X).

    When I hear that people are working on btrfs I don't think "Oh no, this will only lead to server file system adoption fragmentation". Instead I think "that sounds like an interesting project for someone; I'll be sure to track its progress and I look forward to seeing the benefit of it someday". When I heard that ZFS was not going to be merged into the Linux kernel I wasn't particularly concerned. I'm sure that it provided useful features, but I'm also sure that there are a lot of intelligent people working on Linux who could come up with something similarly useful if they felt it was worthwhile. Like I said, it's entirely possible that I'm missing the boat here, so please feel free to correct me.

  • by Anonymous Coward on Saturday October 24, 2009 @09:59AM (#29856323)

    Your discussion about SAS, SSD, SATA notwithstanding, your dig on fibre channel makes no sense. You say that FC is dead, but offer no alternative architectures that provide what FC provides, such as multi-host connectivity, storage sharing among multiple hosts, storage mirroring over distance and replication.

    Dedicating 20TB of SAS, SSD, SATA, etc. to every host is simply wasteful, both from a cost, and a real capacity standpoint.

    Are you saying that it will all be replaced by file servers? That doesn't make any sense either, as file servers can't compete with the storage throughput that you mentioned.

    FC will be around. FCoE isn't even a reality, let alone a viable solution today. 10GE port prices are coming down, but remain very expensive. Can your system get any work done when dealing with roughly 833,000 interrupts per second (a saturated 10 Gb/s link at 1500 bytes per packet)? This limits 10GE significantly without expensive and complex HBA adapters.

    The new technologies you mention may combine into really high-performance solutions. However, I don't expect those technologies to combine for every server in a 1000-server data center.
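The interrupt-rate arithmetic works out as follows (assuming one interrupt per 1500-byte MTU frame, ignoring jumbo frames and interrupt coalescing, both of which real NICs use to cut this number down):

```python
# Packets per second on a saturated 10GbE link at 1500-byte frames,
# assuming one interrupt per packet (no coalescing, no jumbo frames).
link_bps = 10 * 10**9    # 10 gigabits per second
frame_bits = 1500 * 8    # one 1500-byte MTU frame, in bits

pps = link_bps / frame_bits
print(f"{pps:,.0f} packets (and, in this naive model, interrupts) per second")
```

That works out to roughly 833,000 packets per second, which is why interrupt coalescing and offload-capable adapters matter at 10GE speeds.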
