Sun CEO Says ZFS Will Be 'the File System' for OSX 384
Fjan11 writes "Sun's Jonathan Schwartz has announced that Apple will be making ZFS 'the file system' in Mac OS X 10.5 Leopard. It's possible that Leopard's Time Machine feature will require ZFS, because ZFS has backup and snapshots built right into the filesystem, along with a host of other features. 'Rumors of Apple's interest in ZFS began in April 2006, when an OpenSolaris mailing list revealed that Apple had contacted Sun regarding porting ZFS to OS X. The file system later began making appearances in Leopard builds. ZFS has a long list of improvements over Apple's current file system, Journaled HFS+.'"
I'm giving odds... (Score:5, Informative)
5:1 that it's not the default root file system in Leopard.
The first bootable release of ZFS (not "BUILD," but "RELEASE") isn't even due until the Fall.
I'm not alone in this skepticism. See this Ars story, for example.
http://arstechnica.com/journals/apple.ars/2007/06
He's already backpedaled (Score:5, Informative)
"I don't know Apple's product plans for Leopard so it certainly wouldn't be appropriate for me to confirm anything. [...] There certainly have been plenty of published reports from various sources that ZFS is in Leopard, I guess we will all have to wait until it is released to see if ZFS made it as the default, or if they simply announce that it will become the default in a future release."
No no no (Score:5, Informative)
Re:Switch all filesystems to ZFS... (Score:3, Informative)
Re:The were going to use Reiser (Score:4, Informative)
Re:Booting from ZFS? (Score:5, Informative)
March 28th, 2007 at 19:50 UTC [opensolaris.org]
Re:oblig... (Score:5, Informative)
Re:It WAS... (Score:3, Informative)
Re:The were going to use Reiser (Score:1, Informative)
The Linux ZFS driver is being written to use FUSE, since GPL and CDDL code can't be mixed due to the GPL's restrictive nature. Sun open-sourced Solaris almost two years ago; everything is at opensolaris.org.
Re:Oh, great: another DiskWarrior lag (Score:3, Informative)
Well, it's true! NTFS volumes never need defragmentation! On the other hand, Microsoft provides you with a defragmenter service (at least in 2k and later) and allows you to defragment files on NTFS volumes... :D
Re:oblig... (Score:3, Informative)
Re:oblig... (Score:4, Informative)
Re:Let's hope it's the truth (Score:4, Informative)
I already know how Time Machine is going to work (it was part of the filesystem presentation at last year's WWDC... so I know it, but can't reveal it), and unless they have completely redone that entire system (which was quite elegant), ZFS will not bring a single thing to it. I do know how ZFS could make it all really elegant, but Apple already has it covered on HFS+.
Re:Is that all? (Score:3, Informative)
Re:It WAS... (Score:3, Informative)
In Windows 2000 and XP you can format FAT32 up to 160 GB, assuming you have the correct driver. With SP2 for XP you can format it up to 250 GB, I believe. Most removable drives from Maxtor, for instance, were and are formatted FAT32.
Technically you're right, though, since most Linux distros can format FAT32 up to 2 TB. NTFS is vastly superior anyway, so the issue has never really affected me personally. Of course, on a Windows machine you don't have to use Microsoft's formatting or partitioning tools: you can always format FAT32 up to 2 TB on your Linux box, then put the disk into a Windows box and it will read it just fine. I can't imagine why you would want to do that, but the option exists.
Re:I'm giving odds... (Score:5, Informative)
ZFS is far better at streaming the kind of large files used in video editing, which is already a huge draw for Macs. Since it is copy-on-write, writes are done without seeking, so they are very fast and can be spread across multiple drives in parallel. IIRC, within a ZFS pool (a collection of drives) you can make different 'filesystems' mirrored or striped, so you can have a mix of redundancy and performance in the same pool.
You can take your 100 GB video and instantly say 'snapshot this', then make any number of changes to it, and if you don't like them just revert back again. Contrast that with every other filesystem (besides Spiralog), where you have to make a 100 GB copy as a backup -- which takes forever, so nobody does it unless they have to.
You can drop in a new drive and say 'use this drive', and your existing filesystem instantly has more space available and is more fault tolerant, or faster, or both. If you want to remove a drive, you say 'don't use this drive' and can still use the OS normally while it moves data off to the other drives.
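The snapshot/rollback and drive-add workflows described above look roughly like this from a Solaris shell (the pool name 'tank' and the dataset and device names are invented for illustration):

```shell
# Instant, nearly free snapshot of the dataset holding a big video project
zfs snapshot tank/video@before-edit

# ...make destructive edits, then throw them away if you don't like them
zfs rollback tank/video@before-edit

# Drop in a new mirrored pair of drives; every filesystem in the
# pool sees the extra space immediately, with no downtime
zpool add tank mirror c2t0d0 c3t0d0
```

Because snapshots are copy-on-write, the snapshot itself costs almost nothing; only blocks changed after the snapshot consume new space.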
Something like ZFS, that "touches so many other applications and parts of the OS" has to be the default. Otherwise you have to support two completely different ways of using the system. And that bloat and complication costs a lot more than just getting it right through extensive testing. If you are really worried about it, don't upgrade the OS for a while.
Re:case-insensitive: performance, i18n, safety (Score:5, Informative)
Re:I doubt it (Score:1, Informative)
ZFS is better than ntfs because of:
Transactional operation (always consistent on disk, even in a power outage)
Checksums(and ability to recover data in flight using RAID info)
Dynamic striping
Dynamic file block sizes
Limitless size and quantity of pools/FS/snapshots
Automatic parallelism
Should I keep going?
Re:Oh, great: another DiskWarrior lag (Score:3, Informative)
A common misconception. The "zfs scrub" command will scan the filesystem and try to correct any errors that are found (or panic the kernel); the difference is that ZFS can do this while the filesystem is mounted.
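For reference, the online check-and-repair pass mentioned above is a one-liner (pool name assumed):

```shell
# Walk every allocated block, verify its checksum, and repair from
# redundant copies where possible -- all while the pool stays mounted
zpool scrub tank

# Watch progress and see any errors that were found
zpool status -v tank
```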
Re:I'm giving odds... (Score:1, Informative)
Re:oblig... (Score:3, Informative)
Which I find really deplorable; I would love to have it in the kernel.
And yeah, I'd like an IFS (Installable File System) driver, but none exists.
ZFS for FUSE (Score:3, Informative)
Because I have suffered some random corruption in the past, even with ext3 ("This MP3 didn't used to have a skip there!"), I wanted the checksumming so that I can tell when I need to restore something from a backup.
As a filesystem it works completely, including creation of new filesystems, compression, checksums, etc. However, I've noticed a decrease in my system's general performance since installing it (probably because it holds my home directory). Memory usage and mysterious CPU usage (I don't think it's the checksumming) are the current disadvantages, but the author says it's still completely unoptimized.
Should you try zfs-fuse? Definitely. But right now don't expect a performance gain.
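If you want to try zfs-fuse without dedicating a whole disk, a file-backed pool is enough. A sketch, assuming the zfs-fuse package is installed and you have root; all names here are illustrative:

```shell
# Start the userspace daemon
zfs-fuse &

# Create a pool backed by a plain 256 MB file -- handy for experimenting
dd if=/dev/zero of=/var/tmp/zpool.img bs=1M count=256
zpool create testpool /var/tmp/zpool.img

# Make a dataset with compression enabled (checksumming is on by default)
zfs create -o compression=on testpool/scratch
```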
Re:I'm giving odds... (Score:4, Informative)
You are right to a certain extent, but you have to realize that current file systems are old and clunky. For a desktop or a few non-critical servers, moving to the new tech is a great idea. Down the road, when ZFS is more mature and understood, it's going to be a welcome addition to most production setups. If you ran real-world, mission-critical prod setups needing high availability, you'd understand.
Imagine you have a huge medical database on several servers and are running out of disk space. To expand, you need to plug in new hard drives, create RAID setups, create partitions, move data over, restart the database, verify again and again, schedule downtime, etc. You can't easily and efficiently grow file systems unless you're using an expensive piece of software like Veritas Volume Manager. With ZFS, all I need to do to expand disk space in a current, working filesystem is:
zpool add oraclefs mirror c1t1d0 c2t1d0
No LUNs to deal with. No other filesystem bullshit. You have no idea how excited this makes me for services that require large amounts of growing storage.
Read up on ZFS here: http://www.opensolaris.org/os/community/zfs/ -- it is the best thing to come out of Sun in a long time.
Re:Switch all filesystems to ZFS... (Score:3, Informative)
The OpenSolaris project is currently working on getting bootable ZFS support (available in the current release as experimental).
Re:oblig... (Score:5, Informative)
You definitely can port a filesystem to Windows using only documented APIs, but it's a long and tedious process. I'm currently porting FUSE to Windows, so I know.
Re:ZFS for FUSE (Score:3, Informative)
A user space filesystem is not something I would expect performance from.
ZFS still has bugs (Score:5, Informative)
We run NetBackup Enterprise on Solaris 10. During our last round of upgrades we installed ZFS on our disk staging storage units, replacing VxFS. The way disk staging storage units (DSSUs) work in NetBackup, the disk is always near 100% full from a Unix perspective: any time more disk is needed, the oldest image that has been copied to tape is expired from disk, freeing up more room. However, ZFS's most prominent bug from our perspective is that during periods of high activity, if all blocks become allocated, it becomes impossible to unlink(2) a file. This leaves the application unable to make space for new backup images.
Drop to a shell, try to rm a file, and it comes back: rm failed, disk is full.
Well, if the disk is full, and you can't rm because the disk is full, how do you free up space?
Sun's response: truncate an unnecessary file using 'cat
Ok - so how do you tell a compiled application to truncate an unnecessary file before unlinking it? You can't! How can you determine what an unnecessary file is? If you delete the image before expiring it from the catalog you get errors when you try to expire, so you end up with catalog corruption.
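From an interactive shell, at least, truncating before unlinking is easy to demonstrate; a sketch with a throwaway file standing in for the "unnecessary" one (the real pain, as described above, is doing this from inside a compiled application):

```shell
# Create a throwaway file standing in for the 'unnecessary' one
dd if=/dev/zero of=/tmp/victim bs=1024 count=100 2>/dev/null

# Truncating in place frees the blocks without an unlink(2),
# which is the behavior the suggested workaround relies on
: > /tmp/victim

wc -c < /tmp/victim   # now 0 bytes
rm /tmp/victim        # and the unlink succeeds
```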
All in all, this is a problem that should never have been introduced, let alone still exist after months of sending trace outputs and reproducing it in multiple environments. ZFS isn't ready for the real world.
Re:case-insensitive: performance, i18n, safety (Score:1, Informative)
I likewise highly recommend it.
ZFS not ready for prime time (Score:5, Informative)
I set up ZFS on some SAN storage in a new system. The internal boot disks were mirrored UFS. When one of the HBAs fried, the SAN storage disappeared -- and the system panic'd.
Every reboot thereafter stopped in a panic: the ZFS subsystem panic'd the system at every boot when it couldn't find all of its volumes. After calling Sun support, I found out that they need to do a massive code redesign to catch that issue, and the fix wouldn't be out for at least six months.
I'm sure ZFS will be great -- once they clean up these types of showstopper bugs.
Re:I'm giving odds... (Score:2, Informative)
Except of course NTFS which has been doing crap like this for several years now. So I'm not sure how you can seriously say 'every other filesystem'.
This announcement isn't as exciting to the Windows world as it is to the OS X and *nix world where the features being offered by ZFS have not been available or consolidated into one FS model.
NTFS, on the other hand, has been doing this stuff for quite some time, although ZFS does raise the bar even beyond NTFS by making the jump from terabytes to exabytes of effectively unlimited storage; beyond storage limits, though, there is not much contrast in the 'features' the two filesystems offer.
I actually hope OS X does get ZFS in place and makes it the default, as the Time Machine features would benefit from it greatly, just as Windows 2003 and Vista use features of NTFS to make 'Previous Versions' quite painless in terms of performance. On a filesystem that is neither ZFS nor NTFS, implementing a feature like this would be a serious performance hit.
Re:oblig... (Score:2, Informative)
Re:I'm giving odds... (Score:3, Informative)
Time Machine will not require ZFS (Score:3, Informative)
Re:I'm giving odds... (Score:5, Informative)
So Apple does leave backwards compatibility stuff there, however they make the new stuff so much better, that developers and users want to get it ASAP.
Now compare this to Microsoft. While
Re:ZFS still has bugs (Score:3, Informative)
The price of copy-on-write. Other systems with a copy-on-write file system [netapp.com] can exhibit this behavior.
Re:I'm giving odds... (Score:4, Informative)
The old PPC builds of Photoshop are also much slower on PPC than the only universal version, CS3. They moved Photoshop from CodeWarrior to Xcode between CS2 and CS3, and it's the most massive rewrite they've ever done. So you can't distinguish how much of the speed difference of CS3 over CS2 on Intel is due to it being Intel native, and how much is simply due to the rewrite itself being faster.
Re:oblig... (Score:3, Informative)
"No" on all three counts (Score:3, Informative)
No, no and no.
Re:I'm giving odds... (Score:3, Informative)
In reference to your questions:
NTFS has data checksums to detect and repair corruption caused by any component?
You can add and remove disk space from an NTFS volume dynamically?
NTFS does data-level journaling not to mention without the overhead of multiple writes of the data?
NTFS can use compression without getting horrible fragmented or other negative side effects?
NTFS snapshots do not affect performance of the normal system?
NTFS has variable block sizes?
NTFS is open source and took less than a decade to get support on multiple systems?
Yes & No - All FS models implement checksumming features, although not to the same per-block level you are going for; it is also far less impressive or important than you seem to think it is.
Yes - Dynamic adding and removing has been in NTFS for a long time. Vista even adds a newbie interface for everything from partition resizing to the old-school features of dynamic volume spanning, RAID, etc.
Yes - Go look up the original NT journal features from 1991, and the expanded features used in Vista.
Yes - Compression offers no more fragmentation than normal NTFS writes. This is insane.
Yes - Snapshots, oh yes, have you not heard of 'previous versions' or System restore, they are built on the NTFS's various snapshot abilities.
No - NTFS does not support variable block sizes beyond the initial selection when formatting the volume.
Yes - No, not open source, but I don't see MS suing anyone using it. *wink. I also see it being used on OS X, Linux, and other OSes without much trouble. It also was developed over the course of 1990-1992, and even the current versions in use in Vista only slightly vary because of the robust and extensible model NTFS was built upon.
For a 1992 FS, NTFS is STILL ahead of MOST other filesystems available, with the exception of a few features you can pick and choose, so you are making a really stupid argument here.
I NEVER said NTFS was still superior, but was rather making a point that MANY of the features that make ZFS so attractive are features that have been in NTFS for a LONG TIME.
If I was arguing NTFS was superior, I would have done a smart reply like yours comparing ZFS to NTFS:
Does ZFS support encryption?
Does ZFS have minimal CPU usage on small file writes?
Does ZFS compression support multi-threading?
Does ZFS accurately report in-use disk space, or does it have problems because of its reliance on snapshots?
Does ZFS support a high compression ratio?
Does ZFS support Quotas?
Does ZFS support 'online' pool reconfiguring?
As you notice, NTFS still has 'features' even ZFS doesn't if you want to pick each of them to death.
Again I will state that ZFS is a good set of ideas and does move filesystem concepts forward by pulling more of the storage model into the FS itself. It is also 128-bit and allows for almost unlimited storage.
ZFS has a lot of good things, but that doesn't mean that NTFS is an old dog or hasn't already been doing some of these features, even if they are not implemented in the same storage pool metaphor.
So once again, for the Mac world, ZFS is an awesome way to go if they can get the performance in line with their needs. However, it is STILL just catching up with NTFS, which is very feature-rich and very solid and won't be hitting any storage-size walls in the next 10-15 years.
Am I not allowed to believe both ZFS and NTFS are good technologies?