
Sun CEO Says ZFS Will Be 'the File System' for OSX

Fjan11 writes "Sun's Jonathan Schwartz has announced that Apple will be making ZFS 'the file system' in Mac OS 10.5 Leopard. It's possible that Leopard's Time Machine feature will require ZFS to run, because ZFS has back-up and snapshots build right in to the filesystem as well as a host of other features. 'Rumors of Apple's interest in ZFS began in April 2006, when an OpenSolaris mailing list revealed that Apple had contacted Sun regarding porting ZFS to OS 10. The file system later began making appearances in Leopard builds. ZFS has a long list of improvements over Apple's current file system, Journaled HFS+.'"
This discussion has been archived. No new comments can be posted.


  • I doubt it (Score:5, Insightful)

    by ceswiedler ( 165311 ) * <chris@swiedler.org> on Thursday June 07, 2007 @11:55AM (#19424521)
    Jobs is probably not happy about his thunder being stolen right before the June 11th keynote

    I strongly doubt he didn't know about it. This is Jonathan Schwartz, not an OS X rumors blogger. At any rate, ZFS in OS X is Sun's thunder; Time Machine is Apple's thunder, and that's already announced. How many OS X users (other than slashdot readers) will care in the slightest about the underlying filesystem? What they care about are the features, like Time Machine, that it enables.
  • by Otterley ( 29945 ) on Thursday June 07, 2007 @12:00PM (#19424615)
    If ZFS is the default file system, it will mean that Time Machine (i.e. the snapshot feature) of 10.5 will be able to take snapshots without requiring a secondary file system to keep the copied (recoverable) blocks, as it does now with HFS+. To me, the secondary filesystem requirement makes Time Machine essentially useless on a laptop.
  • by Vellmont ( 569020 ) on Thursday June 07, 2007 @12:02PM (#19424643) Homepage

    5:1 that it's not the default root file system in Leopard.

    It would be foolish to make any new technology that touches so many other applications and parts of the OS the default when you don't have to. It's much smarter to make it an option and try to shake out any problems that arise. Then make it the default at a later date.

  • by multisync ( 218450 ) on Thursday June 07, 2007 @12:08PM (#19424753) Journal

    because ZFS has back-up and snapshots build right in to the filesystem


    I think Slashdot would benefit from adopting some of K5's approach to story submissions. The Firehose is a great start, but instead of simply saying yes or no, users should be able to give feedback to the submitter. The summary for this article is a great example. The submitter typed "build" instead of "built," resulting in an annoying distraction in an otherwise concise description of the story.

    Newspapers have Copy Editors (at least they used to; most seem not-too-bothered by spelling these days). It would be nice if interested Firehose users were given the opportunity to help make sure the summary was fit for publication before it hits the front page.

    I guess this should have been a journal entry, but it seemed like an opportune time to bring this up.
  • by N3WBI3 ( 595976 ) on Thursday June 07, 2007 @12:10PM (#19424785) Homepage
    This will make going from earlier versions of OSX to the new one more of a pain because the whole disk will have to be reformatted.
  • by Hes Nikke ( 237581 ) on Thursday June 07, 2007 @12:12PM (#19424827) Journal

    If they move to an open source file system, iTunes for Windows could easily include a ZFS driver.

    And since Apple has all the rights and source to HFS(+) (Journaled), they could just as easily write a Windows driver for it as well.
  • by r00t ( 33219 ) on Thursday June 07, 2007 @12:13PM (#19424869) Journal
    Performance:

    Suppose I want to access a file.

    First, the filesystem looks it up. This operation takes time proportional to the log of the directory size. Maybe you do better with hashes.

    On a case-sensitive (POSIX-compliant) filesystem, you're done. You have the file, or you can return an error code.

    On a case-insensitive filesystem, you're done if you're lucky. If not lucky, you need to do a linear scan of the whole damn directory. Many places have a directory with some insane number of files. Intentionally or not, it's common to go into the tens of thousands. A few places (running XFS mainly, sometimes ReiserFS) get into the millions.

    Because of the way directory listings are done (read then look up stats) you can generally square the above numbers. Ouch.

    I18N:

    Then there is the issue of internationalization. For example, consider "I" and "i". Turkish has a dotted uppercase "İ" and a dotless lowercase "ı", so its uppercasing and lowercasing rules differ from what most people are used to. Oh crap! This issue doesn't exist on a case-sensitive filesystem.

    Safety:

    App needs to make a file. App sees that file does not seem to exist. App writes file. Complex international case rules mean that no, the file DOES exist, and it gets clobbered.
  • by TheVelvetFlamebait ( 986083 ) on Thursday June 07, 2007 @12:35PM (#19425209) Journal
    If you don't like geeks discussing their experiences with technology, you could always, y'know, stop reading slashdot!
  • by Vancorps ( 746090 ) on Thursday June 07, 2007 @12:44PM (#19425355)
    You also don't "need" to defragment the files. The FS is perfectly happy filling in the gaps with additional files. Performance will suffer but it will indeed work reliably.
  • by Vellmont ( 569020 ) on Thursday June 07, 2007 @12:52PM (#19425481) Homepage

    Are you kidding? This is ZFS we're talking about.

    Right, I'm sure it won't wind up breaking some important application at the expense of adding a whiz-bang feature that 95% of the users couldn't care less about.

    I'm not a Mac user, but even if I could (maybe I can) add ZFS to my Linux workstation I wouldn't. I prefer stable and reliable over untested new features. I think most people feel the same way, so making the default something else makes a lot of sense. If ZFS is as great as you say, it will eventually become the default, and anything it breaks that's important will have been fixed.
  • by N3WBI3 ( 595976 ) on Thursday June 07, 2007 @12:56PM (#19425551) Homepage
    And the windows tool has *always* been crap which results in less stable and predictable systems.
  • Re:I doubt it (Score:3, Insightful)

    by BosstonesOwn ( 794949 ) on Thursday June 07, 2007 @01:00PM (#19425617)
    Also to add to your point.

    Their new 10.5 servers will probably be closely related to some products in the Sun stable at the moment. With a successful launch, this could help start reversing Windows' dominance in the server market.

    If Mac starts adding more aspects of Unix to the OS and gives it a pretty interface, Windows Small Business Server markets could be in trouble. I would personally like to see more of these boxes around.
  • by beezly ( 197427 ) on Thursday June 07, 2007 @01:29PM (#19425993)

    TimeMachine is a backup tool, not really a live versioning tool. That makes having a second volume a requirement. If you don't understand that, then you don't understand what backups are for.

    I can think of a few good cases for "backing up" to the same physical device.

    Here are the reasons I back up my employer's data:

    • Hardware failure. Disks die, tapes fail, etc. RAID helps guard against this, but it doesn't help you guard against disks that fail silently (ie, they corrupt data rather than return an error). Backups to the same device are generally useless in this case.
    • Software failure (bugs). Your OS writes garbage to the filesystem, or your application writes garbage. Backups to the same device can sometimes protect against data-loss in this case.
    • User error. User deletes files. Backups to the same device are good in this case.
    • Operator error. I delete files. Of course, this never happens! If it ever did, backups to the same device would probably mean I could restore the data quickly.
  • by DECS ( 891519 ) on Thursday June 07, 2007 @01:46PM (#19426205) Homepage Journal
    Remember that Apple's Macs are EFI Intel PCs now. You don't need LILO and GRUB to start up an operating system, as EFI provides a minimal but sophisticated environment for handling multiple boot devices and system launching. It's like the Sun/Apple OpenFirmware that Macs have always had.

    You'd only need those things to get Mac OS X running on a DOS PC, or when using ZFS with Linux, right?

    ---
    Microsoft Surface: the Fine Clothes of a Naked Empire [roughlydrafted.com]
    What happens when the core values of an empire are exposed as a fraud? Does it prompt it to change? More likely, it results in the generation of more false information to cover up the embarrassing failings.
  • Re:oblig... (Score:3, Insightful)

    by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Thursday June 07, 2007 @02:01PM (#19426449) Homepage

    I use it at home for storing video files and have not suffered any data loss.
    The plural of anecdote is not data.

    ZFS does indeed look like the greatest thing since sliced bread, but if all I cared about was "not losing data", then I'd just stick with ext2 or FAT. (But I care about more than that, which is why I use XFS, ext3, HFS+ and NTFS for various things.)

  • by alexhmit01 ( 104757 ) on Thursday June 07, 2007 @02:21PM (#19426833)
    Apple is "well known" for massive backwards compatibility updates... except they aren't... They always handle transitions over a couple of versions, intelligently bringing people along. They swapped processor architectures twice, and each time brought people along with emulators. In the Intel case the new machines weren't faster than the fastest G5 machines, but those of us upgrading 3+ year old machines (Powerbook G4 1GHz -> Macbook Pro in my case) found our PPC apps running faster and Intel code flying.

    We all expected the Intel migration to happen with 10.5, they shocked us when they did it off the 10.4 base.

    While they did abandon Mac OS to move to OS X, they provided a migration strategy (Carbon) and a compatibility layer (Classic). Classic support shipped with 10.0/10.1 and 10.2, and was supported in 10.3 if you already had it, as well as 10.4 I think; they kept Classic around for some 5 years, which gave everyone time to migrate to Carbon. It's unfortunate that there is no long-term Classic via Rosetta from a classic application point of view, but they didn't leave anyone in the lurch.

    I expect 10.5 to introduce this FS, which will be useful for new installs, or for external drive arrays, especially for the Video market, but I wouldn't expect it to be the default. OS X has supported a Unix file system, but defaulted to HFS+, because HFS+ was compatible with Mac OS, so you could dual-boot OS 9 and OS X for a good 2 years on new hardware to maintain compatibility. If they hadn't done that, they would have lost the Pro-Audio and Pro-Video markets, which took a few years to get native OS X applications.

    Getting it in the wild and for professionals would help that market, while not breaking ANYONE's compatibility. Sometime in 10.5's lifetime they may ship new computers with it, or they may wait for 10.6 in two years. But giving everyone two years is plenty of time to get utilities and applications compatible with the new file system.

    The flashy consumer features are touted for the OS, but the underlying architecture has always followed a 2-cycle release. If you've used OS X Server for 10.2/10.3/10.4, you'd notice that they introduced stuff in one version with limited exposed functionality (with the rest via the Unix layer), enhanced the functionality in the next rev, and polished thereafter.

    The Apple Mail Server -> Cyrus migration was somewhat poorly handled, but mostly because AMS was garbage. But the 10.4 mail tools are night and day beyond the 10.3 ones.

    They are actually far more careful than people give them credit for.

    The difference is, they don't keep backward compatibility as a long-term goal; they do a two-stage migration, giving people 2-4 years to transition.
  • by teknopurge ( 199509 ) on Thursday June 07, 2007 @02:35PM (#19427083) Homepage

    Discuss.
  • by suggsjc ( 726146 ) on Thursday June 07, 2007 @04:04PM (#19428587) Homepage
    Ok, well, we can agree to disagree, but I would think it has very little to do with the question and everything to do with how you worded it.

    For example:
    If you asked, "Do you want your software to 'just work,' or to risk having your computer burst into flames just to add some eye-candy?" then I would imagine most people would opt for the stable approach.
    However, if you asked, "Do you want to try out feature X? It will make your windows semi-transparent and make ALL of your wildest dreams come true. It hasn't been fully tested, but it seems pretty stable," then I would think you'd see a surprising number of semi-transparent windows.

    People really don't know and don't care about what happens behind the scenes. Also, I would think that most people assume newer == better, that a higher version number means an improvement, because hey...what can go wrong?
  • (I don't know what various filesystems actually do, this is just how I would assume it's done, at least on systems designed for case-insensitivity...ext2 or FFS probably would suffer from the issues you mention about scanning the whole directory.)

    On a case-insensitive filesystem, you're done if you're lucky. If not lucky, you need to do a linear scan of the whole damn directory.
    And yet Windows and Mac OS have had case-insensitive filesystems for years and somehow they are usable, even with Unicode filenames.

    You can't restore the original case of a string afterwards, but you can always make it lowercase. This is called "case folding." You can fold two strings to a lowercase form, and then compare them for equality or whatnot. Works with Unicode, too.

    Then there is the issue of internationalization. For example, consider "I" and "i". Some places have an uppercase with the dot, and other places have a lowercase without the dot. The rules for uppercasing and lowercasing differ from what most people are used to. Oh crap! This issue doesn't exist on a case-sensitive filesystem.
    While folding Unicode chars is frequently presented as an unsolvable problem ("what do you do with the letter with the squiggly thing above it? Or converting that German capital 'B' thing to two lowercase 's' chars? There are MILLIONS OF THESE!") ... there are actually very few cases in the grand scheme of things. Most languages don't have upper and lower case, after all.

    Here's the whole list of characters that need to be "folded" to a lowercase form, accounting for instances where it will cause the string to grow (like that German 'ß' thing):

          http://www.unicode.org/Public/3.2-Update/CaseFolding-3.2.0.txt [unicode.org]

    (And you can hash those chars too, so folding a string doesn't involve hundreds of conditionals.)

    If you don't care about Unicode, case folding an English ASCII char is 2 lines of C code, and a few more if you want extended ASCII.
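
    For the skeptical, here's roughly what that ASCII-only fold looks like. (A sketch only; `fold_ascii` is a made-up helper, not any real filesystem's code.)

    ```c
    #include <assert.h>

    /* ASCII-only case fold: essentially the "2 lines of C" version.
       Anything outside 'A'..'Z' passes through untouched. */
    static char fold_ascii(char c)
    {
        return (c >= 'A' && c <= 'Z') ? (char)(c - 'A' + 'a') : c;
    }

    int main(void)
    {
        assert(fold_ascii('I') == 'i');   /* uppercase folds down   */
        assert(fold_ascii('i') == 'i');   /* lowercase is unchanged */
        assert(fold_ascii('/') == '/');   /* punctuation untouched  */
        return 0;
    }
    ```

    Extended ASCII just adds a second range check for the accented block.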

    Once you have a filename, you can store it in the filesystem as the specifically-entered characters, so you don't lose the original casing, but also store with it a hash of the case-folded version. Now whenever you need to look up a specific filename, you case-fold it, hash that folded string, and look it up against the hash you previously calculated when creating the file. Now it's as fast as the case-sensitive filesystem, minus the overhead of folding a small string.
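
    A minimal sketch of that scheme, using an ASCII fold and FNV-1a as an arbitrary stand-in hash (no real filesystem necessarily uses either):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Hash of the case-folded name: two spellings that differ only
       in ASCII case produce the same hash, so a lookup for either
       finds the same directory entry. */
    static uint64_t folded_hash(const char *name)
    {
        uint64_t h = 1469598103934665603ULL;      /* FNV offset basis */
        for (; *name; name++) {
            char c = *name;
            if (c >= 'A' && c <= 'Z')             /* fold before hashing */
                c = (char)(c - 'A' + 'a');
            h ^= (uint64_t)(unsigned char)c;
            h *= 1099511628211ULL;                /* FNV prime */
        }
        return h;
    }

    int main(void)
    {
        /* "ReadMe.TXT" and "readme.txt" collide on purpose. */
        assert(folded_hash("ReadMe.TXT") == folded_hash("readme.txt"));
        assert(folded_hash("readme.txt") != folded_hash("readme.md"));
        return 0;
    }
    ```

    The directory entry keeps the name as typed; only the index key is folded.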

    Because of the way directory listings are done (read then look up stats) you can generally square the above numbers. Ouch.
    The way directory listings are done doesn't change...readdir() is the same in all cases, and your lookup is still a hash. If you had to scan, the first run is slow anyhow due to disk bandwidth and seek speeds, but then a modern OS can cache the inodes to speed this up for the next run.

    App needs to make a file. App sees that file does not seem to exist. App writes file. Complex international case rules mean that no, the file DOES exist, and it gets clobbered.
    I would think that stat(filename) would not report the file doesn't exist if open() would then clobber it, at least not for case-sensitivity issues.

    If your app decides about a file's existence by using readdir() until it finds it, and doesn't properly case-fold, and didn't call open() with O_EXCL, then not only did you go the long way about it, you got what you deserved for clobbering the file.

    Actually, if you don't just open(O_CREAT | O_EXCL) to check for existence and create if missing in one step, then you'll have an atomicity problem anyhow. Use the services the OS provides, they are there for a reason.
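
    In code, that one-step create looks like this (a sketch; `create_exclusive` is just an illustrative wrapper):

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Atomic create-if-missing: one open() call either creates the
       file or fails with EEXIST. No stat()-then-open() window for a
       case-folding filesystem (or another process) to trip over. */
    static int create_exclusive(const char *path)
    {
        int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd < 0 && errno == EEXIST)
            return -1;  /* someone, or some case variant, got there first */
        return fd;
    }

    int main(void)
    {
        const char *path = "/tmp/zfs_excl_demo.txt";
        unlink(path);                          /* start clean */
        int fd = create_exclusive(path);
        assert(fd >= 0);                       /* first create succeeds */
        close(fd);
        assert(create_exclusive(path) == -1);  /* second one refuses */
        unlink(path);
        return 0;
    }
    ```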

    --ryan.

  • by Scaba ( 183684 ) <joe@joefranDEBIANcia.com minus distro> on Thursday June 07, 2007 @05:47PM (#19430045)

    So, how exactly does one roll back changes to a file on an NTFS partition?
