Measuring Fragmentation in HFS+ 417

keyblob8K writes "Amit Singh takes a look at fragmentation in HFS+. The author provides numbers from his experiments on several HFS+ disks, and more interestingly he also provides the program he developed for this purpose. From his own limited testing, Apple's filesystem seems pretty solid in the fragmentation avoidance department. I gave hfsdebug a whirl on my 8-month-old iMac and the disk seems to be in good shape. I don't have much idea about ext2/3 or reiser, but I know that my NTFS disks are way more fragmented than this after a similar amount of use."
This discussion has been archived. No new comments can be posted.
  • Huh? (Score:5, Insightful)

    by Anonymous Coward on Wednesday May 19, 2004 @01:04PM (#9196442)
    but I know that my NTFS disks are way more fragmented than this after similar amount of use

    Is this based off of instinct, actual data, or what?
    • Re:Huh? (Score:5, Funny)

      by lpangelrob2 ( 721920 ) on Wednesday May 19, 2004 @01:10PM (#9196491) Journal
      Maybe he ran defrag in windows and measured how many bright blue blocks were next to the medium blue blocks and the dark blue blocks. :-)
    • Re:Huh? (Score:5, Informative)

      by Ann Elk ( 668880 ) on Wednesday May 19, 2004 @01:39PM (#9196728)

      My own experience, using a small tool I wrote to analyze NTFS fragmentation:

      NTFS is pretty good at avoiding fragmentation when creating new files if the size of the file is set before it is written. In other words, if the file is created, the EOF set, and then the file data is written, NTFS does a good job of finding a set of contiguous clusters for the file data.

      NTFS does a poor job of avoiding fragmentation for files written sequentially. Consider a file retrieved with wget: an empty file is created, then the contents are written sequentially as they are read from the net. Odds are, the file data will be scattered all over the disk.

      Here's a concrete example. Today, I downloaded Andrew Morton's 2.6.6-mm4.tar.bz2 patch set. (Yes, I run WinXP on my Toshiba laptop -- deal with it.) Anyway, the file is less than 2.5MB, but it is allocated in 19 separate fragments. I copied it to another file, and that file is unfragmented. Since the copy command sets EOF before writing the data, NTFS can try to allocate a contiguous run of clusters.

      Note - This was done on uncompressed NTFS. My feeling is that compressed NTFS is even worse about fragmentation, but I don't have any numbers to back that up.
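The set-EOF-first pattern described above can be sketched in Python. The `truncate()` call stands in for Win32's SetEndOfFile; whether the allocator actually exploits the hint is filesystem-specific, so this only illustrates the two write patterns being compared, it is not a fragmentation test:

```python
import os
import tempfile

def write_preallocated(path, data):
    """Create the file and set EOF to the final size *before* writing,
    so the filesystem can look for one contiguous run up front."""
    with open(path, "wb") as f:
        f.truncate(len(data))   # one allocation request for the full size
        f.write(data)           # then fill in the contents

def write_streaming(path, chunks):
    """Grow the file chunk by chunk, the way wget does; each extension
    is a separate allocation and may land somewhere else on disk."""
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)

payload = os.urandom(256 * 1024)
with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "prealloc.bin")
    b = os.path.join(d, "streamed.bin")
    write_preallocated(a, payload)
    write_streaming(b, [payload[i:i + 1024]
                        for i in range(0, len(payload), 1024)])
    # both patterns produce identical contents; only the allocation differs
    assert os.path.getsize(a) == os.path.getsize(b) == len(payload)
```

On POSIX systems the explicit equivalent of this hint is posix_fallocate; the point either way is that telling the filesystem the final size before writing gives its allocator a chance to find contiguous clusters.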

      • Re:Huh? (Score:5, Insightful)

        by bfg9000 ( 726447 ) on Wednesday May 19, 2004 @02:34PM (#9197253) Homepage Journal
        (Yes, I run WinXP on my Toshiba laptop -- deal with it.)

        Why would anybody have a problem with you running Windows XP on your laptop? I'm a card-carrying Linux Zealot, and I don't have a problem with it.
        • Re:Huh? (Score:5, Funny)

          by EvilAlien ( 133134 ) on Wednesday May 19, 2004 @02:45PM (#9197332) Journal
          "I'm a card-carrying Linux Zealot, and I don't have a problem with it."

          Apparently you are actually a closet Rational Linux Advocate. I'm sure there are a few people in the drooling horde reading these comments that will have a problem with someone being foolish enough to actually choose to run Windows on anything ;)

          I run Gentoo on my laptop, but the specs on the crusty old thing are so low that my only other "choice" would be to run Windows 95, and I'd sooner eat my USB key than do that.

          • Re:Huh? (Score:5, Interesting)

            by bogie ( 31020 ) on Wednesday May 19, 2004 @03:15PM (#9197575) Journal
            Actually, the number of Windows users dwarfs the number of Linux users here these days. Sure, Windows gets beat up on more because of the constant worms etc., but have a look at the average "is Linux ready for the desktop" thread. You get post after post of people critical of Linux on the desktop. At best some people will agree that Linux is fine in some very specific situations. As I've said many times, there is a reason why Slashdot won't show recent web browser statistics. My guess is it's over 80% IE, and not just because people are at work.

            For the record I also use XP on my laptop. Until everything works perfectly out of the box, ACPI and all, I'm not installing any nix on it.
            • A word about browsers (and any thing else that requires change):
              People, in general (more than 50% of them), prefer to resist change, and for that matter, extra work and/or thinking. It's just the way they are. It's what explains product loyalty. In this case, the product loyalty is browser based.

              In my job, as a web server support admin, I find that 95%, or more, of the people I speak with in support situations are not even aware of the alternatives available to them. In fact, just last Sunday, a frie

    • by Calibax ( 151875 ) * on Wednesday May 19, 2004 @01:41PM (#9196755)
      This is a very arcane procedure in XP. I shall try to explain, but only a professional should attempt this.

      1. Right click on drive icon, select properties
      2. Select Tools tab and click on "Defragment Now"
      3. Click on "Analyze"
      4. When analysis finishes, click on "View Report"

      This shows two list windows, one containing general properties of the disk such as volume size, free space, total fragmentation, file fragmentation and free space fragmentation. The second list shows all fragmented files and how badly they are fragmented.

      • by spectecjr ( 31235 ) on Wednesday May 19, 2004 @01:55PM (#9196854) Homepage
        This is a very arcane procedure in XP. I shall try to explain, but only a professional should attempt this.

        1. Right click on drive icon, select properties
        2. Select Tools tab and click on "Defragment Now"
        3. Click on "Analyze"
        4. When analysis finishes, click on "View Report"

        This shows two list windows, one containing general properties of the disk such as volume size, free space, total fragmentation, file fragmentation and free space fragmentation. The second list shows all fragmented files and how badly they are fragmented.

        If you're not using the same tool to measure fragmentation on each OS, how do you know that they're using the same semantics to decide what a fragmented file is?

        IIRC, the Linux tools use a different metric to calculate fragmentation than the NT ones.
      • by ProfessionalCookie ( 673314 ) on Wednesday May 19, 2004 @04:49PM (#9198545) Journal
        What's a "Right Click"?????

        -Faithful Macuser
        (ok I have a 3 button logitech)
      • I've managed to get my download drive (NTFS) so fragmented that the defrag tool in Win2k/XP is unable to defragment it, no matter how often you run it.

        The files on the drive had an average size of 200 MB, were downloaded in 1kB increments several files at a time over a period of a week on average per file.

        The reason it fails at defragging (it doesn't say it fails, it just doesn't do much and stops after a while) is that the free space was also so badly fragmented that it couldn't even defragment a

  • HFS+ defrag source (Score:5, Informative)

    by revscat ( 35618 ) * on Wednesday May 19, 2004 @01:06PM (#9196458) Journal
    As mentioned in the article, HFS+ does defragging on the fly when files are opened if they are less than 20MB. The source code for this is available here [], as is a discussion about it that contains input from some Darwin developers.
    • That certainly sounds like a good idea. What are the trade-offs though? I guess it may take slightly longer to open a file, but that seems like it would be worth it in my opinion. Are there any other drawbacks? What about files larger than 20M?

      New deal processing engine online: []
      • by Joe5678 ( 135227 )
        I guess it may take slightly longer to open a file, but that seems like it would be worth it in my opinion.

        That would seem to defeat the purpose to me. The main reason you want to avoid fragmentation of the data is that fragmented data takes longer to pull from the disk. So if by preventing fragmentation you slow down pulling data from the disk, you have just defeated your purpose.
        • Umm. No, I believe you and the parent are both wrong. Defrag on the fly occurs when files are written to the disk, not during read operations. Fragmentation would slow down both read and write operations. I could be wrong though concerning my first point. :)
        • by Exitthree ( 646294 ) on Wednesday May 19, 2004 @01:20PM (#9196572) Homepage

          You've only defeated the purpose if you re-fragment the file again after opening it. If that doesn't happen, the one-time cost of de-fragmentation on the first open, amortized over the many times the file is read afterwards from a single chunk, yields a net speed gain, not a speed loss.

          A good example is me, installing a program from disk onto my computer. I run the program and it accesses a group of files that have been fragmented when copied to my hard drive. The first time it opens the files it spends a little extra time de-fragmenting them. However, subsequent times that I open the program, these files will load faster.
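The amortization argument above is easy to put numbers on. The costs below are invented purely for illustration (a fragmented read at 30 ms, a contiguous read at 10 ms, a one-off defrag at 100 ms); they are not measurements of HFS+:

```python
def cost_with_defrag(reads, contig_ms, defrag_ms):
    # pay the defrag once on first open, then read contiguously
    return defrag_ms + reads * contig_ms

def cost_fragmented(reads, frag_ms):
    # keep paying the fragmented-read penalty on every read
    return reads * frag_ms

def break_even_reads(defrag_ms, frag_ms, contig_ms):
    """Smallest number of reads after which defrag-on-open wins."""
    per_read_saving = frag_ms - contig_ms
    return defrag_ms // per_read_saving + 1

# With the made-up numbers, the one-off defrag pays for itself
# after a handful of reads.
print(break_even_reads(100, 30, 10))
```

If the file gets re-fragmented later, the saved-up gain resets, which is exactly the caveat in the comment above.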

      • by Daniel_Staal ( 609844 ) <> on Wednesday May 19, 2004 @02:28PM (#9197201)

        I believe the actual sequence is this:

        1. Get request for file
        2. Open File
        3. Buffer file to memory
        4. Answer request for file
        5. If needed, defragment file

        In other words, it defragments after the file has been returned to the program needing it, as a background process. The buffering to memory is a pre-existing optimization, so the only real trade-off is that background processor usage goes up. If you aren't doing major work at the time, you'll never notice. (And if you are doing major work, you're probably using files larger than 20MB in size anyway.)

        Files larger than 20MB just aren't defragmented, unless you have another tool to do it.
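The sequence above boils down to an eligibility check at step 5. Only the 20 MB cutoff comes from the article; the journaling and extent-count conditions below are plausible guesses at what the kernel considers, not its actual code:

```python
TWENTY_MB = 20 * 1024 * 1024

def defrag_on_open(size_bytes, extent_count,
                   volume_journaled=True, read_only=False):
    """Rough sketch: decide whether a file gets relocated to
    contiguous space when it is opened (step 5 above)."""
    if read_only or not volume_journaled:
        return False               # assumed precondition, not from the article
    if size_bytes >= TWENTY_MB:
        return False               # big files are never touched
    return extent_count > 1        # one extent means already contiguous

print(defrag_on_open(5 * 1024 * 1024, 12))   # small, fragmented file
print(defrag_on_open(64 * 1024 * 1024, 12))  # too big: left alone
```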

    • Yeah, my FreeBSD system (FFS?) does the same thing and works quite well. I've never seen performance issues from its defragging.
      • by jimfrost ( 58153 ) * <> on Wednesday May 19, 2004 @03:47PM (#9197821) Homepage
        No, FFS does not do after-the-fact defragmentation. It attempts to allocate blocks that have low seek latency as files are extended. For the most part this avoids the problem entirely.

        If you ever wondered why there is a "soft limit" on FFS filesystems, the reason why is that its allocator's effectiveness breaks down at about the point where the filesystem is 90% full. So they sacrifice 10% of the filesystem space so that they can avoid fragmentation problems. It's not a bad tradeoff, particularly these days.

        I didn't know that HFS+ used an after-the-fact defragmentation system, but those have been around for a while too. Significant research was done into such things as part of log-based filesystem research in the early 1990s (see BSD LFS and Sprite). You had to have a "cleaner" process with those filesystems anyway (to pick up abandoned fragments of the log and return them to the free pool), so it made sense to have it also perform some optimization.

    • by rharder ( 218037 )
      I still wish there were a reliable, preferably included, defrag program. I do a lot of video work, and I not only want defragmented files, I want large contiguous sections of free space on my hard drive. I've not had much luck with 3rd-party defrag programs (though I've yet to try SpeedDisk).
    • by shotfeel ( 235240 ) on Wednesday May 19, 2004 @01:17PM (#9196545)
      I thought this was a feature of Panther, not HFS+.

      HFS+ has been around since OS 8.5 (?? somewhere in OS 8). So either this is a feature of HFS+ that hasn't been implemented until now, or it's a bit of code added to Panther. Or has HFS+ been updated?
      • by ahknight ( 128958 ) * on Wednesday May 19, 2004 @01:27PM (#9196624)
        As stated in the article, this is a feature of the HFS+ code in Panther. The filesystem cannot have a defrag feature as the filesystem is just a specification. The implementation of that specification, however, can do most anything to it. :)
      • by solios ( 53048 )
        HFS+ was one of the major features of the OS 8.1 update. OS 8.0 and earlier can't "see" HFS+ volumes; they see a tiny disk with a SimpleText file titled "where have all my files gone?" which, if I remember correctly, gives a brief explanation that the disk is HFS+ and requires 8.1 or higher to view. :)

        Journalling didn't show up until one of the Jaguar updates, where it could be enabled via the command line on clients and via disk utility on Server.
        • by shamino0 ( 551710 ) on Wednesday May 19, 2004 @04:18PM (#9198203) Journal
          HFS+ was one of the major features of the OS 8.1 update. OS 8.0 and earlier can't "see" HFS+ volumes; they see a tiny disk with a SimpleText file titled "where have all my files gone?" which, if I remember correctly, gives a brief explanation that the disk is HFS+ and requires 8.1 or higher to view. :)

          And the person who came up with this idea was a genius. This is far far better than what most other operating systems do (refuse to mount the volume.)

          If I boot MS-DOS on a machine that has FAT-32 or NTFS volumes, I simply don't find any volume. I can't tell the difference between an unsupported file system and an unformatted partition. If the file system would create a FAT-compatible read-only stub (like HFS+ does), it would be much better for the user. Instead of thinking you have a corrupt drive, you'd know that there is a file system that your OS can't read.

          • Which is why Apple is such a great company.

            At some companies, a developer would go to his project manager, propose this feature, and get a head shake. Too much work to test and spec, not worth the gains. Let's devote our time to our core competencies.

            Apple on the other hand was built on details like this. In fact, one of my favorite things about OS 10.3 is Expose...a feature nobody really asked for, and now I can't live without it (fuck virtual desktops...I want one desktop I can use!)
  • by SirChris ( 676927 ) on Wednesday May 19, 2004 @01:06PM (#9196463) Journal
    What type of file system is there where there is no main allocation table, just a header, then the file, then a header, then the file, so you could theoretically break a disk and still read the half that was good, because all pertinent information relating to a file was in one place?
    • by SideshowBob ( 82333 ) on Wednesday May 19, 2004 @02:05PM (#9196928)
      That isn't a filesystem, that's a tape. Any number of tape systems exist; pick whichever one you like.
    • by AKAImBatman ( 238306 ) <akaimbatman@gm a i l . com> on Wednesday May 19, 2004 @02:25PM (#9197166) Homepage Journal
      There are a couple things that you have to consider. For one, if part of the disk corrupts, how will you identify a header? Or for that matter, how would you identify the header space vs. file space in a non-corrupted file system?

      You're probably thinking "just store the size of the file". This is perfectly valid, but it does have certain implications. You see, in comp-sci we refer to a list like this as a "linked list": each item in the list has information (i.e. a "link") that helps identify the next item in the list. Such a data structure has a worst-case access time of O(n). In other words, if your item is at the end of the list and you have 2000 files, you'll have to check through all two thousand headers before finding your file.

      Popular file systems circumvent this by using what's called a Tree structure. A tree is similar to a linked list, but allows for multiple links that point to children of the node. A node that has no children is referred to as a "leaf node". In a file system the directories and files are nodes of a tree, with files being leaf nodes. This configuration gives us two performance characteristics that we must calculate for:

      1. The maximum number of children in a node.
      2. The maximum depth of the tree.

      Let's call them "c" for children and "d" for depth. Our performance formula is now O(c*d), irrespective of the number of items in the data structure. Let's make up an example to run this calculation against:

      Path: /usr/local/bin/mybinary

      / (34 entries)
      /usr (10 entries)
      /usr/local (9 entries)
      /usr/local/bin (72 entries)

      Longest path: /usr/X11R6/include/X11

      Plugging in the above numbers (72 for c, 4 for d), we get a worst case of 72*4 = 288 operations. Thus our worst case is much better than the linked list's. And if we calculate the actual cost of accessing /usr/local/bin/mybinary, we get 34+10+9+72 = 125 operations.

      Hope this helps. :-)
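The two lookup strategies in the comment above can be checked with a few lines of Python, using the per-directory entry counts from the example (note the actual path cost works out to 34+10+9+72 = 125):

```python
def linked_list_worst_case(n_items):
    # every header must be examined until the last one matches
    return n_items

def tree_path_cost(entries_per_level):
    # linear scan of each directory on the way down the path
    return sum(entries_per_level)

# /usr/local/bin/mybinary: 34, 10, 9 and 72 entries per level
print(tree_path_cost([34, 10, 9, 72]))   # actual lookup cost
print(72 * 4)                            # O(c*d) worst-case bound
print(linked_list_worst_case(2000))      # flat 2000-file list
```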

  • by Anonymous Coward
    Go to My Computer. Right-click the drive to be analyzed. Select Tools / Defragment Now... / Analyze.

    This was my PhD Thesis.
    • Volume fragmentation
      Total fragmentation = 16 %
      File fragmentation = 33 %
      Free space fragmentation = 0 %

      I win!
      • by moonbender ( 547943 ) <moonbender@g[ ] ['mai' in gap]> on Wednesday May 19, 2004 @01:29PM (#9196648)
        I wrote a script some time ago to more easily let me check how badly my partitions are fragmented; here's its current output:
        C: 5,72 GB Total, 1,97 GB (34%) Free, 4% Fragmented (8% file fragmentation)
        D: 40,00 GB Total, 1,00 GB (2%) Free, 41% Fragmented (82% file fragmentation)
        E: 66,69 GB Total, 105 MB (0%) Free, 10% Fragmented (21% file fragmentation)
        F: 30,00 GB Total, 1,21 GB (4%) Free, 3% Fragmented (7% file fragmentation)
        G: 10,00 GB Total, 1,54 GB (15%) Free, 5% Fragmented (9% file fragmentation)
        H: 35,03 GB Total, 551 MB (1%) Free, 39% Fragmented (79% file fragmentation)

        D ("Dump") and H ("Online") get a lot of throughput, by personal computing standards anyway. E ("Games") doesn't get changed that much, but when it does, a lot of data leaves and arrives. Seems like whenever I defrag D or H, they're back to the values above within days. I guess Win XP has a hard time doing its internal on-the-fly defragging on hard drives that rarely have more than 1% free space... Guess I should just get a new HD and have some more free space that way, but I bet I'd have it filled up with junk after some weeks, anyway.

        That said, I'm not sure how relevant this is for NTFS partitions, anyway. I recall hearing that they aren't affected by fragmentation as much as FAT partitions (which were a nightmare), however I'm not sure if that means they don't fragment that easily (heh) or whether accessing data isn't slowed down as much by any existing fragmentation.

        I've also rarely heard anyone talking about fragmentation in the popular Linux file systems, a Unix partisan I know actually thought they didn't fragment full stop, which I don't believe is possible, at least not if you consider situations which might not occur in practice. But then again, I suppose Linux might solve it the same way Apple seems to - I guess I'll know more after a couple of hundred comments on this article. :)
  • NTFS is not so bad (Score:5, Interesting)

    by W2k ( 540424 ) <> on Wednesday May 19, 2004 @01:11PM (#9196498) Homepage Journal
    It must be pretty damn good if it can outdo NTFS. I have three computers with WinXP (NTFS 5.1) that I run quite a bit of data through on a daily basis, and none of them needs to be defragmented very often at all (two of them have never needed defragmentation in more than a year of use). Mind you, I might fall into some special category of people who don't fall victim to fragmentation for some reason. Anyway, my point is, before you make remarks regarding how well this compares to NTFS, and/or how much "Microsoft sucks", consider how well NTFS still holds up considering its age. Another bonus is, I don't risk losing file system integrity if there's a power failure. ;)
    • by pjt33 ( 739471 )
      HFS+ is also journalled by default.
    • by MemoryDragon ( 544441 ) on Wednesday May 19, 2004 @01:18PM (#9196546)
      NTFS doesn't fragment that badly as long as you don't hit the 90% full mark of your disk; once you reach that, you'll see files become fragmented in no time. NTFS uses the open space for write access and then probably relocates the files in time; once it hits 90%, the open-space usage algorithm does not seem to work anymore.
    • Do you actually check your drives? I just got a laptop with XP and I've been using it for less than a month now. After reading this thread I though well my computer is fairly new but I'll see how it looks anyway. After running Disk Defragmenter and clicking analyze I get:
      Analysis is complete for: (C:)
      You should defragment this volume.

      I then looked at the report and found the following:
      Total fragmentation = 21%
      File fragmentation = 42%
      Free space fragmentation = 1%

      Pretty bad especially considering I've only had the laptop for less than a month...
      • by itwerx ( 165526 )
        Pretty bad especially considering I've only had the laptop for less than a month...

        Any new machine will have an image dumped onto the hard-drive by the manufacturer.
        Most imaging apps don't bother with defragmenting so you probably started out with it fairly fragmented from the initial build of the image.
    • by 13Echo ( 209846 )
      NTFS isn't technically "as old" as you might think. Each version of NT over the past few years has added several upgrades to NTFS.

      NTFS has its strong points. It is reliable and has several extensions that make it quite flexible. On the other hand, it's not hard to "outdo NTFS" in some respects. There are many things that HFS+ and ReiserFS do better than NTFS. There are many things that NTFS does better.

      I think that NTFS is pretty good when it comes to cataloging chan
    • by Atomic Frog ( 28268 ) on Wednesday May 19, 2004 @03:02PM (#9197472)
      No, it doesn't take much to outdo NTFS.

      NTFS fragments _very_ fast on me, after a few months of use, it is in the 20% or more range.

      Same user (i.e. me), so same usage pattern, on my HPFS disks (yes, HPFS, that would be OS/2, not OS X), the fragmentation after 3 _years_ is less than 2% on ALL of my HPFS disks.
  • I've had a continued problem on my iBook for the past year or so.

    Under HFS+ in Mac OS X Jaguar or Panther, after about a day of having a clean install, fresh partition and format my hard drive starts making clunking noises and the system locks up (without actually freezing) -- then when reboot attempts are made they take aeons.

    Under ReiserFS in Gentoo Linux for PPC: never have the problem. Same hard drive. Months of use, never once hear the hard drive being funky. No lockups.

    Do I put the blame on HFS? OS
    • Clunk clunk from a drive is the drive saying "I cry out in pain! Replace me!"

      Seriously - it's likely that Gentoo just isn't using the particular sector on the drive that OSX is - perhaps there is a file there that doesn't get accessed regularly or something. In any case, clunk-clunk is never OK.
    • What everybody else said. Back up your data and replace that drive NOW.
    • Grab smartmontools [] and run them on your drive (like "smartctl -a /dev/hda" or similar). Most SCSI and most newer ATA drives will maintain a SMART error log of any defects/problems. smartmontools will also print drive attributes (for most drives) that can tell you when a drive is about to fail, before it actually does.
  • My stats (Score:5, Informative)

    by Twirlip of the Mists ( 615030 ) <> on Wednesday May 19, 2004 @01:13PM (#9196509)
    I throw these out there for no real reason but the common interest.

    I've got a G4 with an 80 GB root drive which I use all day, every day. Well, almost. It's never had anything done to it, filesystem-maintenance-wise, since I last did an OS upgrade last fall, about eight months ago.
    Out of 319507 non-zero data forks total, 317386 (99.34 %) have no fragmentation.
    Not too shabby, methinks.
  • by Chuck Bucket ( 142633 ) on Wednesday May 19, 2004 @01:21PM (#9196583) Homepage Journal
    it's not how fragmented your disk is, it's what you can do with your fragmented disk that counts.

  • Panther Defrag (Score:5, Interesting)

    by stang7423 ( 601640 ) on Wednesday May 19, 2004 @01:23PM (#9196597)
    I'm sure someone else will point this out as well, but it's worth noting. In 10.3 there is kernel-level defragmentation. When a file is accessed, the kernel checks to see if it's fragmented, then moves it to an area of the disk where it can exist unfragmented. I think the limitation is a file size under 20MB, but it may be higher. This still gets rid of a great deal of fragmentation. Just food for thought.
  • HPFS (Score:3, Interesting)

    by gmuslera ( 3436 ) on Wednesday May 19, 2004 @01:23PM (#9196599) Homepage Journal
    When I had OS/2, I enjoyed HPFS's low fragmentation. When you copy a file, it gives it the next free block that fits its size, so as long as you have a big enough free chunk of the disk, the file isn't fragmented. It also defragmented files as more operations were done on the directory or the file system. I remember that a basic "unfragment" script was to go through all directories and just copy or even rename the files to defragment them.

    But I'm not sure how this is managed in Linux filesystems, not just ext2/3 and reiserfs, but also xfs and jfs.

  • aiee, here's my output:

    btreeReadNode(105): diff = 2048, this should *NEVER* have happened!
    initSpecialFile(202): failed to retrieve extents for the Catalog file.
    hfsdebug: failed to access the Catalog File.

    I can't find any info about this on the site. Is anyone else getting this error?

  • Recent 2.6-mm kernels contain Chris Mason's work to dramatically reduce the fragmentation of ReiserFS filesystems.

    It's really good on filesystems with a lot of files or on databases.

  • by greymond ( 539980 ) on Wednesday May 19, 2004 @01:25PM (#9196610) Homepage Journal
    Seriously, with NTFS and HFS+ I see very little fragmentation on both my Wintel and Apple machines.

    Both have 40-gig HDs, and both have applications installed/uninstalled quite often. My PC feels the worst of this, as it gets games installed and uninstalled in addition to the apps.

    For example, the last time I reinstalled either of these machines was back in January (new-year fresh install), and since then my PC has felt the install/uninstall of various games, usually ranging from 2-5 gigs each. The Apple, with the exception of updates, plugins, video codecs and basic small apps that get added/upgraded often, has done alright.

    Right now Norton System Works on my PC is saying the drive is 4% fragmented. Disk Warrior on my Apple is saying the drive is 2% fragmented.

    Conclusion: Fragmentation is no longer an issue for the HOME USER (note how I'm not saying your company's network doesn't need to be concerned), unless they're still running a FAT32 partition, in which case they deserve to have their computer explode at that point anyway.
  • by Prince Vegeta SSJ4 ( 718736 ) on Wednesday May 19, 2004 @01:29PM (#9196647)
    I just put my hard drive in my drier when it is fragmented. Since the group of unfragmented bits weighs more than the fragmented ones, The spinning action causes all of those stray bits to attach to the greater mass.
  • Defrag = placebo? (Score:3, Interesting)

    by justforaday ( 560408 ) on Wednesday May 19, 2004 @01:32PM (#9196670)
    I've often wondered if defragging and defrag utils are more of a placebo for people concerned with system performance. In my experience I've never noticed any perceivable difference after using a defrag util, on either OS8/9, OSX, 95, 98SE or XP. Then again, I've always made sure to have plenty of free space on my disks [and made sure they were fast enough] whenever I've done anything seriously disk intensive like multitrack audio...
    • Re:Defrag = placebo? (Score:5, Interesting)

      by Greyfox ( 87712 ) on Wednesday May 19, 2004 @02:01PM (#9196904) Homepage Journal
      It shouldn't really be an issue post-FAT. I think most people's obsession with fragmentation is a remnant of having to defragment FAT drives regularly. One did it superstitiously in those days because an overly fragmented filesystem did slow down considerably. No modern filesystem has an excuse for not handling fragmentation with no interference from the user.

      As a cute side note, I remember having to explain fragmentation to my high school FORTRAN class and teacher back in the '80s. I'd changed schools in my senior year, and the new school had just started using the Apple II FORTRAN environment, which happened to be the same as the Apple II Pascal environment that I'd used at the previous school. The file system was incapable of slapping files into whatever blocks happened to be available (I'm not even sure it used blocks. Probably not...), so you would not be able to save your files if the disk was too fragmented, even if there was enough space available to do so. Ah, those were the days...

      • Re:Defrag = placebo? (Score:3, Informative)

        by MyHair ( 589485 )
        It shouldn't really be an issue post-FAT. I think most people's obsession with fragmentation is a remnant of having to defragment FAT drives regularly. One did it superstitiously in those days because an overly fragmented filesystem did slow down considerably. No modern filesystem has an excuse for not handling fragmentation with no interference from the user.

        Head seek and rotational latency are still much slower than contiguous blocks. True, modern systems deal with it better, partially due to b-tree a
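The seek-and-latency penalty is easy to model roughly. The figures below (9 ms combined seek plus rotational latency, 40 MB/s sequential transfer) are ballpark numbers for a 2004-era 7200 RPM disk, assumed here purely for illustration:

```python
def read_time_ms(file_mb, fragments, seek_ms=9.0, transfer_mb_s=40.0):
    # one head seek (with average rotational latency folded into
    # seek_ms) per fragment, plus the sequential transfer time
    return fragments * seek_ms + file_mb / transfer_mb_s * 1000.0

# The 2.5 MB file in 19 fragments from the NTFS anecdote earlier
print(round(read_time_ms(2.5, 1)))    # contiguous
print(round(read_time_ms(2.5, 19)))   # fragmented
```

Even a small file pays a multiple of its transfer time in seeks once it is split into enough fragments, which is why fragmentation still matters despite smarter filesystems.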
    • by NeedleSurfer ( 768029 ) on Wednesday May 19, 2004 @02:35PM (#9197263)
      Yes and no. It won't have any long-term effect on your performance, but there is a short-term effect that can be useful when dealing with audio. On a Mac, using a drive with block sizes of 64K to 256K (ideal when dealing with digital audio, as long as you set the per-track buffer size of your DAW to the same size as the blocks on your drive), you can gain up to 8 tracks by defragging your drive. Sometimes on large projects I have to record a file or play back the entire session in edit mode (no tracks frozen, everything real-time and not bounced); after editing for a while the DAW refuses to play the project, lags, stutters or presents some serious drop-outs. I defrag, and that is where I get this 6-8 tracks of headroom, but it will last only for a day of work, and even then. (Pro Tools, Nuendo, Cubase and MOTU DP all present this characteristic; as for the others, I haven't tested them enough to provide meaningful data.)

      However, defragging is not the same with every defrag utility. For example, I was working with Avid Audiovision about 5-6 years ago on a TV show. It turns out that defragging a drive hosting files created or edited with Audiovision using Speed Disk by Symantec would actually corrupt the entire projects contained on the drive (the biggest mistake, and the only serious one, I've had in my career; I didn't lose my job, but my boss did lose his temper. Live and learn!). The audio files were not readable at all afterwards. It was actually a documented bug of Audiovision, and I even think it affected every OMF file, not just the ones used by Audiovision (not sure about this though); that's what happens when your boss won't let you RTFM. Only Disk Express, some Avid defragger or, later, TechTool could defrag those drives.

      On a side note, on the Classic Mac OS (7-9.2), defragmenting your drive was also a way to prevent data corruption. Actually it's the other way around: not defragging would lead to data corruption. I don't know if that's also the case with NTFS, ext2, et al.
  • Disk Fragmentation (Score:5, Insightful)

    by List of FAILURES ( 769395 ) on Wednesday May 19, 2004 @01:33PM (#9196678) Journal
    A lot of people simply equate disk fragmentation with slow application execution and opening of data files. While this is the most visible effect that fragmentation has on a system, it's not the only one. If you are dealing with large files (multi-track audio, video, databases) then you will get a different kind of performance hit due to the non-contiguous nature of the free space you are writing to. If you want to capture video with no dropouts, you really want a drive that has all of its free space basically in one location. This allows you to write those large files with no physical disruption in location. Please do not think that the only benefit of unfragmented space is "my programs launch faster". If you do any real kind of work on your system with large data files, you should know that a defragmented drive is a godsend.
  • by exwhyze ( 781211 ) on Wednesday May 19, 2004 @01:40PM (#9196735)
    Buzzsaw and Dirms [] -- I admit, the site looks a little seedy, but I've used both of these programs on several machines for upwards of a year and they've done a superb job of keeping my NTFS disks defragmented.
  • by djupedal ( 584558 ) on Wednesday May 19, 2004 @01:50PM (#9196820)

    Mac OS X: About Disk Optimization

    Do I need to optimize?

    You probably won't need to optimize at all if you use Mac OS X. Here's why:
  • Defragging XP now... (Score:3, Interesting)

    by PhilHibbs ( 4537 ) <> on Wednesday May 19, 2004 @02:03PM (#9196915) Homepage Journal
    My new laptop, with a 60GB 7200RPM disk, is under two months old, and I'm defragmenting it now. It's been running for 5 minutes, and is 3% complete, on a disk that is 62% full.

    20 minutes later, and it's on 17%. That's pretty damn fragmented, in my opinion.
    • by ewhac ( 5844 ) on Wednesday May 19, 2004 @02:44PM (#9197330) Homepage Journal

      No, it's just that the defragger built into Win2K/XP is shite. It runs like molasses in liquid helium, and it almost never does a complete job in a single run. You have to run it several times in a row before it's even close to doing a reasonable job. And if it's your system drive, then there are some files (including the swap file) that it simply won't touch no matter how badly the blocks are scattered. This can be a real pain in the posterior if you're trying to defrag a drive in preparation for a Linux install.


  • my stats (Score:3, Interesting)

    by MyDixieWrecked ( 548719 ) on Wednesday May 19, 2004 @02:14PM (#9197021) Homepage Journal
    Out of 232167 non-zero data forks total, 231181 (99.58 %) have no fragmentation.
    Out of 6528 non-zero resource forks total, 6478 (99.23 %) have no fragmentation.

    Not bad. That's 8 months of heavy use since my last format.

    I gotta bring this to work today and see what that machine's like. My co-worker has been complaining that he doesn't have a defrag utility since he got OSX. I've been telling him that I don't think it matters. Now I can prove it to him.

    I remember back in the days of my Powermac 8100/80av, we would leave the 2 800mb drives defragging over the weekend because they had like 75% fragmentation.

  • Mostly because they end up re-installing the OS every year or so!
  • portable fragmenter (Score:3, Interesting)

    by harlows_monkeys ( 106428 ) on Wednesday May 19, 2004 @03:00PM (#9197447) Homepage
    Here's how you can write a portable fragmenter, if you need to get a disk very fragmented for testing.
    1. Create small files until you run out of disk space.
    2. Pick a few thousand of the small files at random, and delete them.
    3. Create a large file, large enough to fill the free space.
    4. Go back to #2, unless you are out of small files.
    5. Pick one of the large files, and delete it.

    Result: you have a bunch of large files, all very fragmented, and the free space is very fragmented.
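    For the curious, the recipe above can be simulated against an abstract block map to see how badly it shreds free space. This is just a toy sketch, not a real on-disk test: the first-fit allocator, disk size, and batch size are all made up for illustration.

```python
import random

def extents(blocks):
    """Count contiguous runs in a sorted list of block numbers."""
    runs = 1
    for a, b in zip(blocks, blocks[1:]):
        if b != a + 1:
            runs += 1
    return runs

def fragmenter_sim(total_blocks=5000, batch=500, seed=1):
    """Simulate the fragmenter recipe on an abstract disk.

    Returns the average number of extents per surviving large file;
    higher means more fragmentation.
    """
    rng = random.Random(seed)
    free = list(range(total_blocks))  # sorted free-block list

    def alloc(n):
        # first-fit: grab the n lowest-numbered free blocks
        got = free[:n]
        del free[:n]
        return got

    # 1. create small (1-block) files until the disk is full
    small = [alloc(1) for _ in range(total_blocks)]
    large = []
    while len(small) > batch:
        # 2. delete a batch of small files at random
        for f in rng.sample(small, batch):
            small.remove(f)
            free.extend(f)
        free.sort()
        # 3. create one large file filling the freed space
        large.append(alloc(len(free)))
    # measure how shredded the large files are
    return sum(extents(f) for f in large) / len(large)
```

    On this toy disk each large file ends up scattered across hundreds of extents, which is exactly the shape of free space you want when stress-testing an allocator.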

  • by wardk ( 3037 ) on Wednesday May 19, 2004 @03:30PM (#9197690) Journal
    My recollection of IBM's OS/2 HPFS filesystem is that in many cases it would purposely fragment files to take advantage of the disk's rotation, thus using fragmentation to increase performance.

    Defrag utils for OS/2 had options to only defrag if there were more than 3 extents, to avoid nullifying this effect.

    Funny: years after the death of OS/2, it still kicks ass on much of what we use now.
    • There used to be several disk access optimizations

      Vendors used to do interleaving with the format/fdisk commands I recall. The idea was that writing the sectors in a continuous stream was not very efficient as the drives of the time could not move data to or from the disk so quickly. You'd read sector 1, and by the time you were ready to read sector two, sector 3 was under the head, so you had to wait almost an entire disk revolution to find sector 2 again.
      The interleave told the OS to skip X physical dis
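      The classic interleave mapping is just modular arithmetic. A quick sketch (17 sectors per track and a 3:1 interleave are only illustrative choices; real controllers varied):

```python
def interleave(n_sectors, factor):
    """Physical slot for each logical sector on one track.

    Consecutive logical sectors land `factor` physical slots apart,
    giving the controller time to recover between sectors. Assumes
    gcd(n_sectors, factor) == 1 so every slot is used exactly once.
    """
    return [(logical * factor) % n_sectors
            for logical in range(n_sectors)]

layout = interleave(17, 3)   # [0, 3, 6, 9, 12, 15, 1, ...]
```

      With a 1:1 interleave (factor 1) a slow controller misses every sector and waits a full revolution each time; spacing them out trades a little per-sector latency for never missing the next one.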
  • by jimfrost ( 58153 ) * <> on Wednesday May 19, 2004 @03:34PM (#9197729) Homepage
    Ok, we have this filesystem fragmentation bugaboo that's been plaguing MS-DOS and Windows for more than twenty years. We've got a whole industry built around tools to fix it.

    That would be well and good if the problem were otherwise insurmountable. But, it turns out, we've known how to minimize, if not entirely eliminate, filesystem fragmentation for twenty years now - since the introduction of the BSD Fast File System.

    It doesn't take expensive (in time, if not in money) tools. All it takes is a moderately clever block allocation algorithm - one that tries to allocate a block close in seek time to the previous one, rather than just picking one at random.

    The fundamental insight that the authors of FFS had was that while there may only be one "optimal" block to pick for the next one in a file, there are tens of blocks that are "almost optimal" and hundreds that are "pretty darn good." This is because a filesystem is not a long linear row of storage bins, one after another, as it is treated by many simplistic filesystems. The bins are stacked on top of each other, and beside each other. While the bin right next to you might be "best", the one right next to that, or in another row beside the one you're on, or in another row above or below, is almost as good.

    The BSD folk decided to group nearby bins into collections and try to allocate from within collections. This organization is known as "cylinder groups" because of the appearance of the group on the disk as a cylinder. Free blocks are managed within cylinder groups rather than across the whole disk.

    It's a trivial concept, but very effective; fragmentation related delays on FFS systems are typically within 10% of optimum.

    This kind of effectiveness is, unfortunately, difficult to achieve when the geometry of the disk is unknown -- and with many modern disk systems the actual disk geometry is falsely reported (usually to work around limits or bugs in older controller software). There has been some research into auto-detecting geometry but an acceptable alternative is to simply group some number of adjacent blocks into an allocation cluster. In any case, many modern filesystems do something like this to minimize fragmentation-related latency.

    The gist of this is that Microsoft could have dramatically reduced the tendency towards fragmentation of any or all of their filesystems by doing nothing else but dropping in an improved block allocator, and done so with 100% backward compatibility (since there is no change to the on-disk format).

    Maybe it was reasonable for them not to bother to so extravagantly waste a few days of their developers' time on MS-DOS and FAT, seeing as they only milked that for eight or nine years without significant improvement, but it's hard to explain the omission when it came to Windows NT. NTFS is a derivative of HPFS, which is a derivative of FFS. They had to have known about cylinder group optimizations.

    So the fact that, in 2004, we're still seeing problems with filesystem fragmentation absolutely pisses me off. There's no reason for it, and Microsoft in particular ought to be ashamed of themselves. It's ridiculous that I have to go and defragment my WinXP box every few months (which takes like 18 hours) when the FreeBSD box in the basement continues to run like a well-oiled machine despite the fact that it works with small files 24 hours a day, 365 days a year.

    Hey Microsoft: You guys have like fifty billion bucks in the bank (well, ok, 46 or 47 billion after all the antitrust suits) and yet you can't even duplicate the efforts of some hippy Berkeleyite some twenty years after the fact? What's up with that?

    (I mean "hippy Berkeleyite" in an affectionate way, Kirk. :-)
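    The "almost optimal" allocation idea described above is simple enough to sketch. This toy version (the group size and block numbers are arbitrary) just prefers a free block in the same allocation group as the file's previously written block, falling back to the nearest group that still has space, which is the essence of the cylinder-group heuristic:

```python
def alloc_near(free, prev, group_size=64):
    """Toy FFS-style block allocator.

    `free` is a set of free block numbers; `prev` is the file's last
    allocated block. Prefer a block in prev's allocation group, else
    the nearest group with free blocks, then the block closest to prev.
    """
    if not free:
        raise MemoryError("no free blocks")
    home = prev // group_size
    # nearest allocation group that still has free blocks
    group = min({b // group_size for b in free},
                key=lambda g: abs(g - home))
    # within that group, the free block closest to the previous one
    return min((b for b in free if b // group_size == group),
               key=lambda b: abs(b - prev))
```

    A simplistic allocator just takes the first hole it finds, wherever that is; this one keeps successive blocks of a file in the same neighborhood, which is most of the fragmentation battle, and it needs no change to the on-disk format.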

  • by Artifakt ( 700173 ) on Wednesday May 19, 2004 @04:29PM (#9198333)
    There are so many comments already posted to this topic that miss the following point that I think the best way to deal with it is to start a completely new thread. I'm sorry if it seems more than a little obvious to some of you:

    There are fundamentally only a few types of files when it comes to fragmentation.

    1. There are files that simply never change size, and once written don't get overwritten (type 1). Most programs are actually type 1, if you use sufficiently small values of never :-), such as until you would need to perform disk maintenance anyway for lots of other reasons in any 'reasonable' filesystem. A typical media file is probably type 1 in 99%+ of cases.

    2. There are files that will often shorten or lengthen in use, for example a word processor document in .txt format while it is still being edited by its creator (type 2). (That same document may behave as effectively type 1 once it is finished, only to revert to type 2 when a second edition is created from it.)

    Of type 2, there are files of type 2a: files that may get either longer or shorter with use, on a (relatively) random basis. As a relatively simple case, take a .doc file, which may become longer for obvious reasons like more text, but may also become longer for less obvious reasons, such as the hidden characters created when you make some text italic or underlined (reasons that are not obvious to most end users, and often not predictable in detail even to people who understand them better). The default configuration for a Windows swap file is type 2a. It is likely to be hard for an automated system to predict the final size of type 2a files, as that would imply a software system of near human-level intelligence to detect patterns that are not obvious and invariant to a normal human mind. It may be possible to predict in some cases only because many users are unlikely to make certain mistakes (i.e. cutting and pasting an entire second copy of a text file into itself is unusual, while duplicating a single sentence or word isn't).

    Then there are files of type 2b: files that get longer or shorter only for predictable reasons (such as a Windows .bmp, which will only get larger or smaller if the user changes the color depth or size of the image, and not if he just draws something else on the existing one). A good portion of users (not all by any means) will learn what to expect for these files, which suggests a well-written defragger could theoretically also auto-predict the consequences of the changes a user is making.

    3. Then there are type 3 files, which only get longer. These too have predictable and unpredictable subtypes. Most log files, for example, are set up to keep getting longer on a predictable basis when their associated program is run (type 3b). Anything that has been compressed (i.e. .zip) is hopefully a 3b, but only until it is run; then the contents may be of any type. A typical Microsoft patch is a 3a (it will somehow always end up longer overall, but you never know just which parts will vary or why).

    4. Type 4 would be files that always get smaller, but there are no known examples of this type :-).

    These types are basic in any system, as they are implied by fundamental physical constraints. However, many defrag programs use other types instead of starting from this model, often with poor results.

    In analyzing what happens with various defrag methods, such as reserving space for predicted expansion or defragging in the background/on the fly, the reader should try these various types (at least 1 through 3) and see what will happen when that method is used on each type. Then consider how many files of each type will be involved in the overall process, and how often.

    For example, Some versions of Microsoft Windows (tm) FAT32 defragger move files that have been accessed more than a certain number of times (typically f
  • Fast! (Score:3, Informative)

    by rixstep ( 611236 ) on Thursday May 20, 2004 @03:24AM (#9202079) Homepage
    One thing people rarely talk about is how fast HFS+ is. Or perhaps how slow UFS on the Mac with OS X is. But the difference is more than dramatic: a clean install of OS X using HFS+ can take less than half an hour - including the developer tools. The same procedure using UFS seems to never end.

    It might be the way they've 'frobbed' UFS for use with OS X Server, but UFS really gives high priority to disk ops with GUI ops taking a back seat, and yet HFS+ is blazingly fast in comparison.

    I believe in a good clean machine as much as anyone, and I do see that DiskWarrior will probably be needed now and again, but the speed alone is quite a pedigree for HFS+, IMHO.
