
Measuring Fragmentation in HFS+ 417

keyblob8K writes "Amit Singh takes a look at fragmentation in HFS+. The author provides numbers from his experiments on several HFS+ disks, and more interestingly he also provides the program he developed for this purpose. From his own limited testing, Apple's filesystem seems pretty solid in the fragmentation avoidance department. I gave hfsdebug a whirl on my 8-month-old iMac and the disk seems to be in good shape. I don't have much idea about ext2/3 or reiser, but I know that my NTFS disks are way more fragmented than this after a similar amount of use."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Huh? (Score:5, Insightful)

    by Anonymous Coward on Wednesday May 19, 2004 @01:04PM (#9196442)
    but I know that my NTFS disks are way more fragmented than this after a similar amount of use

    Is this based off of instinct, actual data, or what?
  • by Joe5678 ( 135227 ) on Wednesday May 19, 2004 @01:14PM (#9196518)
    I guess it may take slightly longer to open a file, but that seems like it would be worth it in my opinion.

    That would seem to defeat the purpose to me. The main reason you want to avoid fragmentation of the data is that fragmented data takes longer to pull from the disk. So if by preventing fragmentation you slow down pulling data from the disk, you have just defeated your purpose.
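    A back-of-the-envelope sketch of the cost the parent describes (all numbers here are illustrative assumptions for a 2004-era drive, not measurements): each extra fragment costs roughly one seek plus rotational latency, which quickly dwarfs the raw transfer time.

    ```python
    # Rough model: read time = transfer time + one (seek + rotational latency)
    # per fragment. All constants are illustrative assumptions.
    SEEK_MS = 9.0          # average seek time
    ROT_LATENCY_MS = 4.2   # half a rotation at 7200 RPM
    TRANSFER_MB_PER_S = 50.0

    def read_time_ms(file_mb, fragments):
        """Estimated time to read a file split into `fragments` extents."""
        transfer = file_mb / TRANSFER_MB_PER_S * 1000.0
        overhead = fragments * (SEEK_MS + ROT_LATENCY_MS)
        return transfer + overhead

    print(f"contiguous: {read_time_ms(10, 1):.0f} ms")    # ~213 ms
    print(f"50 extents: {read_time_ms(10, 50):.0f} ms")   # ~860 ms
    ```

    The seek overhead, not the extra transfer, is what makes the fragmented read four times slower in this toy model.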
  • Re:Give it a rest (Score:3, Insightful)

    by marktoml ( 48712 ) * <marktoml@hotmail.com> on Wednesday May 19, 2004 @01:19PM (#9196566) Homepage Journal
    Agreed and the fragmentation on NTFS can have subtle effects (such as fragmenting the MFT) that are NOT easily fixed by simply running a defragmentation tool.

  • by Daniel Dvorkin ( 106857 ) * on Wednesday May 19, 2004 @01:25PM (#9196616) Homepage Journal
    What everybody else said. Back up your data and replace that drive NOW.
  • Re:Anonymous (Score:3, Insightful)

    by EsbenMoseHansen ( 731150 ) on Wednesday May 19, 2004 @01:29PM (#9196644) Homepage
    The main problem with fragmentation is cache faults. The disk drives assume that you will be reading the following sector: when you don't, you'll have to wait for the sector you requested to be brought in from disk. This applies even in the face of the tricks you mention.
  • Disk Fragmentation (Score:5, Insightful)

    by List of FAILURES ( 769395 ) on Wednesday May 19, 2004 @01:33PM (#9196678) Journal
    A lot of people simply equate disk fragmentation with slow application execution and opening of data files. While this is the most visible effect that fragmentation has on a system, it's not the only one. If you are dealing with large files (multi-track audio, video, databases) then you will get a different kind of performance hit due to the non-contiguous nature of the free space you are writing to. If you want to capture video with no dropouts, you really want a drive that has all of its free space basically in one location. This allows you to write those large files with no physical disruption in location. Please do not think that the only benefit to unfragmented space is just "my programs launch faster". If you do any real kind of work on your system with large data files, you should know that a defragmented drive is a godsend.
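    A rough sketch of the video-capture case (stream rate, disk speed, and seek cost are all illustrative assumptions): once free space is chopped into small extents, the per-extent seek overhead can drop the effective write rate below the camera's data rate, and frames get dropped.

    ```python
    # Toy model of capturing a video stream onto fragmented free space:
    # every extent boundary costs one seek. All numbers are assumptions.
    STREAM_MB_PER_S = 25.0   # incoming video data rate
    DISK_MB_PER_S = 40.0     # sustained sequential write speed
    SEEK_MS = 13.0           # seek + rotational latency per extent switch

    def sustained_write_rate(extent_mb):
        """Effective MB/s when free space comes in extents of extent_mb."""
        write_ms = extent_mb / DISK_MB_PER_S * 1000.0
        return extent_mb / ((write_ms + SEEK_MS) / 1000.0)

    for extent in (1000.0, 10.0, 0.5):
        rate = sustained_write_rate(extent)
        print(f"{extent:7.1f} MB extents -> {rate:5.1f} MB/s, "
              f"dropouts: {rate < STREAM_MB_PER_S}")
    ```

    With large contiguous extents the drive keeps up easily; at half-megabyte extents the same drive can no longer sustain the stream.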
  • sysinternals.com (Score:2, Insightful)

    by FatSean ( 18753 ) on Wednesday May 19, 2004 @01:36PM (#9196710) Homepage Journal
    I have a program from there that at startup will check the MFT, swapfiles and other important files and will make each one a contiguous collection of disk blocks. Gotta be done then, as you can't lock them once Windows is completely up.
  • by Daytona955i ( 448665 ) <{moc.oohay} {ta} {42yugnnylf}> on Wednesday May 19, 2004 @01:47PM (#9196804)
    Do you actually check your drives? I just got a laptop with XP and I've been using it for less than a month now. After reading this thread I thought, well, my computer is fairly new, but I'll see how it looks anyway. After running Disk Defragmenter and clicking analyze I get:
    Analysis is complete for: (C:)
    You should defragment this volume.

    I then looked at the report and found the following:
    Total fragmentation = 21%
    File fragmentation = 42%
    Free space fragmentation = 1%

    Pretty bad, especially considering I've had the laptop for less than a month and that I still have over half the 40 gig drive free. Oh, and HFS+ in Panther is journaled, so no loss of file system integrity there. So compared to HFS+ in Panther, I'd say that NTFS in XP sucks! It would be interesting if someone has done similar research on ext3.
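    For what it's worth, the numbers in that report are consistent with "total" being the simple mean of the file and free-space figures; that formula is a guess inferred from the report above, not documented XP behavior:

    ```python
    # Guessed relationship between the three percentages in the XP defrag
    # report (an observed pattern from the numbers above, not a documented
    # formula -- treat it as an assumption).
    def total_fragmentation(file_frag_pct, free_frag_pct):
        """Total fragmentation as the integer mean of the two components."""
        return (file_frag_pct + free_frag_pct) // 2

    print(total_fragmentation(42, 1))  # 21, matching the report's 21%
    ```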
  • by 13Echo ( 209846 ) on Wednesday May 19, 2004 @01:49PM (#9196811) Homepage Journal
    NTFS isn't technically "as old" as you might think. Each version of NT over the past few years has added several upgrades to NTFS.

    http://en.wikipedia.org/wiki/NTFS

    NTFS has its strong points. It is reliable and has several extensions that make it quite flexible. On the other hand, it's not hard to "outdo NTFS" in some respects. There are many things that HFS+ and ReiserFS do better than NTFS. There are many things that NTFS does better.

    I think that NTFS is pretty good when it comes to cataloging changes to the drive. NTFS' speed leaves something to be desired, though, and the nature of its fragmentation (though better than FAT32) still presents performance problems.
  • by Anonymous Coward on Wednesday May 19, 2004 @01:54PM (#9196846)
    windows is fragmented upon install.

    windows' install process is a frigging mess: copying things to the drive, uncompressing them, then deleting them... plus other really stupid ways of installing and setting the whole system up.

    if you want to increase your fragmentation to 50% install office 2003... that app will thrash the disks for a good 10 minutes during install...
  • by spectecjr ( 31235 ) on Wednesday May 19, 2004 @01:55PM (#9196854) Homepage
    This is a very arcane procedure in XP. I shall try to explain, but only a professional should attempt this.

    1. Right click on drive icon, select properties
    2. Select Tools tab and click on "Defragment Now"
    3. Click on "Analyze"
    4. When analysis finishes, click on "View Report"

    This shows two list windows, one containing general properties of the disk such as volume size, free space, total fragmentation, file fragmentation and free space fragmentation. The second list shows all fragmented files and how badly they are fragmented.


    If you're not using the same tool to measure fragmentation on each OS, how do you know that they're using the same semantics to decide what a fragmented file is?

    IIRC, the Linux tools use a different metric to calculate fragmentation than the NT ones.
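    A quick illustration of the parent's point. The two metrics below are both plausible definitions (assumptions for illustration, not what any particular tool actually computes), and they disagree wildly on the same set of files:

    ```python
    # Two plausible fragmentation metrics applied to identical data,
    # showing why numbers from different tools aren't comparable.
    files = {"a": 1, "b": 1, "c": 1, "d": 20}   # file -> number of extents

    def pct_fragmented_files(files):
        """Share of files having more than one extent."""
        frag = sum(1 for n in files.values() if n > 1)
        return 100.0 * frag / len(files)

    def pct_excess_extents(files):
        """Extents beyond the ideal one-per-file, as a share of all extents."""
        total = sum(files.values())
        return 100.0 * (total - len(files)) / total

    print(pct_fragmented_files(files))  # 25.0  -> "barely fragmented"
    print(pct_excess_extents(files))    # ~82.6 -> "heavily fragmented"
    ```

    Same disk, same extents, wildly different percentages, so cross-OS comparisons need the same metric on both sides.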
  • by MattHaffner ( 101554 ) on Wednesday May 19, 2004 @02:04PM (#9196925)
    Are you talking about the "Optimizing System" phase? As far as I know, that updates binary-library prebindings--not fragmentation. You can read more about it here:

    http://developer.apple.com/documentation/Performance/Conceptual/LaunchTime/Tasks/Prebinding.html

    In theory, when you install anything (on any system) and have a reasonable amount of contiguous free space on your disk, the installed files should always be unfragmented since I believe that's what most file systems look for first to allocate: a large chunk of contiguous space.

    Fragmentation typically occurs more when you open a file, increase its size, and write it back out. But operations that write large files to disk without knowing the final size beforehand may also do this to files that were only written once. For example, some of the largest fragmented files on my HFS+ volume are things snagged with BitTorrent. The fragments in these files are very regular chunks of blocks, which could be the typical 'buffer' size BT grabs when writing.
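    A toy model of why incrementally grown files fragment (this is a naive bump-pointer allocator for illustration, not how HFS+ actually allocates): when two files grow at the same time, their extents interleave and each file ends up with many fragments.

    ```python
    # Simulate two files growing simultaneously under an allocator that
    # always hands out the next free blocks. Toy model, not HFS+.
    def allocate_interleaved(n_chunks, chunk_blocks=4):
        """Return block ownership when files A and B grow chunk by chunk."""
        disk = []
        for _ in range(n_chunks):
            for file_id in ("A", "B"):
                disk.extend([file_id] * chunk_blocks)
        return disk

    def count_extents(disk, file_id):
        """Count runs of contiguous blocks belonging to file_id."""
        extents, prev = 0, None
        for owner in disk:
            if owner == file_id and prev != file_id:
                extents += 1
            prev = owner
        return extents

    disk = allocate_interleaved(8)
    print(count_extents(disk, "A"))  # 8 extents instead of the ideal 1
    ```

    Growing one chunk at a time, each file collects one extent per chunk, which matches the regular fragment sizes described above.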
  • by SideshowBob ( 82333 ) on Wednesday May 19, 2004 @02:05PM (#9196928)
    That isn't a filesystem that is a tape. Any number of tape systems exist, pick whichever one you like.
  • by ahknight ( 128958 ) * on Wednesday May 19, 2004 @02:08PM (#9196946)
    Last time I checked, filesystems were also operating system components. Often these components might be referred to as drivers.

    Then you didn't check hard. Again, HFS+ is a specification of how to write data to media in order to organize another collection of data. The implementation is what handles the defragging. There are no drivers involved as drivers are the software component of a hardware/software union and there is no hardware involved at this level (just logical organization).
  • Oh please. (Score:2, Insightful)

    by warrax_666 ( 144623 ) on Wednesday May 19, 2004 @02:24PM (#9197147)
    Which means this isn't a valid test.

    It's a perfectly valid test -- it measures how much fragmentation can be observed after a certain amount of use. According to your logic we couldn't compare any properties of NTFS/ReiserFS/FAT32/HFS+ because they work differently.
  • Re:Huh? (Score:5, Insightful)

    by bfg9000 ( 726447 ) on Wednesday May 19, 2004 @02:34PM (#9197253) Homepage Journal
    (Yes, I run WinXP on my Toshiba laptop -- deal with it.)

    Why would anybody have a problem with you running Windows XP on your laptop? I'm a card-carrying Linux Zealot, and I don't have a problem with it.
  • by ewhac ( 5844 ) on Wednesday May 19, 2004 @02:44PM (#9197330) Homepage Journal

    No, it's just that the defragger built into Win2K/XP is shite. It runs like molasses in liquid helium, and it almost never does a complete job in a single run. You have to run it several times in a row before it's even close to doing a reasonable job. And if it's your system drive, then there are some files (including the swap file) that it simply won't touch no matter how badly the blocks are scattered. This can be a real pain in the posterior if you're trying to defrag a drive in preparation for a Linux install.

    Schwab

  • by Unregistered ( 584479 ) on Wednesday May 19, 2004 @02:52PM (#9197388)
    You need at least 10% free space for Windows to keep the fs reasonably defragmented. Put some larger hdds in there and it'll work much better.
  • by Woody77 ( 118089 ) on Wednesday May 19, 2004 @03:20PM (#9197613)
    I'm a software developer (C++, mostly msdev). As such, I recompile a LOT. With a drive dedicated mostly to source and intermediate files, about 6GB of a 9GB drive, it regularly fragments itself into molasses. It normally takes a while, but it happens. The continual replacement of files just drives NTFS up a wall. This is using both win2k and xp. I end up needing to defrag about once a month.

    The problem really rears its head once the space between the files isn't big enough for the new files, and then things start getting fragmented in a hurry.

    And does it make a difference? In disk-intensive things like compiling (lots of small modules that get compiled into big binaries/debug symbol files), it really has an impact. The more the heads have to move around, the slower it all gets.

    I also use lots of partitions to isolate things, so that even as it fragments, the fragments don't get too far apart. However, if you do a lot of swapping to disk, the partitions will kill you, unless the swap is on the same partition as where you're working. So I tend to foist the swap off on another drive entirely (all SCSI) so that the seeking is reduced as much as possible.

    Placebo? probably is for most people that aren't continually writing/rewriting to disk. But if you're constantly reading/writing/erasing files, it is useful.

    Now if I could just figure out a way to safely split up my root filesystem on linux to keep the heavily used trees separate from the not-so-heavily used ones (performance experiment). Mainly my /var/tmp/*, which gets used heavily for compiling (gentoo and portage).
  • by michael_cain ( 66650 ) on Wednesday May 19, 2004 @03:24PM (#9197640) Journal
    You must be too young to remember FAT-based systems.

    Youngster. Go back far enough in UNIX and it required PERFECT disk packs to function -- no handling of bad sectors. Of course, those were the days when disk "drives" were the size of a small washing machine, the top opened, and you loaded/unloaded the multi-platter disk pack that was the size of a hat box. Was always interesting to see one of the gurus arrive to troubleshoot your system carrying their own disk pack with their specialized utilities... :^)

  • by itwerx ( 165526 ) on Wednesday May 19, 2004 @04:08PM (#9198067) Homepage
    Pretty bad especially considering I've only had the laptop for less than a month...

    Any new machine will have an image dumped onto the hard-drive by the manufacturer.
    Most imaging apps don't bother with defragmenting so you probably started out with it fairly fragmented from the initial build of the image.
  • by cft_128 ( 650084 ) on Wednesday May 19, 2004 @06:05PM (#9199486)
    How exactly is FAT not a "real" file system? It's still very widely in use, particularly on devices smaller than 2 GB (digital cameras come to mind). It's still useful because it's so simple, well-known, and easy to implement. That makes it real enough to be "real" to most people with a clue.

    OK, I was being a bit snobbish in saying it is not a 'real filesystem'; it does have its uses (small devices, floppies, etc.). BUT, even when it was originally designed it was considered primitive, and it had many known flaws, among them: it is very easily fragmented (what we are all talking about), it has no redundancy to help recover from failure, and it wastes quite a bit of disk space.

    My comparison between NTFS and FAT is valid because if you are running Windows, those are the only two filesystems you have to choose between. Comparing NTFS with, for instance, ReiserFS is not really interesting because they're not really alternatives to each other. Unless you choose your operating system based on what filesystems it supports...

    The article was comparing HFS+ to NTFS, not Windows filesystems. You yourself said NTFS deals with fragmentation far better than many other file systems, most notably FAT (emphasis mine), which implies you were comparing NTFS not only to other Windows file systems but to many other filesystems. It was that apparent straw man argument that I was pointing out. NTFS is leaps and bounds better than FAT; I 100% agree with you on that. It could be better, and I wish it were open source (last I checked it was not), but it is still the best option Windows users have.

  • by TheNetAvenger ( 624455 ) on Wednesday May 19, 2004 @06:16PM (#9199613)
    And yet not only was NTFS one of the first file systems to help prevent fragmentation, and the performance issues of fragmented files, through its MFT structure; Windows 2000 and newer NT platforms also perform defragging while the system is idle, moving files not only to be defragged but for optimal placement.

    So glad Apple was the innovator here again... Geesh. (except they are still following in the footsteps of the NT team).

    Do a performance analysis of files that are fragmented on NTFS compared to files that are fragmented on HFS+ and you will get part of what I am talking about. Performance degradation is not as much of an issue with NTFS as it is with HFS+; read the NTFS whitepapers.

    Additionally, speed-critical files (like paging files, user hives, etc.) are always automatically defragged during login and logoff, in addition to being processed during idle machine time.

    The irony is that NT has been doing this for years, and even Win98 had background defrag and file-optimization techniques with FAT32.

    So tell me again about this great HFS+ innovation and how it works so much better at defragging files than NTFS.

    I haven't defragged the laptop I am typing this on for months, and yet the only fragmented files are a small number of large downloads.

    Apple geeks get a new feature that everyone has had for years, and they think Jobs invented the wheel.

    Geesh..!!!
  • by WgT2 ( 591074 ) on Wednesday May 19, 2004 @06:21PM (#9199677) Journal

    A word about browsers (and anything else that requires change):
    People, in general (more than 50% of them), prefer to resist change, and for that matter, extra work and/or thinking. It's just the way they are. It's what explains product loyalty. In this case, the product loyalty is browser-based.

    In my job as a web server support admin, I find that 95% or more of the people I speak with in support situations are not even aware of the alternatives available to them. In fact, just last Sunday, a friend of mine was showing off his new PowerBook to me (by the way, even though I am a complete Linux advocate, you have to give credit where credit is due: the Mac has a great GUI). I had to laugh during his enthusiastic demo of Mac OS X's features when my friend opened up Safari and went, "Check this out. It's a feature called 'tabbed browsing.'" He was a kid in a candy store who had just found a new, profound flavor of bubble-gum or something. But how could I not laugh at this previously 100% Windows user's intro to me of something that I began using in Opera back around 5.x-6.x (I really don't remember if 5.x had tabs or not. I really don't care, since that browser drives me crazy. But that's just me.) Translation: it's been around for years. In my work day I begin with 12-13 tabs opening in Firefox (NT 2000 doesn't like that, even with 512MB RAM, but it gets by well enough). The number of tabs only increases from there, unless there's an accident of closing a tab. But no big deal there either; I just open another one and then drag it back to where I normally would have it in my list of tabs. You won't find anything like that in a browser direct from MS.

    Another example: my co-workers, particularly the NT techs. Most, certainly not all (thank God), of our NT techs still use IE for their work. I don't really know what they need for their work, but I've seen their desktops and their taskbars; WHAT A MESS! It's beyond me why they would waste their time with a browser (read: IE) that doesn't organize their open web pages into one taskbar entity, because they DO use other programs on the NT 2000 desktop, which we all must use at my job, regardless of the servers we admin for. (If you haven't guessed yet, I don't admin for NT servers, I get the pleasure and ease of admining for Linux boxes. And a big THANK GOD for that!)

    Back to my point: most people are not aware of features in other browsers AND, if they are aware of new innovations (read: tabbed browsing, which is one reason I will never go back to IE), they are not in any hurry to change and think and evaluate something else, however troubling their current browser can be at times: pop-ups, vulnerabilities, "________________" [fill in the blank], lack of innovation, etc.

    So what if most of /. visitors are Windows based? There are plenty of better choices than MS products, even on their own OS platform. But people the world over resist change; they get stuck in a rut, good or bad in its results, and they either don't like to change, don't "need" to change, or cannot change. Thus, the end result is resistance to change, for better or for worse.

  • Re:Offtopic (Score:3, Insightful)

    by bnenning ( 58349 ) on Wednesday May 19, 2004 @09:06PM (#9200722)
    Tell ya what: FOAD.

    Ah, liberal tolerance rears its head again.

    Bush lied, Bush continues to lie, and our country is far, FAR more in danger now than when we started this stupid fucking war.

    I'd be interested in the metric you use to compute danger, seeing as how there have been exactly zero terrorist attacks on US soil since 9/11. (By the way, were you out protesting the "stupid fucking war" in Serbia, or are Democrats allowed to invade sovereign nations who pose no external threat?)

    Bush said he was 100% certain that Saddam had massive stores of WMD and a nuclear program.

    Is it just barely possible that maybe he really did believe that, and he was mistaken? Intelligence agencies have been known to make mistakes before. Never mind, I forgot he's from Texas and worked in the oil industry so he's obviously made a pact with Satan.

    Oh, but of course, he never ACTUALLY said it. He just IMPLIED it, which makes it all ok, doesn't it? It was a failure of intelligence, which means it ain't his fault! Nothing is his fault! And I mean shit, who needs morals when you're having to deal with them dirty hippie commie faggot libruhls and that libruhl media?

    It's amusing watching you guys get progressively more unhinged. Kerry should be leading Bush by a healthy margin given the Iraq situation and that the economic recovery isn't completely visible yet, but when your talking points are all variations on "Bush is a fascist", you can't expect much from the middle. I'm not a huge fan of Bush, but I'll be enjoying his victory on election night just envisioning the enragement of the left.
  • by dasmegabyte ( 267018 ) <das@OHNOWHATSTHISdasmegabyte.org> on Thursday May 20, 2004 @12:09AM (#9201519) Homepage Journal
    Which is why Apple is such a great company.

    At some companies, a developer would go to his project manager, propose this feature, and get a head shake. Too much work to test and spec, not worth the gains. Let's devote our time to our core competencies.

    Apple on the other hand was built on details like this. In fact, one of my favorite things about OS 10.3 is Expose...a feature nobody really asked for, and now I can't live without it (fuck virtual desktops...I want one desktop I can use!)
