
Jordan Hubbard On Next-Generation Packaging

GlobalEcho writes: "Developers associated with Darwin are beginning to think about package management and source building. At issue is whether something like dpkg, RPM or *BSD's ports could suffice, or whether they are all just way too mid-90's. Jordan Hubbard himself (now of Apple) weighed in with his opinions (user and passwd 'archives'). Apparently he thinks it is time for something more advanced, and he gives some ideas about what that might look like. Does anyone else have good ideas?"
  • Jordan on packaging? (Score:3, Informative)

    by rtaylor ( 70602 ) on Wednesday February 20, 2002 @07:35PM (#3040838) Homepage
    If I'm not mistaken, the whole Ports thing was one of Jordan's great inventions. It's succeeded quite well using standard distributed tools (i.e. makefiles, compilers, and the like).

    Perhaps I'm wrong. Nice to see he's still having great thoughts. Hope whatever packaging system they come up with is portable enough to work on a large chunk of systems (Linux in various configs, the BSDs, Solaris, Darwin, etc.).
  • Is dpkg THAT bad? (Score:3, Interesting)

    by moosesocks ( 264553 ) on Wednesday February 20, 2002 @07:42PM (#3040876) Homepage
    Quite frankly, dpkg isn't all that bad. It has MAJOR issues, no doubt about that, but it has many great concepts which can't be found anywhere else (correct me if I'm wrong; they're still good ideas, though!)

    The dependency and dependency-resolution system - dpkg has the most advanced dependency system known to Unix, no doubt about that. To solve these dependencies, dpkg goes to its list of package locations (complete with HTTP and FTP locations, CD-ROMs, etc. if necessary) and grabs the required packages from the net (the user is prompted on this, of course).

    Easy upgrades - no other system allows me to bring my system up to date in less time (note: Debian isn't updated often, so this is generally unappreciated):
    $ apt-get update
    $ apt-get upgrade
    (hit y to confirm)
    All from the command prompt.

    I'm not sure what else there is that makes it good. But RPM certainly doesn't have these features.

    What it lacks:
    It's buggy as hell - it's easier than signing up for AOL to nuke your system this way (in other words, it happens quite often by accident).

    No good front-ends - There is no good program to browse available packages, install them, enter configuration information (more on that in a sec) and remove them. You should enter the package you want to install; a wizard is displayed, it grabs the package from a mirror or local source, solves dependencies, installs it and any dependent packages, configures it, and exits.

    Configuration - dpkg has a system that allows the package to prompt for a few options before it is installed. This is a good thing, but the packages usually don't ask enough. Users need full customization (nothing nitpicky, big stuff), so you don't have to manually edit configuration files by hand.

    Available packages - this is where dpkg falls flat on its face. 95% of Unix packages are RPMs; that never helps. A unified packaging system needs to be put into place.

    I dunno what I forgot?
    • by CentrX ( 50629 )
      To solve these dependencies, dpkg goes to its list of package locations (complete with HTTP and FTP locations, CD-ROMs, etc. if necessary) and grabs the required packages from the net (the user is prompted on this, of course)

      No, the dependency satisfaction and easy installation and upgrading are features of apt, a frontend to dpkg, not of dpkg itself.

      note: Debian isn't updated often, so this is generally unappreciated

      No, if you use the testing or unstable branches, Debian is updated daily. If you stick with stable, the easy downloading and installation is still good for installing new software.
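
      For illustration, which branch apt tracks is just a matter of which distribution name appears in /etc/apt/sources.list; a rough sketch (the mirror URL is only an example):

      # /etc/apt/sources.list - pick the branch you want to track
      deb http://http.us.debian.org/debian stable main contrib non-free
      # or, for the faster-moving branches:
      # deb http://http.us.debian.org/debian testing main
      # deb http://http.us.debian.org/debian unstable main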

      It's buggy as hell - it's easier than signing up for AOL to nuke your system this way (in other words, it happens quite often by accident).

      I disagree. Nothing of this magnitude has ever happened to the systems I've administered, and I haven't heard of it happening to anyone else. If you use unstable, which means you should be prepared for such occurrences, there's a slight possibility of this happening, but that's a problem with the actual software packages, not a problem with dpkg or apt.

      No good front-ends - There is no good program to browse available packages, install them, enter configuration information (more on that in a sec) and remove them. You should enter the package you want to install; a wizard is displayed, it grabs the package from a mirror or local source, solves dependencies, installs it and any dependent packages, configures it, and exits.

      No; dselect, aptitude, and deity are some of the many frontends to dpkg and apt that allow browsing of packages. When using dselect, for instance, you select the packages you want to install and uninstall and go to "Install". It does exactly what you say it doesn't: it grabs the package from a mirror or local source, grabs dependencies, and installs it and any dependent packages. Then a debconf configuration screen ("wizard") is brought up, in the interface you've chosen (dialog, GNOME, etc.), and you can configure it, or it configures itself depending on the level of interactivity you said you wanted beforehand. Then it exits.

      Configuration - dpkg has a system that allows the package to prompt for a few options before it is installed. This is a good thing, but the packages usually don't ask enough. Users need full customization (nothing nitpicky, big stuff), so you don't have to manually edit configuration files by hand.

      The packages ask questions based on the level of interactivity you chose when you configured debconf (or depending on a command-line option when you reconfigure the package). "Big stuff" is what's given to you. If you want to configure everything, editing configuration files is the way to go.
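
      As a rough illustration of that command-line option (the package name here is only an example), the questions can be re-asked, or the threshold changed, with dpkg-reconfigure:

      dpkg-reconfigure exim                  # re-run a package's debconf questions
      dpkg-reconfigure --priority=low exim   # ask even the low-priority ones
      dpkg-reconfigure debconf               # change the default priority/frontend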

      Available packages - this is where dpkg falls flat on its face. 95% of Unix packages are RPMs; that never helps. A unified packaging system needs to be put into place.

      Frankly, 8600 software packages in one easily accessible, central repository of stable, well-maintained packages seems like a lot to me. Most packages someone would ever want are there, and others provided as RPMs can be converted to .deb format with alien. Regardless, this has nothing to do with the quality of the packaging format or the packaging tools. Any "next-generation" package format would start with no packages, so dpkg beats it at that.
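
      For the RPM case, the alien route is roughly this (file names invented; note that alien typically bumps the package revision when it converts):

      alien --to-deb somepackage-1.0-1.i386.rpm   # produces somepackage_1.0-2_i386.deb
      dpkg -i somepackage_1.0-2_i386.deb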

      • I nuked several debian setups this way. I've had apt-get segfaulting on me, removing stuff that I didn't want removed, installing stuff that didn't work. Of course all of this was on testing (so it really deserves that name). Potato is quite stable but completely useless due to the complete lack of packages created in this century (minus some security fixes).

        I'm sure most of this could be fixed by a more experienced debian user. But the whole point is that it shouldn't need to be fixed in the first place.
    • No, it's not. (Score:3, Insightful)

      by V. Mole ( 9567 )

      It's buggy as hell - it's easier than signing up for AOL to nuke your system this way (in other words, it happens quite often by accident).

      Huh? I suspect user error. I've been using Debian/dpkg since pre-1.0 days, and I can count the number of times dpkg has had system-wrecking errors on one hand. I can count the number of times it actually wrecked my system on one finger -- after that, I got a little more cautious about upgrading dpkg in the unstable tree (i.e. wait a few hours and read debian-devel). There are ways to tell dpkg to hose your system, but those aren't bugs; those are options with big nasty warnings next to them.

      Now, there have been many more occurrences of buggy packages screwing things up, but that's hardly dpkg's fault. And if you live on unstable, well, that's what you get.

      No good front-ends -

      apt-get install aptitude

      (Not in stable, but coming soon [1] to a release near you.)

      Configuration - dpkg has a system that allows the package to prompt for a few options before it is installed. This is a good thing, but the packages usually don't ask enough.

      Again, not a dpkg issue. If the package doesn't provide sufficient configuration flexibility, it's an issue with the particular package.

      Available packages - this is where dpkg falls flat on its face. 95% of Unix packages are RPMs; that never helps. A unified packaging system needs to be put into place.

      I don't know where you got that statistic. Yes, maybe 95% of packages you see floating around random websites are in RPM form, but I doubt there are 20 times as many software packages available as RPMs vs. debs. Most upstream developers don't provide debs, because there's a Debian developer to do so for them; the fact that mozilla.org has RPMs but not debs doesn't mean there aren't .debs of Mozilla. (I'll allow that the ratio for non-free software is much worse, for fairly obvious reasons.)

      Steve

      [1] "soon" in Debian terms, at least :-)

    • Re:Is dpkg THAT bad? (Score:4, Informative)

      by moof1138 ( 215921 ) on Wednesday February 20, 2002 @08:40PM (#3041196)
      dpkg is nice, and I have not found it to be as dangerous or as buggy as you have, though I have not delved deep into the details. I have been running Debian testing and so far have never managed to do anything awful to it (though maybe I can just count myself lucky). Design aside, it has a fatal flaw, which is the licensing. Since it is GPL, Apple has to be cautious about it. While personally I doubt that there really is an issue with infecting the whole system, since NeXT and Apple have both suffered the wrath of the FSF's attacks (the FSF went after NeXT over not releasing the ObjC changes to gcc, and Stallman's rants about porting GNU software to A/UX seemed downright hostile), I can see why their lawyers are cautious. Plus Apple has a lot of IP on the line that they really do need to protect, since they could get sued by their shareholders even if legal did not balk.

      BTW - you can actually use dpkg already with Mac OS X by installing fink [sourceforge.net], though it is an external project. It works well and has a fair number of packages. I use it and highly recommend it.
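
      For the curious, day-to-day fink use looks roughly like this (the package name is only an example, and the last line depends on a prebuilt binary .deb existing):

      fink list gimp               # search the package descriptions
      fink install gimp            # fetch the source, build it, and install the result as a .deb
      sudo apt-get install gimp    # or grab a prebuilt binary package instead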

      The dependency and dependency-resolution system - dpkg has the most advanced dependency system known to Unix, no doubt about that. To solve these dependencies, dpkg goes to its list of package locations (complete with HTTP and FTP locations, CD-ROMs, etc. if necessary) and grabs the required packages from the net (the user is prompted on this, of course).

      Dpkg doesn't do that; APT does. It's not that these aren't great and massively useful features, it's just that, like urpmi, APT works its magic across RPM (the LSB standard packaging system, which many app packagers primarily package for) too. I maintain an apt repository for my workplace with around 3000 packages, all in RPM format for Red Hat 7.2, and it works like a charm. Debian's policies are portable too - Conectiva has a similar set of guidelines. The unique advantage of dpkg is suggested/recommended dependencies, something RPM desperately needs (Red Hat themselves cheat and use the `comps' file to provide this logic in the installer, but we users don't have that luxury). RPM has some unique advantages too, though - transaction handling in the database (thanks to DB3) does wonders for my peace of mind. In the end, I'll stick with RPM and apt-get because of the LSB stuff and availability, but I do hope they add suggested/recommended dependencies soon.
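
      For reference, an apt-on-RPM setup is driven by the same sources.list mechanism, just with rpm-type lines; a sketch (server name and component names are made up):

      # /etc/apt/sources.list on the Red Hat 7.2 boxes
      rpm http://apt.example.com/apt redhat/7.2/i386 base updates
      rpm-src http://apt.example.com/apt redhat/7.2/i386 base updates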

      Actually, if you want to see a system which kicks both their arses in many ways, look at QNX. They have apt-like features, a nifty `package filesystem', and a GUI installer that really craps all over every other software install system I've ever seen.
    • Re:Is dpkg THAT bad? (Score:2, Informative)

      by ashtonb ( 240268 )
      The dependency and dependency-resolution system - dpkg has the most advanced dependency system known to Unix, no doubt about that. To solve these dependencies, dpkg goes to its list of package locations (complete with HTTP and FTP locations, CD-ROMs, etc. if necessary) and grabs the required packages from the net (the user is prompted on this, of course).

      Umm, it's apt that does all that.

      (note: Debian isn't updated often, so this is generally unappreciated)

      Debian is being constantly updated. If you are referring to the 'potato/stable' branch, then it is rarely updated, but 'woody' and 'sid' are being updated very often.

      I'm not sure what else there is that makes it good. But RPM certainly doesn't have these features.

      There is a version of apt-get for RPMs that has recently been released. Not quite at the Debian level, but still better than nothing.

      It's buggy as hell - it's easier than signing up for AOL to nuke your system this way (in other words, it happens quite often by accident).

      I've never experienced any bugs in either apt or dpkg, though I've never used the sid/unstable branch. (And if I had used sid/unstable, I would have no right to complain about bugs, just to report them on bugs.debian.org [debian.org].)

      No good front-ends - There is no good program to browse available packages, install them, enter configuration information (more on that in a sec) and remove them. You should enter the package you want to install. a wizard is displayed, it grabs the package from a mirror or local source, solves dependecies, installs it and any dependent packages, configures it, and exits.

      Aptitude, Deity? What is wrong with them? (Make sure you get them from woody)

      Configuration - dpkg has a system that allows the package to prompt for a few options before it is installed. This is a good thing, but the packages usually don't ask enough. Users need full customization (nothing nitpicky, big stuff), so you don't have to manually edit configuration files by hand.

      If every package asked many questions then you would never finish an install. Anyway, Debian lets you choose the amount and importance of the questions you are asked; that's what debconf is all about. And compare it to Red Hat, where you aren't asked any package-specific questions during install.

      Available packages - this is where dpkg falls flat on its face. 95% of Unix packages are RPMs; that never helps. A unified packaging system needs to be put into place.

      Woody has 8602 packages atm. Debian does have a unified packaging system. All packages use the same package format, are found, retrieved, and installed using the same applications, and are kept together in a central location ( ftp.debian.org [debian.org] - and its mirrors). Nothing else compares.
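
      A quick illustration of the "same applications" point (the package is chosen arbitrarily):

      apt-cache search window manager   # search those 8602 package descriptions
      apt-get install sawfish           # fetch it, plus dependencies, from the nearest mirror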

      I dunno what I forgot?

      I dunno; what other Debian fallacies can you think up?

  • Doesn't OS X only run on G[3,4,5] processors anyway?
    Wasn't the whole point of the FreeBSD ports section that I could have a version optimized for my arch?
    If so, wouldn't binary versions suffice, e.g. apt?
    I like apt, but I would love to have the option to have stuff built for my box only; I just don't see the point for a platform where the processors are more similar. Or am I missing out on something?
    Is a binary optimized for a G3 different from one for a G4?
    • Re:speed gains (Score:2, Informative)

      by memoryhole ( 3233 )
      Absolutely! sorta.

      The G4 differs from the G3 in a lot of piddly little ways, and one great big, huge way. The G4 has the AltiVec engine (a vector computation unit, alongside the integer and floating-point units). However, it's a little specialized and (afaik) there aren't any compilers that automatically generate AltiVec-optimized code. On the other hand, programmers can use AltiVec (vector engine) accelerated functions, which will then be much faster on G4's than on G3's.

      As an example of an AltiVec-accelerated function, look up the man page for writev(). Basically, it's a vectorized (and thus accelerated) version of write().
      • While it is conceivable that writev might be implemented using some AltiVec insns on G>=4, writev really doesn't have anything to do with AltiVec. writev is an optimization introduced in the pre-BSD days, to allow one to make several writes with just one syscall. The 'v' in writev is the vector of regions to write in the file.
      • writev has nothing to do with AltiVec, really. It is an ancient Unix optimization in which a number of write(2) operations are combined in order to save on syscalls. It's like an old vector machine's scatter/gather, but for memory-to-disk operations instead of register-to-memory operations. One could conceivably use some AltiVec insns in the implementation of writev, but one could do that for any syscall implementation, really, so it doesn't bear directly on this.
  • Idea seems nice (Score:2, Insightful)

    by I_redwolf ( 51890 )
    But in practice I just don't see how it's gonna be any different in the long run. I mean, with the XML you'll be able to do a lot more stuff, a simple database, etc., but it starts to get a little bit big for a porting system. How long until this becomes obsolete because the database is too big to search effectively? I like it, but I just think its implementation is gonna be the hard part. Then again, implementation is always the hard part.
  • We see a system, and our first impression (as Open Source freaks) is: I can change this here, tweak that a bit, and improve the system.

    This is great on an open system that can be changed by anyone at any time, and if someone screws up and a feature gets ruined, it can be easily changed before it is really catastrophic.

    On a production OS that will be used by people who don't know what a CPU is, and can't tell .mp3's from .tar.gz's, a slightly different paradigm is needed. There are worse ideas than keeping a tried and true system. Why not just keep the old system that seems to work well? There are misfeatures, but that will always be true, and at least they know what they are, whereas a new system could have hidden problems.

    Apparently he thinks it is time for something more advanced, and he gives some ideas about what that might look like. Does anyone else have good ideas?"


    He doesn't really say this; he suggests modifications, including:

    Metadata in XML files instead of the Makefile

    Front-end development facilitation

    Use a database to keep the descriptions, to make them easier to use

    All of these are manageable to implement separately on FreeBSD or wherever before putting them into a new OS, where a screwup would be a huge disaster.
  • The Arusha Project (Score:4, Informative)

    by hoggy ( 10971 ) on Wednesday February 20, 2002 @08:20PM (#3041106) Homepage Journal
    Just to trumpet a project I'm involved in, The Arusha Project [sourceforge.net] has some ideas similar to Jordan's.

    We use a simple XML file-based (i.e., you can edit everything with vi) object-oriented database. The project isn't just about package management, but we implemented a full multi-platform build-from-source-and-install-sitewide package management tool. It also handles dependencies etc.

  • by ChaosMt ( 84630 ) on Wednesday February 20, 2002 @08:29PM (#3041151) Homepage
    I'm happy that someone so capable is thinking about this. However, BSD ports and package systems are quite good already - lean and mean. On my OpenBSD box it is quite simple: either pkg_add ftp://url/package_file or env FLAVORS="option1 option2" make install. Elegant, simple, lightweight and powerful. Yesterday I did a big PHP build with a BUNCH of dependencies and sub-dependencies - and it handled them all beautifully. A round of applause for OpenBSD and the port maintainers, please!

    What I would hate to see are any major revisions if it's just gonna add some feature; I would rather see that time spent on developing the ports and packages themselves. Make is a good, simple, foundational and almost always present solution. Adding other languages would be a waste of time IMHO.

    Let me condense what I think should be pursued from the ports perspective: documentation and ease of use. One can always make READMEs and get mini-descriptions, but that really should be expanded upon, both for beginners and for seasoned users who just don't know what that software is about. It would be nice to have some options, like an info tool that would go through the ports tree and build more verbose information. If those documents are built in a consistent manner (such as XML), then any ol' front end can be built to pull the info on the port and automate building the port and the flavors available. For example, a simple curses interface that informs you of the dependencies that will need to be built first, estimates the size, and gives you a list of flavors to add into your build. Hit OK and it monitors the progress for you, logs the process and keeps the messages out of sight (for those who get scared easily).

    I agree that something should be done to be able to automagically build a package from a port. I think this area would be the best to pursue. Even better, if we BSD types could get a system like checkinstall/installwatch [asic-linux.com.mx] consistently - not most of the time, but consistently - working on BSD. This project is essentially a wrapper script that records everything make install does. In its current form, it gives you the option of building an RPM from that make install. What should be pursued is making this work -well- on BSD, with the option to build a package along with documenting its dependencies and/or recording the install info into the existing system, so that all one has to do to remove what you just built is 'pkg_delete'. THAT would be cool!!
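
    For what it's worth, the checkinstall workflow described above is roughly this (exact options vary between versions):

    ./configure && make
    checkinstall make install    # runs the install under installwatch and builds a package from what it recorded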

  • OpenPackages (Score:3, Informative)

    by Anonymous Coward on Wednesday February 20, 2002 @08:38PM (#3041191)
    None of these ideas are new. We at the OpenPackages project have already discussed these ideas and more. We have a pretty solid plan, but unfortunately no one seems to have the time to turn the proposals into a design document and get going. Like Jordan said in his post, and as I said in my follow-up, time is the critical issue. This is a big job.

    Here are some references I included in my darwin-devel post:

    http://openpackages.org/html/pkg_design.php
    http://openpackages.org/pipermail/op-tech/2001-April/000764.html
    http://openpackages.org/pipermail/op-tech/2001-May/000826.html
    http://openpackages.org/pipermail/op-tech/2001-December/001454.html
  • by voisine ( 153062 ) on Wednesday February 20, 2002 @08:39PM (#3041194)
    Check out the fink project

    http://fink.sourceforge.net/ [sourceforge.net]

    600+ OS X ports so far, automatic updates, database indexing, built on top of dpkg.
    • Hmm. Not to discount fink--I use it myself--but I had the impression that that's the sort of system that Jordan was speaking against. It's decidedly "first generation" (as one poster later in the darwin-dev thread distinguished).

      Jordan was calling for more advanced internals (XML-based index, separation of data from control from execution engine, etc.), and not just smooth functionality, which has evolved to a good point in existing systems. Truth is, fink is very inflexible (e.g. little choice in install directory), offers very limited individual configuration options, and has its data entwined with its execution engine.

      What Jordan suggests may even imply a step back in functionality at first, but I do believe it's the way forward, long-term.
  • Many Unix users seem to think existing solutions are great. You know, I've taken to Cocoa since I got my Mac last month, but there is one thing that drives me crazy, and that's figuring out this freaking package manager. It's not easy for people used to Windows Setup or other installation packages for Windows development tools.

    The problem is UNIX isn't designed for the average user. When you look at it from a Unix perspective, it works great. When you look at the rest of the world, it doesn't. And anything that's single-platform does not create a big issue; you won't see it used on all your open-source console apps, mainly Mac OS X applications. This is definitely needed in Mac OS X.
    • You can wail on about this average user, but you must be careful with this cliche, because the implicit fallacies are that there is an average user of these systems, that they have far less experience than you or I do, and that they aren't already happy with what they are using now. If there is anything more certain that can be said about these average computer users, it is that they probably don't want to change operating systems right now. In fact, not only would there quite possibly be no reason for them to switch operating systems right now, but it would mean erasing the skills they have learned using the current operating system. It's no coincidence that those who do migrate to other platforms have little to lose. If these so-called average users really did spend all their time surfing web pages and sending email, there would probably be a lot more migrators to other operating systems.

      I think it is useful to consider a few things I learned in a Cultural Geography class last semester. I know these things are pretty much common sense, but I think it's useful not only to consider these ideas but to introduce some new terms when dealing with them (rather than using impoverishments like "average user"). When people migrate from one region of a country to another, they do so for a number of reasons: there are push factors and pull factors.

      So why would someone move to a Free operating system? It seems that freedom itself isn't much of a pull factor (but this would surely change once many of these software laws and licenses are really enforced against end users, not just distributors).

      Let me say something about ease of use. While it would seem to be an obvious pull factor, the days of easy-to-use general-purpose operating systems are long over, I think. Perhaps the first Macintoshes were among the easiest systems to use, and the reason for this is quite simple: the needs and expectations of users have gone up quite a bit since then. While I have never used these early computers, nor do I know the intentions of the Apple staff (these things are probably clearly documented somewhere... I'm too lazy to look right now), I would suspect that they were trying to make it as easy as possible to type out documents with relatively sophisticated typesetting (compared to typewriters!) and then to file these documents into a filing system.

      Today's systems are expected to do quite a bit more. Many of the posters here on Slashdot carrying on about what these operating systems need to be successful (by whatever definition of success; most do not say) give examples:

      • Easy to use GUI
      • Nice aesthetics (theming, skins)
      • Compatibility for all the Xs, Ys, and Zs
      • High Performance...it seems that it isn't important that the system performs faster but rather that the interface is responsive
      • Stability...the system should never crash or rarely crash
      • Device support...it has to support everything or it isn't any good
      • Not made by an overwhelming tyrant
      • Licensed under a correct license...also, even with all of the above, the OS must not cost a penny
      • A nice web browser. It must support all the standards and be better than any other browser on any other platform
      • It must have all the applications that slashdotters believe this average user spends all his or her time using. Whether this is an office suite, a Photoshop clone or 3D games depends upon the slashdotter's mood, the time of day, and the phase of the moon.

      The paradox is that this average user needs all of this. This seems extremely unfair to anyone trying to implement an operating system...nowadays to any team of programmers or consortium of developers contributing to an existing free software project.

      Now let's consider what's relevant about talking about the average user. Like I said before, I doubt this user would switch his or her operating system for any reason. This is because for every day this user masters his/her OS, the push factor from every other OS becomes stronger and stronger. Unless there exists a push factor from the OS he is currently using, he's gonna stay.

      So let's forget this average user, since he isn't relevant or even interesting. Let's consider, instead, a different class of users: a class of users who might, or will definitely, switch to another operating system. Now, awaiting to be smacked around with a stack of statistics proclaiming otherwise, I would guess that this set of users would have the following things more or less in common:

      • More experienced with computers
      • More likely to be computer literate, even more likely to be a power user
      • Is curious what else is out there
      • Finds himself reading up on operating systems on the internet
      • Condenses each OS into a strict list of must have features.

      And where would you find this average set of OS migrators? Probably on the internet: in newsgroups and web forums. Specifically, you would find that many of these people read and post to slashdot regularly.

      And that's the point of this entire post. I find it interesting to hear slashdotters condemn the intelligence of average users, how they can't program, or how they can't figure out the command line. This might be true, but they are revealing their own experiences more than anything else. They are their own breed of software users.

      In conclusion, You Are the Average User.

  • Jordan, I've been using FreeBSD since about '94. I can't really remember it being without ports. It shouldn't be too hard for a hacker of your class to make a GUI front end for it. Or at least some of the Apple hackers could. Hell, I wish I could, but I've given up on anything more complex than a PHP/database-driven website. All I know is, the FreeBSD-style ports system has saved my ass many, many times. Thanks for all the cool stuff!
    • There is one bit in your post I thought about too for quite some time:
      A GUI front end for the ports collection etc. would be great! Especially for the folks coming from point-and-click MacOS 9.
      Having used Fink for a while now, I have to admit that it is indeed pretty cool, but I can tell you that a *lot* of people I know that are migrating from MacOS 9 to X right now won't touch anything like it at least for years.

      Then again, they may not be exactly *the* target audience, but hey, more users is always a nice thing.

      So when Jordan and his "helping hands" are building the next-generation ports collection basically from scratch, it would be nice to consider an optional graphical front end for it too!
    • A GUI front end does exist and it is quite easy to use. Check out this article by Dru Lavigne.
      A Look Through the Ports Collection [onlamp.com]
  • by bug1 ( 96678 )
    Hubbard states:
    "1. All the descriptive text which current goes into a Makefile like the
    port's name, version, dependencies, URL for source bits, and so on,
    should go into an XML file. This immediately allows the port to be
    indexed, documented and modified from automated tools which traverse the
    "ports collection" (sorry, I have to keep using that terminology since
    it's what I'm most familiar with :) for any number of potentially
    interesting reasons. The reason a lot of these tools don't exist for
    *BSD today is that extracting data from Makefiles generally sucks from a
    parsing perspective so people aren't encouraged to get too creative."

    To rephrase: metadata should be in XML; this makes it much easier to process than if it has to be extracted from Makefiles.

    But why does it have to be XML (or SGML or RDF, for that matter)? The only reason I can think of is that there are pre-existing tools to parse it; the tradeoff is that any markup language will bloat your metadata and make it unreadable.

    Shouldn't a packaging system be important enough to warrant the development of its own parsing routines, i.e. just design it to be the best it can be?

    • I'm sorry, this post makes absolutely no sense. Why is it better to spend the effort designing a custom parser when reliable, open-source XML parsers are a dime a dozen? Why is XML (for which parsers can be integrated into just about anything) worse than make (which depends on a single utility available virtually nowhere outside the command line)? XML can be made human-readable if necessary. And finally, if your package is so small that adding excess metadata will make it unreasonably large, who really gives a shit?
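
      As a purely illustrative sketch, the kind of descriptor Hubbard describes might look something like this (element names are invented here, not any project's actual schema):

      <port name="figlet" version="2.2.1" category="misc">
        <master-site>ftp://ftp.example.org/pub/figlet/</master-site>
        <maintainer>ports@example.org</maintainer>
        <comment>Print strings in large fancy ASCII-art letters</comment>
        <depends type="build">gmake</depends>
      </port>

      Any XML-aware tool can index or rewrite a tree of files like that without ever having to parse a Makefile.
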
  • I know that the bundles used for the upper layers of MacOS rely on the Cocoa (NeXTStep) libraries, but couldn't something similar be done for Darwin? At least that way packages would be compatible between the Desktop and CLI versions of the OS.
  • Is http://www.openpackages.org/ dead? Why did it die?
  • I have been using Gentoo Linux for a while now... It includes a package management system called Portage. Portage is basically a reimplementation of the BSD Ports system. Its goal is to become much more powerful than ports ever was.

    http://www.gentoo.org

    I have to say that Portage is by far the easiest package management system I have used so far.

    One nice thing about Gentoo, btw, is that the entire distribution builds itself when you install, downloading the latest packages from the net... so your installation is always current.

    Its also incredibly easy to customize.

    You start off with a very basic system... Then say you want KDE? You would type "emerge kdebase/kde"

    and Portage will take care of downloading, compiling and installing everything you need (including X) to have a working KDE desktop.
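
    For comparison, the whole cycle looks roughly like this (flag spellings are from the Gentoo of that era, and the package atom just echoes the one above):

    emerge rsync                   # refresh the local portage tree
    emerge --pretend kdebase/kde   # show everything that would be fetched and built
    emerge kdebase/kde             # then actually build and install it all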

    ps: Anyone else notice that slashdot is now 'apple' candy like? LOL ERaa.. when did that happen?
    • Portage is basically a pythonized BSD ports system. To the end user there is no advantage in one over the other. To the package developer it comes down to make/sh versus python.

      • You obviously haven't looked at Portage or tried to use it. It's just a wee tad more powerful than BSD ports, but don't take my word for it ;)

        I have tried looking through your site and downloading some sample code and looked through it. All I see about your project is some "sample xml" and lots of comments about "sysadmins working together" and "value add".

        Quite frankly, all I see is some XML with little actual content (XML for the sake of XML?) and a lot of hot air. Perhaps you could point me to a working example or a howto showing a sysadmin how to build a working production system using your... erm... technology? :)

        (BTW, if you haven't, I'd recommend playing with Portage... you can even give it a whirl on an existing Linux install, just chroot it per the instructions. It might give you a few ideas :)

      • Whoops... I am sorry... I got your name mixed up
        with someone else. So ignore the stuff about "the other project". I would still recommend checking out Portage, though, because it is more than just a rewrite of ports in Python :)

        Sigh... not a good night for me :(
        • Thanks for clarifying. I was seriously confused for a bit :-)

          I've tried to try to use portage, but I never could get Gentoo installed. I've built LFS before, but my problems stemmed from unstable code and my lack of willingness to go in and fix the scripts and stuff so it would build the way it was supposed to. This was rc6 I believe. I'm going to wait until the final release and try it again then.

          In the meantime, reading over the portage information, I can see where it may be a bit more beneficial than ports for the packager, but I still can't see where it's advantageous to the end user. The "profile" sounds nice, but is it that much different from editing the excellently commented make.conf? The basic official ports system doesn't do upgrades well, but using the portupgrade package, it's a snap. I expect it to become an official tool in FreeBSD 5.0.
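
          For anyone curious, the portupgrade route goes roughly like this (flags from memory, so check the man page):

          portversion -l '<'   # list installed ports that are older than the ports tree
          portupgrade -arR     # rebuild everything out of date, along with what it needs and what needs it
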
      • I have used the FreeBSD ports system for quite a while, I am using fink on Mac OS X, and I have had a Gentoo Linux machine up and running for a couple of weeks now. I have never created a FreeBSD port myself, but I have created 2 fink packages and already dozens of Gentoo ebuild packages. Gentoo's Portage system does not solve the multi-platform problem, but IMHO it's the nicest packaging system I have used so far, both in terms of administration and in building packages.
  • Jordan said "I think 10,000 entries is going to be something of a stretch even for the FreeBSD ports team, but I don't see that number being entirely improbable for some Macintosh equivalent since there are a lot more Macfolk than there are FreeBSD users." This comment tickles at a core problem with the transition from OS9 to OSX for many Mac users. There may be more Macfolk than FreeBSD users, but how many of them want to rethink package utilities? A lot of Jordan's comments are good suggestions (for instance, implementing the new standard of XML into the package process), but it seems that this problem will be tackled by *nix users like himself moving over to the OSX platform, and not by OS9 users delving into *nix for their first time. Still, it IS a good time to rethink things before momentum makes it difficult to change.

    I also agree with him that Apple should have spearheaded the process, and they probably will in time. Apple has always focused on user-friendliness first, then streamlined what was under the hood last. Compared to the past, Apple seems to be responsibly balancing the development of OSX across the board. However, if OSX is to succeed with the typical Mac audience, it will need to be a lot more stable and user-friendly for the simplest Mac user. I can't imagine my parents enabling their root account to reconfigure folder permissions as I had to do recently. Heck, my dad can barely figure out his email, never mind launching a shell and working from the CLI. I imagine streamlining the package utility mechanism is low on Apple's priorities. Still, it's comforting that Jordan is mulling the problem over.
    • (snip) A lot of Jordan's comments are good suggestions (for instance, implementing the new standard of XML into the package process), but it seems that this problem will be tackled by *nix users like himself moving over to the OSX platform, and not by OS9 users delving into *nix for their first time. Still, it IS a good time to rethink things before momentum makes it difficult to change. I also agree with him that Apple should have spearheaded the process, and they probably will in time.(/snip)

      Don't forget, Jordan Hubbard now works for Apple. So, technically, Apple IS spearheading this right now.
    • I can't imagine my parents enabling their root account to reconfigure folder permissions as I had to do recently. Heck, my dad can barely figure out his email.

      I can't imagine your dad having to enable root or reconfigure folder permissions. You didn't have to enable root to do that either - Apple wants you to use sudo for that kind of thing rather than enable root.

      That being said, Apple really should integrate these things into the GUI even if the average user will almost never run into the problem. I don't have the most up-to-date version, but last time I looked I had to change permissions from the command line. There should be a GUI way to do it (from Get Info, perhaps?), and EVERY time you try to do something you don't have sufficient permissions for, a dialog box should pop up asking for your administrator username and password so you can do it anyway.
      • > and EVERY time you try to do something you don't have sufficient permissions to do a dialog box should pop up to enter your administrator username and password so you can do it anyway.

        I agree with you. When I was setting up a network between my associate's computer and mine a while ago (something I only needed when we got together), I copied new permissions (in OS9) to all the folders on the drive that I wanted locked out of the network. This seemed like a good idea at the time but later proved disastrous. Not only did my Mac take 20 minutes to turn file sharing on every time I booted, but OSX was seriously messed up. I couldn't even launch the Classic layer for lack of permissions. OS9 had reconfigured OSX's folder permissions.

        A few calls later to tech friends and I learned that Apple's file sharing wasn't implemented in the best way. My Mac was checking each and every unique directory every time it booted when turning on file sharing. So I reinstalled OS9 and just used a shared folder. Now file sharing turns on in seconds. I still need to reinstall OSX. Enabling root and altering permissions didn't fix the problem.

        With that history in mind, I can't imagine I'm the first person to copy new permissions across a drive. OS9 shouldn't affect OSX that way. I figured things out quickly and fixed them, but I'm new to OSX and "sudo" isn't in my vocabulary yet. It's not like OSX comes with any docs either. It's just not ready for primetime.

        The point of all this, and how it relates to Jordan's comments, is that the typical Mac user is NOT a *nix user. Apple's trying to sell to two different markets. It's a tricky balancing act, IMO. While the engine under the hood needs to be souped up to the max and fully featured to excite *nix users, the dashboard needs to be accessible to people like me who are new, and worse, people like my parents who just point and click and expect things to work, without knowing to sudo instead of enabling root. :)

        D

  • Good package management is really going to help in competing with the status quo (ahem...Microsoft) and helping open source projects gain mainstream acceptance. If installing software from packages were as easy as everything else on the Mac, then the advantages of the system would sell itself and gain many Windows converts.

    Open source software continues to improve by leaps and bounds. Unfortunately, the uninitiated tend to be terribly confused about how to get it, install it (including installing dependencies, etc.), update it and maintain it. I think the problem stems from a few things, including:
    - The somewhat arcane nature of open source software and Unix, keeping out those who aren't in on the joke, so to speak.
    And, more importantly,
    - Competing package management systems.
    I think competition is a great thing. I think it is far too late to advocate abandoning the existing ports collection, RPM, deb, etc. systems, because they are entrenched. But the end result is that a project must have someone who does packaging for all these systems if it wants full exposure.

    This is why I think we need something like a package repository run by volunteers who create, maintain and store the packages. If it were a single system (distributed across many servers around the world, of course) then you could have a single, standard interface for searching and downloading packages. This way, if someone knows how to use the system on Red Hat, they'll know how to use it on OS X.

    Eventually, a dominant package format may emerge. Hubbard has some good suggestions (XML descriptions are a no-brainer). Ultimately, the ease of use of this stuff is key. You want developers to be able to serve all package communities with as little effort as possible, and you want users to have the most comprehensive access to packages available.

    Once this happens, people buying Macs or installing Linux will have a huge advantage over their Windows-mired brethren, because once they log in they'll be a couple of clicks away from thousands of ever-evolving applications, while Windows users will be thousands of dollars away from a couple of applications.
  • I, personally, think the FreeBSD Packages/Port system is damn near perfect from a user/sysadmin standpoint.

    I've tried fink, and it seems terribly unintuitive and clunky to me, and I don't really use it, preferring to get packages from macosx.forked.net, because they just install, and I don't have to screw with funky menus that don't work simply, and I don't have to hunt for fink packages that aren't in my menu even though I grabbed the latest list.

    Sure, I could spend the time to figure fink out and get it working properly. Hell, I could be using netcat to write this comment, but I'm not going to.

    People who actually need to get stuff done, need intuitive and simple tools. Fink and the like are fine for home hackers/users, but when you need to get stuff done, fink doesn't cut it, I'm sorry.

    FreeBSD Ports/Packages is very simple, and easy to use, and gets the job done well. It's probably the main reason I use FreeBSD servers (aside from the insane levels of stability).

    I can understand Jordan wanting to move above and beyond, I just ask that he keep things simple and intuitive. Something I can get around in with a small, half-page cheat sheet.
    • Fink works really well. It doesn't take much time at all to figure out, especially if you have used dpkg or dselect. I've never really used FreeBSD's ports system, though, but in all honesty I've not really had any problems with fink. It's done everything that I've needed it to do.
    • Correct me if I'm wrong -- and I may very well be, as I haven't had the pleasure of using Fink -- but as it is based on apt/dpkg, why should you have to use menus at all?

      To elaborate somewhat, I understand that Fink supports apt-get in all its glory, and as such apt-get install (perhaps prefaced by an apt-get update to get the most recent listing) should theoretically install any package supported. Moreover, apt-cache search and related commands would be invaluable at discovering the names of packages (sometimes they aren't immediately obvious).
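
      In other words, something like this ought to work without ever opening a menu (the package name is only an example):

      apt-get update               # refresh the package lists fink knows about
      apt-cache search irc         # find the real name of the package you're after
      sudo apt-get install xchat   # install it, dependencies and all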

      On a side note, I use OpenBSD for my firewall and let me simply say: I love the ports tree. Really a stroke of genius on Jordan's part.

      However, the simple truth is that the average user we're all fond of blabbering about simply isn't equipped to deal with compilation as a concept. Sure, it may just be a simple 'make' command to us, but the idea of compilation is necessarily foreign and scary to the novice who simply wants some extra software.

      And what exactly is the point of having each end-user compile his own packages? MacOS X only runs on Macintosh hardware, so cross-architectural support (which is what Ports was designed for) is sort of moot. Binary packages install more quickly, and remove any possible problems in compilation (since you needn't compile).

  • I think that a modern packaging system should have some kind of subpackages. So if you have a package for GIMP, for example, you should be able to choose which localizations you want to install.

    Basically some parts of the package should be optional.

    Of course this could be extended to binaries as well. So you can have a single package for several platforms, the package installer chooses the binary for the correct platform.
    • I second the motion!

      Subpackages and package variations are two major things missing from all the candidate systems. I mean, Windows can manage to do this, why can't we? Why can Windows users select "custom" during install but we have to take whatever the packager decides to give us?

      One example: Dia. Under FreeBSD the port and package require Gnome, but Dia will build just fine without the Gnome libraries. Every time I want to install or upgrade Dia I have to go in and edit the Makefile.

      Proposal 1A: All meaningful configuration options should be easily available to the user of ports and any successor to ports. All major configuration options should be easily available for packages. Let me choose "custom->without-gnome" when I install Dia.

      Another example: KDE. This already exists as a meta package in FreeBSD, but a metapackage is not the same as a set of subpackages. Uninstall KDE and nothing actually gets uninstalled, as all the dependencies are still there. At an even finer grain of detail, maybe I don't want to install everything in kdegames. Maybe I just want shisen-sho and patience. Allow me a way to install just what I want out of a package. The Debian way of splitting packages into smaller packages (instead of subpackages) is not an optimal solution.

      Proposal 1B: A port or package that contains optional or secondary components should allow the user to choose what components they want installed.

      The defaults should remain for obvious reasons, but the package manager should allow -full, -minimal, and -custom installs. For a CLI installer, these could be switches (with -full being the default), and for GUI installers they should be selectable options. For some packages it won't make sense to have more than one option, but for most it will.

      These proposals may mean the replacement of metapackages with superpackages. It will mean more work for the package developers, but that's outweighed by the benefits to the user.
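
      To make the proposals concrete, here is a purely hypothetical command-line syntax - no existing tool works this way:

      pkg install dia --without gnome                   # Proposal 1A: drop a major optional dependency
      pkg install kdegames --only shisen-sho,patience   # Proposal 1B: install selected subpackages
      pkg install kde --minimal                         # -full stays the default; -minimal/-custom as switches
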
    • While this would theoretically be good, I fear that it isn't really feasible. Why?

      Because most open-source programs were not designed this way. Whether a library dependency is compiled in or not changes the resulting binary; this means that, say, nethack (compiled with X11 support) and nethack (compiled without) are fundamentally different binaries.

      As such, these two would have to be different executables, of similar size. Including both in one package at download time would of course waste bandwidth.

      But the situation gets exponentially worse. Consider a package which has n options (all independent of each other, for the sake of computation). Then we have 2^n different package configurations: with just ten options, that's already 1,024 distinct builds! Clearly, building each and placing them all in the same package to allow users to choose would eat bandwidth unacceptably. And we haven't even added all the different architectures (although on MacOS X there is admittedly only one).

      So let's say instead that each of these packages exists somewhere, and the package you have simply references them, and (at your discretion) asks you which options you want compiled (or else picks the package best supported by your system), and then downloads the appropriate version.

      How is this really any different from our current setup, where we instead have several versions of the same package for commonly requested options?

      • I agree with you to a certain extent. But there are a lot of things other than binaries that could be subpackages: localisations, extra icons, extra levels (in a game), etc...

        But one solution to the problem of giant packages where only a small part is used is the one used in the NeXTstep installer packages. In the info file you can define a URL that the actual files should be downloaded from. So the package includes all the info about itself - dependencies, size, version, an icon and so on - except the files.
        The NeXT packages were limited to downloading the whole package from one URL, but there's nothing saying that it wouldn't be possible to have each subpackage as a separate download. In fact, that is getting more and more common on Windows; one example is Mozilla. But then there is also a Mozilla package with everything included, so you don't have to download anything.

        /Erik
  • I think for a next-generation packaging system Apple ought to adopt concepts from their frameworks method of binary packaging. A package would contain an XML header relaying to the installation program information like the file contents, checksums, a list of required and a list of optional packages, install and make instructions (including file destinations), a hash of the compressed file, and a dependency list. The installer program would scan the header and ask the user which parts they wanted to install, or install options based on arguments: "installprogram -a packagename" would install the entire package, or some such. Since there's a hash of the compressed file, it could be checked against a hash stored on the server the package came from for confirmation; from there the installer would figure out, using information in the header, whether it had to just cp and chmod some files or compile them from source. Packages could come in source-only or binary-only form, or in combination packages containing both.

    The frameworks the installation would output to (or just plain directories) could contain a header telling which files exist on the drive, and a master list of installations could be kept by the installation program. So you could whip it open and remove GNOME, let's say, and it would consult a relatively small XML file telling it where the GNOME XML resource file is and then proceed to uninstall the files listed there. The XML header for the package could even be compressed inside the file and follow a standard naming convention, so the install program could grab the header out of the compressed file in order to do the rest of the work; this would cut down the space needed for the XML file, since a good majority of headers are being repeated several times and just taking up space.
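
    A purely hypothetical header along those lines (every element name here is invented):

    <package name="somegame" version="1.2">
      <checksum type="md5">(hash of the compressed archive)</checksum>
      <requires>somelib</requires>
      <optional>extra-levels</optional>
      <files>
        <file dest="/usr/local/bin/somegame" mode="0755">bin/somegame</file>
      </files>
    </package>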
