Jordan Hubbard On Next-Generation Packaging
GlobalEcho writes: "Developers associated with Darwin are beginning to think about package management and source building. At issue is whether something like dpkg, RPM or *BSD's ports could suffice, or whether they are all just way too mid-90's. Jordan Hubbard himself (now of Apple) weighed in with his opinions (user and passwd 'archives'). Apparently he thinks it is time for something more advanced, and he gives some ideas about what that might look like. Does anyone else have good ideas?"
Jordan on packaging? (Score:3, Informative)
Perhaps I'm wrong. Nice to see he's still having great thoughts. Hope whatever packaging system they come up with is portable enough to work on a large chunk of systems (linux in various configs, bsd's, solaris, darwin, etc.).
Re:Jordan on packaging? (Score:2)
Is dpkg THAT bad? (Score:3, Interesting)
The dependency and dependency resolution system: dpkg has the most advanced dependency system known to Unix. No doubt about that... To solve these dependencies, dpkg goes to its list of package locations (complete with HTTP and FTP locations, CD-ROMs, etc., if necessary) and grabs the required packages from the net (the user is prompted on this, of course)
--Easy upgrades. No other system allows me to bring my system up to date in less time (note: debian isn't updated often, so this is generally unappreciated)
$ apt-get update
$ apt-get upgrade
(hit y to confirm)
All from the command prompt.
I'm not sure what else there is that makes it good. But RPM certainly doesn't have these features.
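The dependency resolution praised above boils down to computing the closure of a package's declared dependencies and installing them in an order where every dependency comes first. Here is a minimal sketch of that idea in Python; the package names and dependency table are purely hypothetical, and real apt of course adds versioning, conflicts, and network retrieval on top:

```python
# Hypothetical dependency table: package -> packages it depends on.
DEPENDS = {
    "gimp": ["gtk", "libpng"],
    "gtk": ["glib"],
    "glib": [],
    "libpng": ["zlib"],
    "zlib": [],
}

def install_order(package, deps=DEPENDS):
    """Return packages in an order where every dependency precedes
    the package that needs it (a depth-first topological sort)."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in deps[name]:   # resolve dependencies first
            visit(dep)
        order.append(name)

    visit(package)
    return order

print(install_order("gimp"))
# -> ['glib', 'gtk', 'zlib', 'libpng', 'gimp']
```

Asking apt to install one package and watching it pull in half a dozen others is exactly this traversal, plus downloading and unpacking each item as it is reached.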
What it lacks:
It's buggy as hell - it's easier than signing up for AOL to nuke your system this way (in other words, it happens quite often by accident)
No good front-ends - There is no good program to browse available packages, install them, enter configuration information (more on that in a sec) and remove them. You should be able to enter the package you want to install; a wizard is displayed, it grabs the package from a mirror or local source, solves dependencies, installs it and any dependent packages, configures it, and exits.
Configuration - dpkg has a system that allows the package to prompt for a few options before it is installed. This is a good thing, but the packages usually don't ask enough. Users need full customization (nothing nitpicky, just the big stuff) so you don't have to manually edit configuration files by hand.
Available packages - this is where dpkg falls flat on its face. 95% of Unix packages are RPMs. That never helps. A unified packaging system needs to be put into place.
i dunno what i forgot?
Re:Is dpkg THAT bad? (Score:3, Insightful)
No, the dependency satisfaction and easy installation and upgrading is a feature of apt, a frontend to dpkg, not dpkg.
note: debian isn't updated often, so this is generally unappreciated

No, if you use the testing or unstable branches, Debian is updated daily. If you stick with stable, the easy downloading and installation is still good for installing new software.
It's buggy as hell - it's easier than signing up for AOL to nuke your system this way (in other words, it happens quite often by accident)

I disagree. Nothing of this magnitude has ever happened to the systems I've administered, and I haven't heard of it happening to anyone else. If you use unstable, which means you should be prepared for such occurrences, there's the slight possibility of this happening, but that's a problem with the actual software packages, not a problem with dpkg or apt.
No good front-ends - There is no good program to browse available packages, install them, enter configuration information and remove them.

No; dselect, aptitude, and deity are some of the many frontends to dpkg and apt that allow browsing of packages. When using dselect, for instance, you select the packages you want to install or uninstall and go to "Install". It does exactly what you say it doesn't: it grabs the package from a mirror or local source, grabs dependencies, and installs it and any dependent packages. Then, a debconf configuration screen ("wizard") is brought up, in the interface that you've chosen, such as dialog, Gnome, etc., and you can configure it, or it configures itself depending on the level of interactivity you told it you wanted beforehand. Then, it exits.
Configuration - dpkg has a system that allows the package to prompt for a few options before it is installed. this is a good thing, but the packages usually don't ask enough. users need full customization...

The packages ask questions based on the level of interactivity you chose when you configured debconf (or depending on a command-line option when you reconfigure the package). "Big stuff" is what's given to you. If you want to configure everything, editing configuration files is the way to go.
Available packages - this is where dpkg falls flat on its face. 95% of unix packages are rpms. that never helps. a unified packaging system needs to be put into place

Frankly, 8600 software packages in one easily accessible, central repository of stable, well-maintained packages seems like a lot to me. Most packages someone would ever want are there, and of the others, those provided as RPMs can be converted by alien to .deb format. Regardless, this has nothing to do with the quality of the packaging format or the packaging tools, so it wouldn't affect this. Any "next-generation" package format would start with no packages, so dpkg beats it at that.
Re:Is dpkg THAT bad? (Score:2)
I'm sure most of this could be fixed by a more experienced debian user. But the whole point is that it shouldn't need to be fixed in the first place.
No, it's not. (Score:3, Insightful)
It's buggy as hell - it's easier then signing up for aol to nuke your system this way (in other words, it happens quite often by accident)
Huh? I suspect user error. I've been using Debian/dpkg since pre-1.0 days, and I can count the number of times dpkg has had system-wrecking errors on one hand. I can count the number of times it actually wrecked my system on one finger -- after that, I got a little more cautious about upgrading dpkg in the unstable tree (i.e. wait a few hours and read debian-devel). There are ways to tell dpkg to hose your system, but those aren't bugs, those are options with big nasty warnings next to them.
Now, there have been many more occurences of buggy packages screwing things up, but that's hardly dpkg's fault. And if you live on unstable, well, that's what you get.
No good front-ends -
apt-get install aptitude
(Not in stable, but coming soon[1] to a release near you.)
Configuration - dpkg has a system that allows the package to prompt for a few options before it is installed. this is a good thing, but the packages usually don't ask enough.
Again, not a dpkg issue. If the package doesn't provide sufficient configuration flexibility, it's an issue with the particular package.
Available packages - this is where dpkg falls flat on it's face. 95% of unix packages are rpms. that never helps. a unified packaging system needs to be put into place
I don't know where you got that statistic. Yes, maybe 95% of packages you see floating around random websites are in RPM, but I doubt there are 20 times as many software packages available as RPMs vs. debs. Most upstream developers don't provide debs, because there's a Debian developer to do so for them; the fact that mozilla.org has RPMs but not debs doesn't mean there aren't .debs of Mozilla. (I'll allow that the ratio for non-free software is much worse, for fairly obvious reasons.)
Steve
[1] "soon" in Debian terms, at least :-)
Re:Is dpkg THAT bad? (Score:4, Informative)
BTW - You can actually use dpkg already with Mac OS X by installing fink [sourceforge.net] though it is an external project. It works well, has a fair number of packages. I use it and highly recommend it.
Dpkg / RPM doesn't do those things. Apt does. (Score:3, Informative)
Dpkg doesn't do that. APT does. It's not that these aren't great and massively useful features, it's just that, like urpmi, APT works its magic across RPM (the LSB standard packaging format, which many app packagers primarily package for) too. I maintain an apt repository for my workplace with around 3000 packages, all in RPM format for Red Hat 7.2, and it works like a charm. Debian's policies are portable too - Conectiva has a similar set of guidelines. The unique advantage of dpkg is suggested/recommended dependencies, something RPM desperately needs (Red Hat themselves cheat and use the `comps' file to provide this logic in the installer, but us users don't have the luxury). RPM has some unique advantages too, though - transaction handling in the database (thanks to DB3) does wonders for my peace of mind. In the end, I'll stick with RPM and apt-get because of the LSB stuff and availability, but I do hope they add suggested/recommended dependencies soon.
Actually, if you want to see a system which kicks both their arses in many ways, look at QNX. They have apt-like features, a nifty `package filesystem', and a GUI installer that really craps all over every other software install system I've ever seen.
Re:Is dpkg THAT bad? (Score:2, Informative)
Umm, it's apt that does all that.
(note: debian isn't updated often, so this is generally unappreciated)
Debian is constantly being updated. If you are referring to the 'potato/stable' branch, then it is rarely updated. But 'woody' and 'sid' are updated very often.
I'm not sure what else there is that makes it good. But RPM certainly doesn't have these features.
There is a version of apt-get for RPMs that has recently been released. Not quite at the Debian level, but still better than nothing.
It's buggy as hell - it's easier than signing up for AOL to nuke your system this way (in other words, it happens quite often by accident)
I've never experienced any bugs in either apt or dpkg, though I've never used the sid/unstable branch. (And if I had used sid/unstable, I would have no right to complain about bugs, just to report them on bugs.debian.org [debian.org].)
No good front-ends - There is no good program to browse available packages, install them, enter configuration information (more on that in a sec) and remove them. You should be able to enter the package you want to install; a wizard is displayed, it grabs the package from a mirror or local source, solves dependencies, installs it and any dependent packages, configures it, and exits.
Aptitude, Deity? What is wrong with them? (Make sure you get them from woody)
Configuration - dpkg has a system that allows the package to prompt for a few options before it is installed. This is a good thing, but the packages usually don't ask enough. Users need full customization (nothing nitpicky, just the big stuff) so you don't have to manually edit configuration files by hand.
If every package asked many questions, then you would never finish an install. Anyway, Debian lets you choose the amount and importance of the questions you are asked. That's what debconf is all about. And compare it to Red Hat, where you aren't asked any package-specific questions during install.
Available packages - this is where dpkg falls flat on its face. 95% of Unix packages are RPMs. That never helps. A unified packaging system needs to be put into place.
Woody has 8602 packages atm. Debian does have a unified packaging system. All packages use the same package format, are found, retrieved, and installed using the same applications, and are kept together in a central location ( ftp.debian.org [debian.org] - and its mirrors). Nothing else compares.
i dunno what i forgot?
I dunno, what other Debian fallacies can you think up?
speed gains (Score:1)
Wasn't the whole point of the FreeBSD ports system that I could have a version optimized for my arch?
If so, wouldn't binary versions suffice, e.g. apt?
I like apt, but would love to have the option to have stuff built for my box only; I just don't see the point on a platform where the processors are more similar. Or am I missing out on something?
Is a binary optimized for a G3 different from one for a G4?
Re:speed gains (Score:2, Informative)
The G4 differs from the G3 in a lot of piddly little ways, and one great big, huge way. The G4 has the AltiVec engine (a vector computation unit, alongside the integer and floating point units). However, it's a little specialized and (afaik) there aren't any compilers that automatically generate AltiVec-optimized code. On the other hand, programmers can use AltiVec-accelerated functions, which will then be much faster on G4s than on G3s.
As an example of an AltiVec-accelerated function, look up the man page for writev(). Basically, it's a vectorized (and thus accelerated) version of write().
Re:speed gains (Score:1)
Even if it were implemented using some AltiVec insns on G>=4, writev really doesn't have anything to do with AltiVec. writev is an optimization introduced in the pre-BSD days, to allow one to make several writes with just one syscall. The 'v' in writev is the vector of regions to write in the file.
Re:speed gains (Score:1)
It is an ancient Unix optimization in which a number of write(2) operations are combined in order to save on syscalls. It's like an old vector machine's scatter/gather, but for memory-disk operations instead of register-memory operations. One could conceivably use some AltiVec insns in the implementation of writev, but one could do that for any syscall implementation, really, so it's not directly relevant.
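The "vector of regions" being discussed is easy to see in code: writev(2) takes several separate buffers and writes them with one syscall. Python exposes the same POSIX call as os.writev, so here is a minimal sketch (nothing AltiVec about it, as the posters above note):

```python
import os
import tempfile

# Three separate buffers, written to the file with a single syscall
# instead of three separate write() calls.
regions = [b"one ", b"two ", b"three"]

fd, path = tempfile.mkstemp()
try:
    written = os.writev(fd, regions)   # returns total bytes written
    os.close(fd)
    with open(path, "rb") as f:
        data = f.read()
    print(written, data)   # 13 b'one two three'
finally:
    os.remove(path)
```

The kernel gathers the buffers in order, so the file contents are identical to three consecutive write() calls; the only difference is the syscall count.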
Idea seems nice (Score:2, Insightful)
Re:As an OS X user i can say this (Score:1)
Hindsight is 20/20 (Score:1)
This is great on an open system that can be changed by anyone at any time; if someone screws up and a feature gets ruined, it can be easily changed before it is really catastrophic.
On a production OS that will be used by people who don't know what a CPU is, and can't tell
He doesn't really say this, he suggests modifications for his protocol, including:
Meta data in xml files instead of the makefile
Front-end development facilitation
Use a database to keep the descriptions, to make them easier to use
All of these are manageable to implement separately on FreeBSD or wherever before putting them into a new OS, where a screwup would be a huge disaster.
The Arusha Project (Score:4, Informative)
We use a simple XML file-based (i.e., you can edit everything with vi) object-oriented database. The project isn't just about package management, but we implemented a full multi-platform build-from-source-and-install-sitewide package management tool. It also handles dependencies etc.
Also look at Gentoo (Score:2, Interesting)
If it ain't broke, it doesn't have enough features (Score:4, Insightful)
What I would hate to see are any major revisions if it's just gonna add some feature; I would rather see that time spent on developing the ports and packages themselves. Make is a good, simple, foundational and almost always present solution. Adding other languages would be a waste of time IMHO.
Let me condense what I think should be pursued from the ports perspective: documentation and ease of use. One can always make READMEs and get mini-descriptions, but that really should be expanded upon, both for beginners and for seasoned users who just don't know what that software is about. It would be nice to have some option, like `info', that would go through the ports tree and build more verbose information. If those documents are built in a consistent manner (such as XML), then any ol' front end can be built to pull the info on the port and automate building the port and the flavors available. For example, a simple curses interface that informs you of the dependencies that will need to be built first, estimates the size, and gives you a list of flavors to add into your build. Hit OK and it monitors the progress for you, logs the process and keeps the messages out of sight (for those who get scared easily).
I agree that something should be done to be able to automagically build a package from a port. I think this area would be the best to pursue. Even better if we BSD types could get a system like checkinstall/installwatch [asic-linux.com.mx] consistently - not most of the time, but consistently - working on BSD. This project is essentially a wrapper script that records everything `make install' does. In its current form, it gives you the option of building an RPM from that make install. What should be pursued is making this work -well- on BSD, with the option to build a package along with documenting its dependencies and/or recording the install info into the existing system, so that all one has to do to remove what you just built is 'pkg_delete'. THAT would be cool!!
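The record-what-make-install-did idea can be approximated by diffing a filesystem snapshot taken before and after the install step. This sketch is much cruder than installwatch (which intercepts the actual filesystem calls via LD_PRELOAD), and the demo command at the bottom is a stand-in, but it shows the principle of producing the file list a pkg_delete-style tool would need:

```python
import os
import subprocess
import tempfile

def snapshot(root):
    """Set of all file paths currently under root."""
    return {os.path.join(d, f)
            for d, _, files in os.walk(root)
            for f in files}

def logged_install(command, root):
    """Run an install command and return the files it created under
    root -- the manifest an uninstaller would replay in reverse."""
    before = snapshot(root)
    subprocess.run(command, shell=True, check=True)
    return sorted(snapshot(root) - before)

# Demo with a stand-in "install" that just drops one file:
prefix = tempfile.mkdtemp()
created = logged_install(f"touch {prefix}/newfile", prefix)
print(created)
```

A real tool would also have to notice modified and deleted files (by comparing sizes or checksums, not just names), which is exactly why installwatch intercepts calls instead of diffing snapshots.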
OpenPackages (Score:3, Informative)
Here are some references i included in my darwin-devel post:
http://openpackages.org/html/pkg_design.php
http://openpackages.org/pipermail
Fink already does much of what Jordan suggests (Score:5, Informative)
http://fink.sourceforge.net/ [sourceforge.net]
600+ OS X ports so far, automatic updates,
database indexing, built on top of dpkg.
Re:Fink already does much of what Jordan suggests (Score:2, Insightful)
Jordan was calling for more advanced internals (XML-based index, separation of data from control from execution engine, etc.), and not just smooth functionality, which has evolved to a good point in existing systems. Truth is, fink is very inflexible (e.g. little choice in install directory), offers very limited individual configuration options, and has its data entwined with its execution engine.
What Jordan suggests may even imply a step back in functionality at first, but I do believe it's the way forward, long-term.
Re:Mirror? (Score:3, Informative)
When is it good enough? (Score:2, Interesting)
The problem is UNIX isn't designed for the average user. When you look at it from a Unix perspective, it works great. When you look at the rest of the world, it doesn't. And anything that's single-platform doesn't create a big issue: you won't see it used on all your open-source console apps, mainly Mac OS X applications. This is definitely needed in Mac OS X.
OT: The Myth of the Average User (Score:3, Insightful)
You can wail on about this average user, but you must be careful about this cliche. The implicit fallacies are that there is an average user of these systems, that they have far less experience than you or I do, and that they aren't already happy with what they are using now. If there is anything certain that can be said about these average computer users, it is that they probably don't want to change operating systems right now. In fact, not only would there quite possibly be no reason for them to switch operating systems right now, but doing so would mean erasing the skills they have learned using the current operating system. It's no coincidence that those who do migrate to other platforms have little to lose. If these so-called average users really spent all their time surfing web pages and sending email, there would probably be a lot more migrators to other operating systems.
I think it is useful to consider a few things I learned in a Cultural Geography class last semester. I know these things are pretty much common sense, but I think it's useful not only to consider these ideas but to introduce some new terms when dealing with these things (rather than using impoverished terms like "average user"). When people migrate from one region of a country to another, they do so for a number of reasons. There are push factors and pull factors.
So why would someone move to a Free operating system? It seems that freedom itself isn't much of pull factor (but this would change, surely, once many of these software laws and licenses are really enforced against end users, not just distributors).
Let me say something about ease-of-use. While it would seem to be an obvious pull factor, the days of easy-to-use general-purpose operating systems are long over, I think. Perhaps the first Macintoshes were among the easiest systems to use, and the reason for this is quite simple: the needs and expectations of users have gone up quite a bit since then. While I have never used these early computers, nor do I know the intentions of the Apple staff (these things are probably clearly documented somewhere... I'm too lazy to look right now), I would suspect that they were trying to make it as easy as possible to type out documents with relatively sophisticated typesetting (compared to typewriters!) and then to file these documents into a filing system.
Today's systems are expected to do quite a bit more. Many of the posters here on Slashdot carrying on about what these operating systems need to be successful (in whatever definition of success; most do not say) give examples:
The paradox is that this average user needs all of this. This seems extremely unfair to anyone trying to implement an operating system...nowadays to any team of programmers or consortium of developers contributing to an existing free software project.
Now let's consider what's relevant about talking about the average user. Like I said before, I doubt this user would switch his or her operating system for any reason. This is because for every day this user masters his/her OS, the push factor from every other OS becomes stronger and stronger. Unless there exists a push factor from the OS he is currently using, he's gonna stay.
So let's forget this average user, since it isn't relevant or even interesting. Let's consider, instead, a different class of users. Let's just create a class of users who might or will definitely switch to another operating system. Now, awaiting to be smacked around with a stack of statistics proclaiming otherwise, I would guess that this set of users would have the following things more or less in common:
And where would you find this average set of OS migrators? Probably on the internet: in newsgroups and web forums. Specifically, you would find that many of these people read and post to slashdot regularly.
And that's the point of this entire post. I find it interesting to hear Slashdotters condemn the intelligence of average users, how they can't program, or can't figure out the command line. This might be true, but they are revealing their own experiences more than anything else. They are their own breed of software users.
In conclusion, You Are the Average User.
Ports Forever (Score:2)
Uhm..yes ! (Score:1)
A GUI front-end for the ports collection etc. would be great! Especially for the folks coming from point-and-click MacOS 9.
Having used Fink for a while now, I have to admit that it is indeed pretty cool, but I can tell you that a *lot* of people I know that are migrating from MacOS 9 to X right now won't touch anything like it at least for years.
Then again, they may not be exactly *the* target audience, but hey, more users is always a nice thing.
So when Jordan and his "helping hands" are building the next-generation ports collection basically from scratch, it would be nice to consider an optional graphical front-end for it too!
Re:Ports Forever (Score:1)
A Look Through the Ports Collection [onlamp.com]
Why Metadata in XML ? (Score:2, Interesting)
"1. All the descriptive text which currently goes into a Makefile, like the port's name, version, dependencies, URL for source bits, and so on, should go into an XML file. This immediately allows the port to be indexed, documented and modified from automated tools which traverse the 'ports collection' (sorry, I have to keep using that terminology since it's what I'm most familiar with) [...] interesting reasons. The reason a lot of these tools don't exist for *BSD today is that extracting data from Makefiles generally sucks from a parsing perspective, so people aren't encouraged to get too creative."
To rephrase:
Metadata should be in XML; this makes it much easier to process than if it has to be extracted from Makefiles.
But why does it have to be XML (or SGML or RDF, for that matter)? The only reason I can think of is that there are pre-existing tools to parse it; the tradeoff is that any markup language will bloat your metadata and make it less readable.
Shouldn't a packaging system be important enough to warrant the development of its own parsing routines? I.e., just design it to be the best it can be.
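To make the XML-vs-Makefile tradeoff concrete, here is what extracting port metadata from a small XML descriptor looks like with a stock parser. The element names below are invented for illustration, not any actual OpenPackages or Darwin schema; the point is that indexing tools get this for free, whereas scraping the same fields out of a Makefile means writing a parser by hand:

```python
import xml.etree.ElementTree as ET

# Hypothetical port descriptor of the kind Jordan describes.
DESCRIPTOR = """
<port>
  <name>wget</name>
  <version>1.8.1</version>
  <master-site>ftp://ftp.gnu.org/gnu/wget/</master-site>
  <depends>
    <port>openssl</port>
    <port>libiconv</port>
  </depends>
</port>
"""

root = ET.fromstring(DESCRIPTOR)
name = root.findtext("name")
version = root.findtext("version")
depends = [p.text for p in root.find("depends")]

print(name, version, depends)   # wget 1.8.1 ['openssl', 'libiconv']
```

The bloat objection is real (the tags outweigh the data here), but the whole "ports collection" can be traversed and indexed with a dozen lines like these, which is the argument for a standard format over bespoke parsing routines.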
Re:Why Metadata in XML ? (Score:3, Interesting)
What about bundles? (Score:1)
What happened to openpackages.org ? (Score:2)
Portage and Gentoo Linux (Score:1)
http://www.gentoo.org
I have to say that Portage is by far the easiest package management system I have used so far.
One nice thing about Gentoo, btw, is that the entire distribution builds itself when you install, downloading the latest packages from the net... so your installation is always current.
Its also incredibly easy to customize.
You start off with a very basic system... Then say you want KDE? You would type "emerge kdebase/kde" and Portage will take care of downloading, compiling and installing everything you need (including X) to have a working KDE desktop.
ps: Anyone else notice that slashdot is now 'apple' candy like? LOL ERaa.. when did that happen?
Re:Portage and Gentoo Linux (Score:1)
Re:Portage and Gentoo Linux (Score:1)
You obviously haven't looked at Portage or tried to use it. It's just a wee tad more powerful than BSD ports, but don't take my word for it.
I have tried looking through your site and downloading some sample code and looked through it. All I see about your project is some "sample xml" and lots of comments about "sysadmins working together" and "value add".
Quite frankly, all I see is some XML with little actual content (XML for the sake of XML?) and a lot of hot air. Perhaps you could point me to a working example or a howto showing a sysadmin how to build a working production system using your
(BTW, if you haven't, I'd recommend playing with Portage... you can even give it a whirl on an existing Linux install, just chroot it per the instructions. It might give you a few ideas.)
Re:Portage and Gentoo Linux (Score:1)
Whoops... I am sorry... I got your name mixed up with someone else. So ignore the stuff about "the other project". I would still recommend checking out Portage, though, because it is more than just a rewrite of ports in Python.
Sigh... not a good night for me.
Re:Portage and Gentoo Linux (Score:1)
I've tried to try to use portage, but I never could get Gentoo installed. I've built LFS before, but my problems stemmed from unstable code and my lack of willingness to go in and fix the scripts and stuff so it would build the way it was supposed to. This was rc6 I believe. I'm going to wait until the final release and try it again then.
In the meantime, reading over the portage information, I can see where it may be a bit more beneficial than ports for the packager, but I still can't see where it's advantageous to the end user. The "profile" sounds nice, but is it that much different from editing the excellently commented make.conf? The basic official ports system doesn't do upgrades well, but using the portupgrade package, it's a snap. I expect it to become an official tool in FreeBSD 5.0.
Re:Portage and Gentoo Linux (Score:1)
New ways of thinking (Score:2, Insightful)
Re:New ways of thinking (Score:1)
Don't forget, Jordan Hubbard now works for Apple. So, technically, Apple IS spearheading this right now.
Re:New ways of thinking (Score:1)
I can't imagine your dad having to enable root or reconfigure folder permissions. You didn't have to enable root to do that either - Apple wants you to use sudo for that kind of thing rather than enable root.
That being said, Apple really should integrate these things into the GUI even if the average user will almost never run into the problem. I don't have the most up-to-date version, but last time I looked I had to change permissions from the command line. There should be a GUI way to do it (from Get Info, perhaps?), and EVERY time you try to do something you don't have sufficient permissions for, a dialog box should pop up to enter your administrator username and password so you can do it anyway.
Re:New ways of thinking (Score:1)
A few calls later to tech friends and I learned that Apple's file sharing wasn't implemented the best way. My Mac was checking each and every unique directory every time it booted with file sharing turned on. So I reinstalled OS 9 and just used a shared folder. Now file sharing turns on in seconds. I still need to reinstall OS X. Enabling root and altering permissions didn't fix the problem.
With that history in mind, I can't imagine I'm the first person to copy new permissions across a drive. OS9 shouldn't affect OSX that way. I figured things out quickly and fixed them, but I'm new to OSX and "sudo" isn't in my vocabulary yet. It's not like OSX comes with any docs either. It's just not ready for primetime.
The point of all this, and how it relates to Jordan's comments, is that the typical Mac user is NOT a *nix user. Apple's trying to sell to two different markets. It's a tricky balancing act, IMO. While the engine under the hood needs to be souped up to the max and fully featured to excite *nix users, the dashboard needs to be accessible to people like me who are new, and worse, people like my parents who just point and click and expect things to work without knowing to sudo instead of enabling root. :)
D
Package Management is Key For Competition (Score:1)
Open source software continues to improve by leaps and bounds. Unfortunately, the uninitiated tend to be terribly confused about how to get it, install it (including installing dependencies etc), update and maintain it. I think the problem stems from a few things including:
- The somewhat arcane nature of open source software and Unix... keeping out those who aren't in on the joke, so to speak.
And, more importantly,
- Competing package management systems
I think competition is a great thing. I think it is far too late to advocate abandoning the existing ports collection, RPM, deb, etc. systems, because they are entrenched. But the end result is that a project must have someone who does packaging for all these systems if it wants full exposure.
This is why I think we need something like a package repository run by volunteers who create, maintain and store the packages. If it were a single system (distributed across many servers around the world, of course), then you could have a single, standard interface for searching and downloading packages. This way, if someone knows how to use the system on Red Hat, they'll know how to use it on OS X.
Eventually, a dominant package format may emerge. Hubbard has some good suggestions (XML descriptions is a no brainer). Ultimately, the ease of use of this stuff is key. You want developers to be able to serve all package communities with as little effort as possible, and you want the users to have the most comprehensive access to packages available.
Once this happens, people buying Macs or installing Linux will have a huge advantage over their Windows-mired brethren, because once they log in they'll be a couple of clicks away from thousands of ever-evolving applications, while Windows users will be thousands of dollars away from a couple of applications.
Ports Packages Fink Foo Fum (Score:2)
I've tried Fink, and it seems terribly unintuitive and clunky to me, and I don't really use it, preferring to get packages from macosx.forked.net, because they just install, and I don't have to screw with funky menus that don't work simply, and I don't have to hunt for Fink packages that aren't in my menu, even though I grabbed the latest list.
Sure, I could spend the time to figure fink out and get it working properly. Hell, I could be using netcat to write this comment, but I'm not going to.
People who actually need to get stuff done, need intuitive and simple tools. Fink and the like are fine for home hackers/users, but when you need to get stuff done, fink doesn't cut it, I'm sorry.
FreeBSD Ports/Packages is very simple and easy to use, and it gets the job done well. It's probably the main reason I use FreeBSD servers (aside from the insane levels of stability).
I can understand Jordan wanting to move above and beyond, I just ask that he keep things simple and intuitive. Something I can get around in with a small, half-page cheat sheet.
Re:Ports Packages Fink Foo Fum (Score:1)
Re:Ports Packages Fink Foo Fum (Score:1)
Correct me if I'm wrong -- and I may very well be, as I haven't had the pleasure of using Fink -- but as it is based on apt/dpkg, why should you have to use menus at all?
To elaborate somewhat, I understand that Fink supports apt-get in all its glory, and as such apt-get install packagename (perhaps prefaced by an apt-get update to get the most recent listing) should theoretically install any supported package. Moreover, apt-cache search and related commands would be invaluable for discovering the names of packages (sometimes they aren't immediately obvious).
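For reference, the standard apt command-line workflow (on Debian, and presumably under Fink as well) goes roughly like this; the package name is just an example:

```
$ apt-get update            # refresh the list of available packages
$ apt-cache search text editor
$ apt-get install nano      # fetch, resolve dependencies, and install
```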
On a side note, I use OpenBSD for my firewall and let me simply say: I love the ports tree. Really a stroke of genius on Jordan's part.
However, the simple truth is that the average user we're all fond of blabbering about simply isn't equipped to deal with compilation as a concept. Sure, it may just be a simple 'make' command to us, but the idea of compilation is necessarily foreign and scary to the novice who simply wants some extra software.
And what exactly is the point of having each end-user compile his own packages? MacOS X only runs on Macintosh hardware, so cross-architectural support (which is what Ports was designed for) is sort of moot. Binary packages install more quickly, and remove any possible problems in compilation (since you needn't compile).
Subpackages(?) would be good (Score:1)
Basically some parts of the package should be optional.
Of course this could be extended to binaries as well: you could have a single package for several platforms, and the package installer would choose the binary for the correct platform.
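A minimal sketch of that selection step in shell, assuming a hypothetical package layout with one binary per platform under `bin/` (the directory scheme is invented):

```shell
#!/bin/sh
# Select the bundled binary that matches the host platform.
os=$(uname -s)     # e.g. Darwin, FreeBSD, Linux
arch=$(uname -m)   # e.g. ppc, i386
binary="bin/${os}-${arch}/app"
if [ -x "$binary" ]; then
    echo "installing $binary"
else
    echo "no prebuilt binary for ${os}-${arch}; falling back to source build"
fi
```

The fallback branch matters: a package that ships binaries for common platforms could still degrade to a source build elsewhere.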
Re:Subpackages(?) would be good (Score:2)
Subpackages and package variations are two major things missing from all the candidate systems. I mean, Windows can manage to do this, why can't we? Why can Windows users select "custom" during install but we have to take whatever the packager decides to give us?
One example: Dia. Under FreeBSD, the port and package require Gnome. But Dia will build just fine without the Gnome libraries. Every time I want to install or upgrade Dia, I have to go in and edit the Makefile.
Proposal 1A: All meaningful configuration options should be easily available to the user of ports and any successor to ports. All major configuration options should be easily available for packages. Let me choose "custom->without-gnome" when I install Dia.
Another example: KDE. This already exists as a meta package in FreeBSD, but a metapackage is not the same as a set of subpackages. Uninstall KDE and nothing actually gets uninstalled, as all the dependencies are still there. At an even finer grain of detail, maybe I don't want to install everything in kdegames. Maybe I just want shisen-sho and patience. Allow me a way to install just what I want out of a package. The Debian way of splitting packages into smaller packages (instead of subpackages) is not an optimal solution.
Proposal 1B: A port or package that contains optional or secondary components should allow the user to choose what components they want installed.
The defaults should remain for obvious reasons. But the package manager should allow -full, -minimal, and -custom installs. For a CLI installer, these could be switches (with -full being the default), and for GUI installers they should be selectable options. For some packages it won't make sense to have more than one option, but for most it will.
These proposals may mean the replacement of metapackages with superpackages. They will mean more work for the package developers, but that's outweighed by the benefits to the user.
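Concretely, a command-line installer under proposal 1B might be invoked like this. The flags shown are hypothetical; no existing pkg_add supports them:

```
$ pkg_add -full kdegames      # everything (the default)
$ pkg_add -minimal kdegames   # core files only
$ pkg_add -custom kdegames    # prompt: install shisen-sho? patience? ...
```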
Re:Subpackages(?) would be good (Score:1)
While this would theoretically be good, I fear that it isn't really feasible. Why?
Because most open-source programs were not designed this way. Whether a library dependency is compiled in or not changes the resulting binary; this means that, say, nethack compiled with X11 support and nethack compiled without it are fundamentally different binaries.
As such, these two would have to be different executables, of similar size. Including both in one package at download time would of course waste bandwidth.
But the situation gets exponentially worse. Consider a package which has n options (all independent of each other, for the sake of computation). Then we have 2^n different package configurations! Clearly, building each and placing them all in the same package to allow users to choose would eat bandwidth unacceptably. And we haven't even added all the different architectures (although on MacOS X there is admittedly only one).
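The blowup is easy to see with a bit of shell arithmetic:

```shell
#!/bin/sh
# With n independent build options there are 2^n distinct configurations.
for n in 1 2 4 8 16; do
    echo "$n options -> $((1 << n)) configurations"
done
```

This prints 2, 4, 16, 256 and 65536 configurations respectively; even a modest option count makes pre-building every combination hopeless.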
So let's say instead that each of these builds exists somewhere, and the package you have simply references them; it asks you which options you want compiled (or else picks the build best supported by your system) and then downloads the appropriate version.
How is this really any different from our current setup, where we instead have several versions of the same package for commonly requested options?
Re:Subpackages(?) would be good (Score:1)
But one solution to the problem of giant packages where only a small part is used is the one used in the NeXTstep installer packages. In the info file you can define a URL that the actual files should be downloaded from. So the package includes all the info about it (dependencies, size, version, an icon and so on) except the files.
The NeXT packages were limited to downloading the whole package from one URL, but there's nothing saying it wouldn't be possible to have each subpackage as a separate download. In fact, that is getting more and more common on Windows; one example is Mozilla. But then there is also a Mozilla package with everything included, so you don't have to download anything separately.
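That idea might look something like this in an info file. The field names are invented for illustration, in the spirit of the NeXT format rather than the actual one:

```
Name: Mozilla
Version: 0.9.8
Depends: gtk >= 1.2
BaseURL: http://example.org/packages/mozilla/base.tar.gz
SubPackage: mail      http://example.org/packages/mozilla/mail.tar.gz
SubPackage: chatzilla http://example.org/packages/mozilla/chatzilla.tar.gz
```

The installer would fetch only the subpackages the user actually selects.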
/Erik
Bandersnachi (Score:2)