Rendezvous Developer Stuart Cheshire Interviewed 145

overunderunderdone writes "Found this interview with Stuart Cheshire, the Apple employee who developed Rendezvous (a.k.a. Zeroconf) and co-chairs the ZEROCONF working group. He provides some interesting history behind Zeroconf, but I thought his ideas for the future of Rendezvous were more interesting. He envisions a single protocol for everything from the keyboard, hard disk, peripherals, to the net connection -- just one kind of socket in the back of your box."
This discussion has been archived. No new comments can be posted.

  • Number 1: Bolo
    Number 2: Apple Events (PPC) over TCP/IP.
    • BOLO ROCKED!! (Score:4, Insightful)

      by KelsoLundeen ( 454249 ) on Friday July 19, 2002 @02:41PM (#3918713)
      Bolo, hands down, rocked.

      I remember playing Bolo in the University of Michigan's massive "fishbowl" computer lab around 1993. And I remember Bolo sent the entire support staff into a frenzy: everyone was playing it.

      Of course, those were the days when the lab consisted of hundreds of PowerMac 6100's but only a handful of Windoze boxen.

      Anyway, Stuart, if you're reading this: Bolo was (is?) a fantastically cool game. Many hours when I shoulda been working were spent tooling around in top-down tanks, pummelling pillboxes.

      Slightly off-topic, but I'm afraid there's a new generation of comp-sci students out there who missed out on the glory days of Bolo. (Of course, I get misty-eyed when I hear someone mention TRS-80s and Z80 assembly language, but that's another story -- and another era missed out by today's new generation of computer hot-shots. Not to mention the whole mid-80's coin-op video game revolution. To think, there's a whole bunch of folks who don't know what it's like to stack a row of quarters on the top panel of a Pac Man or Donkey Kong stand-up game...)
      • Pac Man and Donkey Kong?!?! Those were weenie games. Real men stacked their quarters on a Defender/Stargate or Tempest or Sinistar.
        • Defender

          Probably one of the best (and most innovative, gameplay-wise) side scrolling games ever written...except for those pod ships.. I *hated* those little red fighters that flew out of them.

          Good sound effects too.

          Sinistar

          Run, Coward! :)

      • someone mention TRS-80s and Z80 assembly language, but that's another story -- and another era missed out by today's new generation of computer hot-shots

        Actually, the Z80 is used in the TI-82 through the TI-86 (the M68K was used in the more expensive TI-89 and TI-92s). So any computer hot-shot who goes through calculus or even algebra probably owns a computer with the Z80, and many have programmed for it in assembly language.
        • yup, the Z80 on my ti-83 (original) is what got me into CS in the first place. I loved playing around with the code in stuff like duckhunt and ztetris, and all of Joe Wingbermuehle's stuff. Ironically, I just recently found out I lived about 20 miles from Joe and go to the same school as him (go UMR!), and have never met the guy.
      • Bolo at the University of Michigan was top notch. But this was because of the crack team of CAEN employees that formed the world's greatest Bolo powerhouse that ever existed.

        _Red_
        Black Lightning
        Luxembourg
        white

        P.S. _Red_ and Lux were the authors of BoloStar and BoloShop, the original powertools for map editing. (hey, is that my horn? TOOT! TOOT!)
      • Bolo *soo* rocked.

        I just got a cable modem after 2 years without a decent net connection, and one of the first things I did was snag a copy of Bolo, and look for a game:

        http://bishop.mc.duke.edu/bolo/

        Unfortunately, no one was playing... but I prolly stink at it now anyway.

        Bolo Anecdote: The first time I heard about Netscape was during a little post-game chat between the players. One was from netscape.com (or was it still Mosaic Communications then?) and one from NCSA. A third party was dogging on the NCSA guy because his browser was about to get whooped by the newcomer.

        Ah, the nostalgia...

        - H
        • A month or two ago I checked out Bolo, too. I played a game with a friend in Dallas. Before we were done, another guy joined our game.

          So, there ARE people still playing.

          Bolo is a great game for the same reason all great games are great games: Sound fundamental concept/design.

          --Richard
      • I lost my Little Green Man in 1995. I've been looking for him ever since.

        Oh, sweet sweet Bolo, how I miss you. I only got to play it at a university once. (I'm in university now, so the glory days of Bolo predated my post-secondary years.)

    • by frankie ( 91710 ) on Friday July 19, 2002 @03:38PM (#3919020) Journal
      Hey, don't forget his (sadly unimplemented) contribution to improved modem communications:

      Number 3: It's the Latency, Stupid [google.com]

      Stuart's official title at Apple is "Wizard Without Portfolio", meaning smart guy not tied to any one department. He's earned that.
      • Ironically, the Apple Geoport telecom adapter, which has suffered so much criticism, may offer an answer to this problem. The Apple Geoport telecom adapter connects your computer to a telephone line, but it's not a modem. All of the functions of a modem are performed by software running on the Mac. The main reason for all the criticism is that running this extra software takes up memory and slows down the Mac, but it could also offer an advantage that no external modem could ever match. Because when you use the Geoport adapter the modem software is running on the same CPU as your TCP/IP software and your Web browser, it could know exactly what you are doing. When your Web browser sends a TCP packet, there's no need for the Geoport modem software to mimic the behaviour of current modems. It could take that packet, encode it, and start sending it over the telephone line immediately, with almost zero latency.

        Not trolling, just curious: Isn't this like a WinModem?

        • Yes, but this was in 1993 -- many years before anyone else (to my knowledge) was doing this sort of thing.
          • So why is he such a visionary if Winmodems are so bad? I hope the answer is something like "Winmodem manufacturers don't implement the optimizations he proposed, even though they could in principle." and not something like "Win-anything sux0rs, d00d."

            If these software optimizations are possible, then the open source winmodem drivers should eventually eclipse RS232-attached modems in low latency, right? (though this may be moot these days) Are {ISA,PCI,USB}-attached modems not hampered with these latency issues? Why don't more hardware tasks get moved into the CPU core if such optimizations are possible?

            • So why is he such a visionary if Winmodems are so bad?

              Winmodems aren't necessarily bad; they just have tradeoffs. For example, see this link [56k.com].

              I think the main reason people around here don't like them is because the companies that make them are usually pretty protective of their software drivers. So no one can write Linux drivers for them without some kind of massive reverse engineering.
        • It is the precursor to the WinModems.
      • Modem latency (Score:4, Interesting)

        by Animats ( 122034 ) on Friday July 19, 2002 @05:47PM (#3919734) Homepage
        John Carmack was, at one point, going to spend some time seeing if Winmodems (which are mostly host-side software) could be reimplemented with a different link-level protocol for better latency.

        The basic problem is historical. There are several layers of emulation in a modem for historical reasons, and each of them adds latency.

        Today's modems are actually synchronous devices with a block-oriented link level protocol connected to a parallel data bus inside the computer. But, for historical reasons, they pretend to be asynchronous byte-oriented devices driven off serial ports. This legacy introduces two sources of delay.

        The first source of delay is that modems that emulate serial ports clock data into the modem byte by byte, and not at a particularly fast rate. This introduces delay just moving the data from the CPU to the modem's buffers, even before it reaches the phone line.

        The second source of delay is the conversion of a byte stream to blocks. The modem sees a byte stream to be sent. It's probably a PPP byte stream, divided into packets, but, typically, the modem doesn't know that. The modem has to block that data up into blocks and send it. But the modem doesn't know where the packets begin and end. It has to guess. Typically, it guesses by using an "accumulation timer", which decides that no more bytes are coming when some period of time has elapsed with no data. This, of course, introduces yet more latency.

        These two effects interact, making things even worse. The delay in the accumulation timer has to be scaled to the simulated async "data rate", making the unwanted latency even longer.

        Modems today ought to be block devices, like Ethernet controllers. Then you'd just blast an IP datagram down to the modem over the PCI bus, and it would start going out on the wire immediately. But there's so much infrastructure built around async modem emulation and PPP that it's hard to change this now.
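
        (Rough numbers, as a quick sketch: assume a 1500-byte packet, a 115,200 bps DTE-to-modem link with 10 bits per byte on the wire, and a hypothetical 50 ms accumulation timer. The arithmetic, in Python:)

          # Back-of-the-envelope latency added by async serial-port emulation.
          PACKET_BYTES = 1500        # an Ethernet-sized IP datagram
          SERIAL_BPS = 115200        # DTE-to-modem async link speed
          BITS_PER_BYTE = 10         # 8 data bits + start bit + stop bit
          ACCUM_TIMER_S = 0.050      # hypothetical "no more bytes coming" timeout

          clock_in_delay = PACKET_BYTES * BITS_PER_BYTE / SERIAL_BPS
          total_extra = clock_in_delay + ACCUM_TIMER_S

          print("byte-by-byte clock-in delay: %.0f ms" % (clock_in_delay * 1000))
          print("plus accumulation timer:     %.0f ms of added latency" % (total_extra * 1000))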

      • Don't 56k modems, or at least one of the most common implementations, look for PPP packet boundaries and send the buffer immediately?

  • Besides the fact that it's 403'd, the single protocol / plug type is already here with Firewire and/or USB. HDs, modems, NICs, and mice all work on one of the two now. It's very nice, if not so greatly designed.
    • Single protocol/plug is already here, yup.

      All you need is Firewire & USB 2.0 (not 1.0) and you can hook up to everything.

      Hey wait. That's not a single plug. That's 2 plugs.

    • by zapf ( 119998 ) on Friday July 19, 2002 @02:40PM (#3918709)
      I think the only thing that's completely standardized across all computing platforms is the power cable. Can we just connect everything with power cables?
      • Actually Apple uses the Firewire port on the iPod to provide power... So maybe, we should all just use firewire and ditch the power cable.

        MAK
      • Are you mad?

        Power plugs aren't standardized at all. If you don't believe me, walk into the closest Radio Shack and ask for a power plug. Be sure to refuse to give any specification beyond that.

      • Hmmmm.... would that be MAINS [theregister.co.uk] voltage? *grin*

        *snap*, *crackle*, *smoke*
      • I bet we could. Wouldn't be the fastest connection, maybe...
        But try to imagine this: since almost all the electricity cables in the walls of your rooms, or even the house (and maybe even the neighbour's house and big parts of the rest of the world), are connected:
        You have a computer, connected to the "electricity-net". Now you decide to buy a scanner. You arrive at home with your freshly bought scanner and just plug it somewhere into your wall, et voilà, you can scan.

        Only problem is the big transformer you need for the mice...
  • He envisions a single protocol for everything from the keyboard, hard disk, peripherals, to the net connection -- just one kind of socket in the back of your box.

    Didn't apple already try this with firewire, and to a lesser extent, USB?
  • by Hitch ( 1361 ) <hitch@@@propheteer...org> on Friday July 19, 2002 @02:30PM (#3918636) Homepage
    I used to tell my parents that "of course you can set up the computer on your own". because everything on the back of the box was a different shape. and the mouse and keyboard (on their systems at least) were color coded. so it's no biggie. you just keep plugging until it FITS. now, it'll be the opposite. of course you can set up the computer. because it doesn't matter where you put the plug. they're ALL right.
  • It's called IPv6....done.
  • by d0n quix0te ( 304783 ) on Friday July 19, 2002 @02:32PM (#3918648)
    ... and in darkness bind them. Just wait until Microsoft adds their IP to the protocol and mangles it so that you have an MS universal protocol. All others pay royalties, of course.

    I am not very optimistic about universal protocols.
    • I believe you are referring to the ever famous "Embrace and Extend" (tm).
    • True, the whole "One Protocol" theory is a bit like putting all of your eggs in one basket, but I believe Mark Twain said, "Put all of your eggs in one basket, and watch that basket." I don't think that this is a bad thing. Based on where this is coming from (a techno-philanthropist partially working for a non-profit "Make-Things-Better" group), I don't think that we need to worry.
  • Lemme guess, this sounds great, but I am betting Apple would want a patent on this "Single Protocol"... Why not, right?

    Here is an opp for someone with a few spare bucks and a patent atty to stick it to the big boys: patent it now, grab it early. I mean, how hard can it be to write a patent that says "One protocol to subdue them all"?

    I am kidding of course; writing a patent abstract is a royal pain if you've ever done it -- if it's good enough to withstand scrutiny, that is.

    I'm game: 1 protocol for everything, let's call it HTTP. That would rock -- a native HTTP hard drive, just plug a HD into the network, instant web server.

    I mean, wouldn't it be REALLY cool to have EVERYTHING talking back and forth in HTTP? Then we would never have to worry about IIS or Apache security again -- it would ALL be swiss cheese.

    (I really hope all you slashdotters know the above is ALL in jest)
  • He envisions a single protocol for everything from the keyboard, hard disk, peripherals, to the net connection -- just one kind of socket in the back of your box.

    Wouldn't that be FireWire? It'll do storage, peripherals, and act as a transport for TCP/IP right now. No vision needed.

    --saint
      While wireless FireWire is being worked on, Rendezvous has it now, AFAIK.

      The sad part is the possibilities will be limited by access rights. Imagine viewing a DVD in one room, going to the kitchen to make popcorn and having the TV there synch up to the one showing the DVD. Don't think Hollywood wants that.
      • Apple is doing it anyway. Did you see the Rendezvous-enabled iTunes demo during the keynote? Phil was standing there with a TiBook containing an AirPort card. Jobs was at one of the desktop machines. Phil opens the TiBook, and his MP3 library instantly shows up in the iTunes window Jobs has open. Jobs starts playing a song from it. Phil closes the TiBook. The song stops and the library disappears. He opens it again. The library reappears.

        This is supposedly shipping early next year.
  • Rendezvous is great and all, but I doubt it will ever consume more of the poster's time than did Cheshire's most important contribution: the network tank game Bolo [stuartcheshire.org] *g*
  • Stuart Who?! (Score:3, Interesting)

    by JohnPM ( 163131 ) on Friday July 19, 2002 @02:43PM (#3918725) Homepage
    The only important thing to know about Stuart Cheshire is that he created the mind-numbingly addictive global internet phenomenon called Bolo [duke.edu].
    It was the first big non-text game to be played over the internet and it was huge when I was at uni 1993-96.
    He created it for his PhD and it's good to see he hasn't been idle since then!
  • I'm thinking the name of the protocol will begin with fire and end with wire. Wow, that even rhymes!
  • Umm yep, one interface with one company behind it. Nope, there won't be any innovation snuffing there, only stiff licensing fees for anyone who wants to make a product that uses it... hmmm... I worry... but on a lighter side, while you chat about this idea you could: A: Eat sharp glass B: Clean your ears out with super glue C: Drive surgical needles under your fingernails, then type your 400-page manifesto D: Sexually experiment with razor blades E: Jump rope with a metal chain (insulated handles of course) on an electrified plate carrying 2000 amps at 240 volts as you ponder how long it takes before you screw up F: Floss with a mono-filament thread or finally G: Go out and get a college education with a major in "Traversing the 12 Sephiroth in a pursuit to become God"
  • by g4dget ( 579145 ) on Friday July 19, 2002 @02:55PM (#3918800)
    People didn't invent all those different protocols because they like to make your life miserable, they invented them because they are optimized for different things. Firewire, for example, can be used for high speed networking as well, but people don't use it for that much because it is electrically not as nice. IR, Bluetooth, RS-232C, USB, Ethernet, Gigabit Ethernet, etc., they are all there for a reason.

    You can make things fast, cheap, simple, easy to configure, secure, etc., but not all at the same time. That is the first lesson of engineering, and it applies to peripheral connections as much as it does to other engineering problems.

    • No, they invented these different protocols because one did not exist at the time that suited their product's needs.

      And while the idea of having a ubiquitous protocol (sort of like the protocol equivalent of XML) would be beneficial in a lot of ways, it is somewhat naive, IMHO, to think that it could be achieved. New ideas come up all the time, and people have created new protocols, languages, and ways of exchanging those ideas simply because the idea does not fit within the scope of that which already exists.

      It's called innovation. As long as people continue to innovate, new ways of implementing those ideas (protocols) will need to be created.

    • by Sabalon ( 1684 ) on Friday July 19, 2002 @03:29PM (#3918957)
      IR is basically a serial port that uses IR instead of a wire to make the interconnect. Bluetooth uses 900Mhz to make the interconnect instead of LED's. USB is another serial protocol, just faster than RS-232. Same with firewire, just faster still. For the most part they are really just varied serial ports.

      Ethernet(10BT,100BT,1000BT) is a bit different.

      What we are seeing with firewire and USB is a progression of serial protocols towards an ethernet kinda environment, where there may be more than one device on the line at the same time and you can have little hubs/switches to coordinate the traffic.

      I wouldn't really say that many of them are optimized for one thing or another. IR is just as limited as an RS-232 port; it just made it easier on notebook designers to save space. USB was done since RS-232 had some serious speed limitations, Firewire for pretty much the same reason. Bluetooth - well... it was more of a way to eliminate wires... cool idea.
      • I believe that GSM uses the 900 MHz area (well, at least in Europe) while Bluetooth uses the 2.4 GHz area...
      • Everyone is leaving out a very important factor: COST! That's the reason different protocols exist at all. RS-232 is one of the cheapest to implement; many CPUs have it built right in on the chip. Also, it works over long distances (100+ feet), something that firewire, USB 2.0, IDE, etc. CAN'T do. The compromise: speed. On a desktop, it's not that bad -- you can drop the distance requirement and crank up the speed, but it takes a more expensive chip. Serial-type busses are still cheaper than trying to integrate an ethernet controller and a processor just to move packets around. Let's not forget that TCP/IP has a LOT of overhead compared to a serial bus, and you don't need routability when you're going from point A to point B. Different applications, different protocols. It's just a matter of cost.

        Anyone remember the Apple Desktop Bus back from early days? That was pretty slick in itself. Right idea, but it was just a tad too slow.
        • Good points about the cost and distances, but most of the devices designed for USB and firewire are not things you generally have 100 feet away from the computer.

          As for the cost, it would appear that the vendors have decided, cost regardless, that putting 6 USB ports on a machine is better than one or two serial ports. It gives the users more flexibility, and the speed makes their products seem faster to the user as well. For things like temperature sensors, serial is awesome (plus easy to program for).

          Ethernet doesn't have to use TCP/IP, but yes, it is overkill.

          What I want is something that is a generic bus that everything plugs into. VCR outputs, receiver in/out, tv, computer, mouse, keyboard, etc...

          That way the VCR can put data on the bus and the computer can pull it off and make use of it. Or a KVM would become as simple as having computer A take its input from keyboard device 0x245 and mouse device 0x243 and sending output to video device 0x599. Wanna switch? Send a signal out on the bus.

          Then again, I guess adding fiber and the logic would make my $20 mouse cost a LOT more. Guess I don't wanna pay $540 for a toaster that can send my TV an image of just how well done my toast is.
    • I think you're right that there's a reason for all the different protocols (efficiency), but I think we're going to see the same thing here that we see with software development in terms of people being "lazier" (and making one protocol) because they can.

      There's a reason that people wrote all that assembler code years ago... because they needed the speed. Nowadays, when I'm writing my Wonder Widget 2000 (tm) I "should" write assembler because it would be a lot more efficient, but I don't have to because everything's faster.

      Same thing here. We "should" use all the different protocols because they're more efficient, but as all the hardware gets faster, if I run at 1Gb or 1.2Gb, in reality for most applications, who cares? Ok, so the numbers are made up, but you get the idea.

      On top of that, there are benefits to being able to use standard stuff (maintenance, bugs, etc.).

      Hardware, just like software, will standardize because it can and there are side benefits.

    • They all are optimized for one thing: Moving digital data. The whole point of any port on the back of a computer (except for the power port, of course) is to transfer data from point A to point B. Also, the Universal Port doesn't have to be "optimized" for a single purpose. Just so long as you can teach a monkey how to use it, and the monkey is happy with the port's performance, then it works. Sure, you could make a port that is faster/cheaper/etc., but no one would buy it, because the Universal Port is everywhere, and it is easy enough, cheap enough, and fast enough.
  • If you spend any time around UI designers, quite a few of them think this way. We work hard to try and simplify the onscreen User Interface so that it becomes transparent to day to day work, but you get behind your computer and there are so many cables, you can't see anything else.

    Jef Raskin had a neat idea for a universal, hermaphroditic connector, where you don't even have male and female parts, you just slap two of the connectors together. It always fits, it always works. Neat.

    There's an actual picture of what I'm talking about in his book. It's kind of hard to visualize until you see it.
    • by Animats ( 122034 ) on Friday July 19, 2002 @07:30PM (#3920236) Homepage
      Hermaphroditic connectors are rarely used. They should be used whenever the application is symmetrical. Twisted-pair serial connections should have been hermaphroditic since their inception. Not only does this resolve the connector gender problem, it resolves the straight-through/crossover cable problem. There's only one kind of cable.

      The problem isn't going away. Ethernet null-modem cables are widely used with DSL modems, providing an unnecessary source of consumer confusion.

      The video production industry [telecast-fiber.com] gets it. The SMPTE standard connector for fibre-optic broadcast work is hermaphroditic, much to the relief of production crews.

      There ought to be hermaphroditic connectors in USB and RJ11-like form factors, but there aren't.

      • by Anonymous Coward
        Apple already supports part of the idea - recent PowerMacs and PowerBooks don't care if you use a straight or crossover cable - they sense what sort you use, and whether you connect to a hub/switch or directly to a computer, and set everything up right.

        Plug in the Ethernet cable, and it just works.

        Agreed that it would be lovely to get rid of female/male connectors.
    • a universal, hermaphroditic connector,
      Says the person who has obviously never tried to push two hermaphrodites together.
  • A single protocol for everything, eh? OK. So when someone finds an exploit, they can break EVERYTHING on your system/connected to your system. Wonderful. And let's not forget: 1 protocol = focal point for exploitation which means more manpower to break it which means gets broken even faster than an IIS box. Again, Wonderful.
  • I read somewhere else that this will go out and find all the printers (and other devices I guess) that are on your network so you have less work to do.

    Sounds like it will muss up a clean running IP network like IPX did. If so, no thanks.
    • Sounds like it will muss up a clean running IP network like IPX did. If so, no thanks.

      But nothing IPX can do is worse than what someone with a lot of bandwidth and a duplicate IP address can do.

      IPX without SAP/RIP spoofing was murder on really, really low-bandwidth WAN links or cheesy imitations thereof that used long-haul bridges.

      But IPX (and AppleTalk) had a lot of *good* things about them, too, that take much more work and much more complexity to achieve in IP. Automatic client node addressing -- it just works in IPX; in IP it takes a whole infrastructure (DHCP server, integration with DNS, etc.).

      Service location -- it just worked with IPX SAP. SLP (at least the little exposure I had with it and Netware 5) was mind-numbingly complicated and often relied on multicast.

      No shortage of addresses, either - IPX gave you 32 bits of network addressing and 48 bits of node addressing.

    • One of my favorite examples that I've been giving since the early days of Zeroconf is this: I have friends who have bought a TiVo Personal Video Recorder, and then liked it so much that they bought a second TiVo for the television in the bedroom. Now what is the problem? At night they turn on the bedroom television to watch a recorded episode of Seinfeld before they go to sleep, but they can't because it is recorded on the other TiVo. Imagine if any TiVo in your house could automatically discover and play content recorded on any other TiVo in your house. Sadly, I'm not aware of anyone from TiVo participating on the Zeroconf mailing list, so it may be a long time before we see anything like this, but I think you'll agree this would be a very cool product.
    Yeah, it might be a long time. Whoops! What's that sitting in my component rack? A ReplayTV? And it's doing what? Streaming video to the unit in my bedroom? Wow, that was quick.
  • by foobar104 ( 206452 ) on Friday July 19, 2002 @03:14PM (#3918897) Journal
    I'm amazed by how many posts there are talking about single-protocol-this and single-protocol-that. My favorites are the ones talking about how having a single protocol leads to licensing fees and restrictions, and the one about how a single protocol is insecure.

    Didn't you losers even read the article? Rendezvous is basically two things: self-assigned link-local IP addressing, and automatic service discovery. In other words, your computer can automatically assign a local IP address to itself, then discover services available on other computers via particular UDP packets. Get two computers in proximity to each other, and they'll be able to ``see'' each other's shared volumes. Get one computer connected (wirelessly or wired-ly) to a printer, and the computer will be able to ``see'' the printer.

    If you ever used Mac OS n (n poof.

    RTFA, indeed.
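
    (If you're curious what those ``particular UDP packets'' look like on the wire, here's a rough sketch, not a definitive implementation: it hand-builds a DNS PTR query and multicasts it to 224.0.0.251 port 5353, the multicast DNS group and port Rendezvous uses. The `_http._tcp.local' service type is just an example, and whether anything answers depends on what happens to be on your LAN.)

      import socket, struct

      MDNS_GROUP, MDNS_PORT = "224.0.0.251", 5353
      SERVICE = "_http._tcp.local"      # example service type, purely illustrative

      def build_ptr_query(name):
          # 12-byte DNS header: id=0, flags=0, one question, no other records.
          header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)
          # QNAME is a sequence of length-prefixed labels, terminated by a zero byte.
          qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
          question = qname + b"\x00" + struct.pack("!HH", 12, 1)   # QTYPE=PTR, QCLASS=IN
          return header + question

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.settimeout(3)
      sock.sendto(build_ptr_query(SERVICE), (MDNS_GROUP, MDNS_PORT))
      try:
          while True:
              data, peer = sock.recvfrom(4096)
              print("%d-byte DNS-format response from %s" % (len(data), peer[0]))
      except socket.timeout:
          pass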
    • DAMN IT. I previewed and everything. Please replace the last paragraph of the above with the following.

      If you ever used Mac OS n (n < 10) on an AppleShare network, think ``Chooser'' for IP networks. The ``Chooser'' was the greatest network human-computer interface ever. Plug the computer in, and poof.
    • The one starting:
      My hope is that in the future -- distant future perhaps -- your computer will only need one wired communication technology. It will provide power on the connector like USB and FireWire, so it can power small peripheral devices. It will use IP packets like Ethernet, so it provides your wide-area communications for things like email and Web browsing, but it will also use Zeroconf IP so that connecting local devices is as easy as USB or FireWire is today.
      • Now if devices also had drivers built into them that could be downloaded to your os, then I would be all set. How about that, you plug it in and as if by magic, the device installs itself? No drivers or anything.
        • What OS are you using? If it's not Windows, then I'm sure you'd hate this idea if you really thought about it. Companies would only provide drivers for Windows (kind of like they do now), but now there is no way to acquire a driver for the device because, as you say, there are no drivers or anything. That new NIC you bought for your Linux box? Install Windows. Or that new video card? Why, the driver only works in Windows.
    • The "single protocol" is TCP/IP.

      With Zeroconf, TCP/IP applications can discover each other over local links without network configuration.

      So you could have a row of ethernet ports on the back of your machine, and when you attached a keyboard to one of them, the keyboard and _that_ ethernet port would both assign themselves a link-local address, and the keyboard would run a DNS server that advertised that it was a keyboard available at whatever IP address it had selected for itself.

      The key is that link-local addresses are NOT routable across the internet, and can be claimed by the device without a central arbiter.

      This is pretty close to how USB already works, except it uses TCP/IP addresses and packets instead of USB ones.
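
      (To make the ``claimed by the device without a central arbiter'' part concrete, here's a toy sketch of just the address-selection step, assuming the 169.254/16 IPv4 link-local range used by the Zeroconf work; the conflict-detection probing is only described in a comment.)

        import random

        def pick_link_local():
            # IPv4 link-local addresses live in 169.254.0.0/16; the first and
            # last /24 of the range are reserved, so choose from 169.254.1.0
            # through 169.254.254.255.
            return "169.254.%d.%d" % (random.randint(1, 254), random.randint(0, 255))

        candidate = pick_link_local()
        print("candidate link-local address:", candidate)
        # Next step (not shown): ARP-probe for the candidate and, if some other
        # device already answers for it, pick a new one and try again.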

      -pmb
      • I thought I understood Rendezvous, but now I'm all confused again. What would be the advantage of using Ethernet to connect your keyboard to your computer? The ``only one plug'' idea is nice, but wouldn't there be a significant cost associated with putting a NIC and a TCP stack in every computer keyboard?

        I had the impression that we were just talking about computer-to-computer communication here, not computer-to-peripheral.
        • Not trying to sound harsh, but there's no need to call people losers. I think you could have calmly explained yourself, and your point would have been just as well taken.
        • What would be the advantage of using Ethernet to connect your keyboard to your computer? The ``only one plug'' idea is nice, but wouldn't there be a significant cost associated with putting a NIC and a TCP stack in every computer keyboard?

          Short answer: no.

          Long answer: There would probably be a cost savings, from using a standard connector, and only needing one codebase to maintain and debug. The complexity of TCP/IP is arguably less than that of USB!

          I had the impression that we were just talking about computer-to-computer communication here, not computer-to-peripheral.

          We are, but what's the difference? Your mouse contains a general purpose computer just as powerful as the computer you owned 10 years ago. For about $1. There's nothing to be gained in cost, complexity, or otherwise, from trying to simplify the design of peripherals just to satisfy some desire that they be subservient to the host machine.

          -pmb
  • I got some strange error... but hit the back arrow and the article came up...

    here it is:
    On Rendezvous, TiVo, and Parliamentary Titles
    An Exclusive Interview with Stuart Cheshire
    Wizard Without Portfolio at Apple Computer & Chairman of IETF ZEROCONF

    Soon after publishing my article entitled Rendezvous: It's Like a Backstage Pass to the Future, an e-mail appeared in my Inbox from some guy named Stuart
    who was very supportive and helped me gain further understanding of a few aspects of the Rendezvous technology. It wasn't until I got to the bottom of
    his e-mail that I noticed his signature. Ha! No wonder he knew so much about Rendezvous - he was Chairman of the ZEROCONF Working Group and was
    employed at Apple! After exchanging a couple of e-mails back and forth, I asked him if I could interview him during Macworld, and to my delight, he agreed.

    Stuart has been involved in a slew of computer science projects for the past number of years. He recently completed his Ph.D. in Computer Science at
    Stanford University, and holds B.A. and M.A. degrees from Sidney Sussex College in Cambridge, U.K. You can learn more about his background from his
    Web site. One of his overriding goals is to make IP networking easier to manage and better suited for use with various kinds of computing devices, which is
    why he has been so instrumental in getting IETF ZEROCONF off the ground.

    Jared: Thanks very much for agreeing to this interview in spite of your busy schedule and travel arrangements. You're actually in Japan at the moment, correct?

    Stuart: That's right. I'm in Japan for the week-long IETF (Internet Engineering Task Force) meeting. The IETF does most of its work via email, but three times a year we get together
    for face-to-face discussion. Generally two out of three meetings are in the United States, but the Internet is a worldwide phenomenon, not just an American thing, so usually at least
    one meeting a year is outside the USA.

    At this minute I'm sitting in the IPv6 Working Group meeting with my PowerBook, answering your questions via AirPort.

    Very cool! Now the ZEROCONF Working Group that is part of the IETF is responsible for developing and maintaining the open-standard Zeroconf networking protocols,
    dubbed Rendezvous by Apple. Can you give us a brief overview of these protocols and the history behind them?

    The initial seeds of Zeroconf started in a Macintosh network programmers' mailing list called net-thinkers, back in 1997 when I was still a PhD student at Stanford. We were
    discussing the poor state of ease-of-use for IP networking, particularly the lack of any equivalent to the old AppleTalk Chooser for browsing for services. I proposed that part of the
    solution might be simply to layer the existing AppleTalk Name Binding Protocol (NBP) over UDP Multicast.

    At the Orlando IETF meeting in December 1998 I discussed this idea with other people, and the following suggestion was made: trying to introduce an AppleTalk protocol into the
    IETF would not be easily accepted, but perhaps the existing IETF DNS packet format is semantically rich enough to hold exactly the same information as I proposed putting into an
    NBP/IP packet. I agreed with this suggestion - there's no need to invent a new IETF packet format just for the sake of it, if there's an existing packet format that can do the job
    perfectly well.

    The IETF is generally populated by people who care very little for ease-of-use, but the Area Directors of the Internet Area were sufficiently far-sighted that they believed that
    improving ease-of-use should be an important priority for the IETF, even though that was very much a minority view back then. Even today, it remains something of a minority
    view in the IETF. Most IETF people work for router vendors, ISPs, backbone providers, telephone companies, etc., and their focus is wide-area networking. If you work for a
    company that makes routers, you're not going to be very excited about technology that lets computers communicate directly, without needing a router. If you work for a company
    that sells a DHCP server, you're not going to be very excited about technology that lets computers communicate without needing a DHCP server. If you work for a company that
    sells DNS servers, you're not going to be very excited about technology that lets computers communicate without needing a DNS server. I'm sure you get the point.

    Despite this lack of general enthusiasm, the Area Directors went ahead and arranged a preliminary "Birds of a Feather" session (BOF) to discuss these issues, under the name
    "Networking in the Small" (NITS). We had two NITS BOF meetings, in March and July of 1999. Peter Ford from Microsoft helped me co-chair those meetings, and we gathered
    enough interest to warrant the formation of an official IETF Working Group, under the new name "Zero Configuration Networking", in September 1999. At that time, Erik Guttman
    from Sun volunteered to co-chair the new Zeroconf working group with me, and he has been invaluable in helping keep the work on-track and moving forward for all this time.

    The Zeroconf working group identified four requirements for "Zero Configuration Networking":

    1. Devices need an IP address.
    2. Users need to be able to refer to hosts by name, not by address.
    3. Users need to be able to browse to find services on the network.
    4. Future applications will need to be able to allocate multicast addresses.

    IPv6 already has self-assigned link-local addresses, but IPv4 did not, so we produced a specification for how IPv4 devices can obtain self-assigned link-local addresses.

    For name lookup, we have general agreement that DNS-format packets sent via IP Multicast are the right solution.

    For browsing, I worked out how to do the thing that was suggested to me back in 1998, and wrote a draft called "Discovering Named Instances of Abstract Services using DNS"
    (draft-cheshire-dnsext-nias-00.txt) which specifies how to do network browsing using just DNS-format query and response packets.

    These specifications provide what we need to make dramatic ease-of-use improvements for local IP networking. However, these solutions do remain controversial with some IETF
    participants. Although Apple is already shipping Rendezvous, we are continuing to work in the IETF Zeroconf Working Group to continue the ongoing development of these
    protocols. Apple's intent is that as the protocol specifications continue to improve as a result of helpful intelligent discussion in the IETF community, we will be updating our
    Rendezvous implementation to benefit from those improvements. This is a fairly normal state of affairs - most communications protocols continue to evolve and improve over time,
    and good companies have an ongoing process of updating their implementations to benefit from those improvements.

    Speaking of Apple, in addition to your role as Chairman for the ZEROCONF Working Group, you are also Wizard Without Portfolio at Apple Computer. I confess I have
    no idea what that means, so what exactly is it that you do at Apple?

    It is a pun on the British parliamentary title, "Minister Without Portfolio", which is like "Senator at Large" in the USA, meaning someone with general responsibilities, not restricted to
    one particular area. For me, what it means is that I try to make sure I'm always looking at the big picture, not limiting my thinking to one particular narrow field.

    Ah, I see. Steve Jobs at the Macworld keynote yesterday gave Rendezvous a significant spotlight during his Jaguar presentation. In particular, he demonstrated the
    integration of Rendezvous into iChat, iTunes, and several network printers soon to be released by Epson, HP, and Lexmark. This definitely proves that Rendezvous is
    just as useful for bringing plug-and-play, zero-configuration networking services to software and hardware devices as it is useful for networking computers
    themselves. Can you elaborate further on what kind of functionality Rendezvous brings to software and hardware devices that simply hasn't been possible in the past?

    I can't comment on specific Apple product plans, but I think you had some very interesting ideas in your "Backstage Pass to the Future" article. Rendezvous is not just about making
    current networked devices easier to use. It is also about making it viable to put networking (i.e. Ethernet) on devices that today use USB or Firewire, and it is also about making it
    viable to use networking in areas that you wouldn't have even considered before Rendezvous. Imagine a future world where you connect your television and amplifier and DVD player
    with just a couple of Ethernet cables, instead of today's spaghetti mess of composite video, S-Video, component video, stereo audio, 5.1 Dolby, Toslink optical audio cables, etc.

    One of my favorite examples that I've been giving since the early days of Zeroconf is this: I have friends who have bought a TiVo Personal Video Recorder, and then liked it so much
    that they bought a second TiVo for the television in the bedroom. Now what is the problem? At night they turn on the bedroom television to watch a recorded episode of Seinfeld
    before they go to sleep, but they can't because it is recorded on the other TiVo. Imagine if any TiVo in your house could automatically discover and play content recorded on any other
    TiVo in your house. Sadly, I'm not aware of anyone from TiVo participating on the Zeroconf mailing list, so it may be a long time before we see anything like this, but I think you'll agree
    this would be a very cool product.

    So you would say Rendezvous delivers almost FireWire-like ease of use for networked devices?

    I would go further than that. My long-term goal, from before I even started at Apple, is to eliminate the need for disparate incompatible technologies on your computer. Right now
    your computer may have SCSI, Serial, IrDA, Bluetooth, USB, Firewire, Ethernet and AirPort, all communication technologies that each work a different way.

    My hope is that in the future - distant future perhaps - your computer will only need one wired communication technology. It will provide power on the connector like USB and
    FireWire, so it can power small peripheral devices. It will use IP packets like Ethernet, so it provides your wide-area communications for things like email and Web browsing, but it will
    also use Zeroconf IP so that connecting local devices is as easy as USB or FireWire is today. People ask me if I'm seriously suggesting that your keyboard and mouse should use the
    same connector as your Internet connection, and I am. There's no fundamental reason why a 10Mb/s Ethernet chip costs more than a USB chip. The problem is not cost, it is lack of
    power on the Ethernet connector, and (until now) lack of autoconfiguration to make it work. I would much rather have a computer with a row of identical universal IP communications
    ports, where I can connect anything I want to any port, instead of today's situation where the computer has a row of different sockets, each dedicated to its own specialized
    function.

    Well it sounds like you're working on some wonderful ideas and technologies there, and I'm very excited to see Apple adopting Rendezvous so readily for Jaguar. I'm
    certain this will prove to be an important milestone in the evolution of networked computing. Stuart, thanks again for being here, and I wish you the best of luck in your
    dual roles as ZEROCONF Chairman and Wizard Without Portfolio at Apple!

    Thank you Jared.
  • Before encountering this article I had no idea in the world what Zeroconf was.

    Here's their web site. [zeroconf.org] They give a description on the main page.

  • What zeroconf is not (Score:5, Informative)

    by SideshowBob ( 82333 ) on Friday July 19, 2002 @03:41PM (#3919037)
    Zeroconf is not a protocol for letting devices talk to each other. As others have already pointed out, there are already protocols which are far better at doing that.

    Zeroconf is a protocol for letting devices discover that other devices exist -- without requiring a human to explicitly tell each device.

    Don't think that Zeroconf is trying to replace anything.

    As for IPv6, true, it already has link-local addressing. That's one of the four things Zeroconf does. The auto-discovery of *other* devices isn't built into IPv6.
  • I think getting all your devices (even your microwave ;) ) connected will be well done and easy. It's far easier to have one type of cable and protocol to communicate with every computer device and configure them. Some problems are: - speed: with Ethernet we are limited to Gigabit (or do you prefer optical everywhere...) compared to FireWire's future (1.6 Gbps to 3.2) - wireless speed: same problem, plus the battle between Bluetooth and 802.11b. As for the claim that one protocol means poor security, I'd say security will increase! More programmers on one protocol, and ONE recognized and universal implementation for all systems (Linux, Windows, Mac OS X, *BSD, etc.), so we can be sure they interoperate and are securely connected... but well, when can I go to the future? :-)
  • So Zeroconf is a protocol for letting devices discover that other devices exist -- without requiring a human to explicitly tell each device.

    By Jini, I think he's got it.
  • Stuart, the author of the addictive Mac tank game Bolo, granted permission several years ago for Windows and Linux versions of the popular game. By the time these were released, Mac Bolo was already out of development and they never really took off in popularity, but they are still developed and available.

    The Linux client uses SDL, the Windows client may, too.

    Get them here [winbolo.com]

    sm
  • Here's an idea I had. Why, instead of inventing new "standards" like Firewire and USB, don't we just use Ethernet for everything? Network cards are really cheap these days, and integrated into the motherboard.

    To avoid requiring a mouse to implement a TCP/IP stack, another protocol could be used, which would make it impossible for it to escape from the LAN. Then we'd plug the computer into the hub, and the mouse, printer, keyboard and everything else too.

    Of course this would be annoying if the network was really busy, but the motherboard could always have a second Ethernet connector, and if needed a second hub could be used. The advantages would be greater speed and possibly cheaper prices, as network cards cost next to nothing.

    Also, wouldn't that be a great way of troubleshooting devices? No more hours spent figuring out whether a device doesn't work because it's broken or because it's not supported. Just start Ethereal and watch whether it's sending anything.
    • You pretty much summarized it, except the part where you proclaim that a whole different networking stack should be created to "make it impossible for it to escape from the LAN."

      Mmmmkay.

      -pmb
      • That was not my main point. I just think it'd be a bit silly to implement something as complex as a TCP stack in a mouse. It should work perfectly with something much simpler. Maybe simple devices would use UDP and the more complex ones TCP.

        The part about a different protocol was just because I think it's more logical that way. Do we really want every user to configure a firewall to block all the mouse traffic from reaching the internet? It'd be much easier to have a new protocol and make routers block it by default. Also, I can't imagine a situation where it would be useful to have my keystrokes reach Australia (I'm in Spain).
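
        (To give an idea of how little a dumb device would actually need, here's a toy sketch of a "mouse" firing fixed-size UDP reports at a host. The link-local address, port, and 5-byte report format are made up purely for illustration.)

          import socket, struct, time

          HOST = ("169.254.10.20", 5000)    # hypothetical link-local host and port
          dx, dy, buttons = 3, -1, 0b001    # made-up movement deltas and button bits

          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          for _ in range(5):
              # One fixed 5-byte report: two signed 16-bit deltas plus a button bitmask.
              sock.sendto(struct.pack("!hhB", dx, dy, buttons), HOST)
              time.sleep(0.01)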
        • I just think it'd be a bit silly to implement something as complex as a TCP stack in a mouse.

          What do you think of the complexity of USB? I think USB is even more complex than TCP/IP.

          Do we really want every user to configure a firewall to block all the mouse traffic from reaching the internet?

          You wouldn't need to. The mouse and the port it was connected to would both allocate themselves link-local IP addresses, which are not routable on the Internet.

          You really should read up about Zeroconf before dismissing it as too complicated or lacking in security. It is neither.

          -pmb
    • Because Ethernet isn't deterministic (due to collisions you don't know when something will or will not be sent). In order to get around that you would need an expensive switch (one with more crossbar throughput than the total data load). Also you would probably need to do MAC address switching with partitioning in order to make each device work painlessly. This then requires much memory and overhead on the behalf of the switch. Lastly, ethernet is big relatively. USB chips cost less than a buck each (less than normal serial UARTs, thus the reason for widespread adoption). Plus USB has a standard power over cable implementation, and so does firewire. Ethernet has many different implementations of power over network cable, so you would have to use compatible hubs and devices for each manufacturer. So, as everyone has said before, there is a place for each protocol.
      • Because Ethernet isn't deterministic (due to collisions...

        Full duplex ethernet doesn't need CSMA/CD.

        Lastly, ethernet is big relatively. USB chips cost less than a buck each

        "Big relatively"? Nope, they cost the same. Some hardware vendors (that design their own chips), even combine all of the logic for FireWire, USB, Ethernet, & ATA, all onto the same one chip.

        -pmb
  • 1 cable - It's the epitome of Apple.
    Pretty soon even the power will be USB.
  • Here are some links from the IETF website: (documents also mentioned in the interview)

    Zero Configuration Networking (zeroconf) [ietf.org]
    Zeroconf IP Host Requirements [ietf.org]
    Dynamic Configuration of IPv4 Link-Local addresses [ietf.org]

  • by sfgoth ( 102423 ) on Friday July 19, 2002 @07:14PM (#3920161) Homepage Journal
    The "Single Protocol" Stuart is talking about is TCP/IP, not some newfangled daydream.

    -pmb