Wozniak Accepts Post At a Storage Systems Start-Up

Hugh Pickens writes "Apple co-founder Steve Wozniak is going back to work as chief scientist at Fusion-io, a start-up company that tweaks computers to let them tap vast amounts of storage at very quick rates. In the early days of Apple, Wozniak stood out as one of Silicon Valley's most creative engineers, demonstrating a knack for elegant computer designs that made efficient use of components and combined many features into a cohesive package. Wozniak will do similar work at Fusion-io, although this time with larger server computers and storage systems rather than PCs. 'I have a pretty quiet life, and I like to watch technology evolve,' says Wozniak. 'In this case, I like the people and the product, and said I would like some greater involvement.'"

Comments:
  • by larry bagina ( 561269 ) on Wednesday February 04, 2009 @11:45PM (#26733327) Journal
    The article is from the NY Times.
  • Re:My Hero! (Score:1, Informative)

    by Anonymous Coward on Thursday February 05, 2009 @12:03AM (#26733457)

    How was any of the crap Apple put out "ahead of its time?"

    IIRC, the Commodore 64 and the subsequent Amiga were much better products.

    The NeXT was the first decent system from that camp.

  • Re:My Hero! (Score:5, Informative)

    by bhtooefr ( 649901 ) on Thursday February 05, 2009 @12:16AM (#26733553) Homepage Journal

    OK, let's look at what was available in 1976, when the Apple-1 came out.

    In single-board computers, which the Apple-1 was... there was, what, the KIM-1? Amazingly primitive compared to the Apple-1.

    In backplane computers, there were the S100 bus machines, which cost significantly more to do what the Apple-1 could do with one board.

    Now, for the C64... it came out in 1982, no? Of course some features are going to be advanced beyond what the Apple II could offer at the time. Keep in mind, though, that the Apple II was still quite competitive against the C64.

    The Amiga... I'm not gonna dispute that it had better hardware than the Mac. (Although the Mac arguably had a more intuitive UI.) But I'll counter with the Apple IIGS, which had by far the best sound chip of anything in its time. (Yes, I'm fully aware that this sound chip was gimped by not offering stereo sound without an add-on board. But still.)

    Plus, the Apple II did offer quite a lot of expansion, which is something that many of its competitors lacked (or didn't do as well).

  • The "specialized BIOS" would be a ROM on the card itself - you can boot off of a PCIe SATA or SAS controller, just like you can boot off of a PCI PATA or SATA controller, just like you could boot off of an ISA ST506 or PATA controller.

  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Thursday February 05, 2009 @12:49AM (#26733791) Homepage

    If by "specialized BIOS", you mean "storage controller firmware on the card", then yes.

    This is hardly different from any SCSI or SATA controller on the market, only this one has the "disk" built-in. When the system is POSTing, it triggers every device's initialization routine, which is where a disk controller can let the BIOS know it has (bootable) disks up for grabs.
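
    To make that concrete, here's a rough sketch (Python, purely illustrative) of what sits at the start of such a card's expansion ROM. The 0x55AA signature and "PCIR" structure are the real legacy option ROM layout; the dump path and file name are assumptions:

        # Parse the header of a PCI expansion (option) ROM image.
        # A dump might come from /sys/bus/pci/devices/<BDF>/rom on Linux;
        # the file name below is an illustrative assumption.
        import struct

        def parse_option_rom(path):
            with open(path, "rb") as f:
                rom = f.read()
            if rom[0:2] != b"\x55\xAA":          # legacy option ROM signature
                raise ValueError("not an option ROM image")
            size = rom[2] * 512                  # image size, in 512-byte units
            pcir = struct.unpack_from("<H", rom, 0x18)[0]  # PCI data struct ptr
            if rom[pcir:pcir + 4] != b"PCIR":
                raise ValueError("missing PCI data structure")
            vendor, device = struct.unpack_from("<HH", rom, pcir + 4)
            return {"size": size, "vendor": hex(vendor), "device": hex(device)}

        print(parse_option_rom("rom.bin"))

    The BIOS scans for that signature during POST and calls the image's init entry point, which is exactly where the controller gets to register its "disk" as bootable.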

  • by HockeyPuck ( 141947 ) on Thursday February 05, 2009 @01:00AM (#26733847)

    Let me translate this for you...

    These are "LAN Solutions"
    "SCSI-over-IP" - iSCSI
    "RAID-over-IP" - some volume manager sitting on top of iSCSI

    "WAN Solutions":
    WAFS (Wide Area File Services) from the likes of Cisco or Riverbed. They optimize CIFS/NFS protocols which are horrible over high latency links.

    InfiniBand... dying... besides, InfiniBand used SCSI over IB through an IB-to-FibreChannel gateway.

    Don't forget tape and our friend FICON.

    Where can he be flexible? In the past few years we've seen the adoption of:

    -Virtual Tape Libraries (though they've been in the mainframe world for ages)
    -Deduplication in Hardware
    -Encryption of Data at Rest (in the tape drive; and now in the disk drive)

    We've got plenty of CPU power with multi-core systems... what about using that for compression? (Sorry, StorageTek did that in the '80s on their Iceberg, aka IBM's RVA subsystem.)

    I don't need more capacity; I need to be able to manage it more easily.

  • by symbolset ( 646467 ) on Thursday February 05, 2009 @01:13AM (#26733909) Journal

    In other words, Yet Another Half-Baked Clustered/Distributed Filesystem we can add to the list of dozens of failed distributed/clustered filesystems.

    Um... not even close?

    This isn't a clustered/distributed anything. It's also not "virtual".

    It's very real, very fast local storage for very real computers - servers mostly, but if you've got a few grand to blow on an extreme gaming rig, why not go the extra bit to make your levels load faster?

    Their quoted numbers per PCIe x4 device are >100,000 IOPS and >640 MB/s, both reading and writing, and independent benchmarks back that up. They're not kidding. The game has changed. This changes everything about how traditional workloads are configured: when you use a SAN vs. local disk, how much throughput your apps can get, how many VMs you can run in a server... basically everything in the server world except where you store the data. You still want to store the data in the SAN for redundancy reasons.
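
    For a sense of scale, here's a crude single-threaded 4 KiB random-read microbenchmark (a sketch; fio is the real tool for this, and the device path is an illustrative assumption). Note that headline numbers like 100K IOPS require many outstanding requests; queue depth 1 measures far less:

        import mmap, os, random, time

        PATH = "/dev/fioa"      # illustrative device node; a big file works too
        BLOCK, SECONDS = 4096, 10

        # O_DIRECT bypasses the page cache so we measure the device, not RAM;
        # it requires page-aligned buffers, which an anonymous mmap provides.
        fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
        size = os.lseek(fd, 0, os.SEEK_END)
        buf = mmap.mmap(-1, BLOCK)
        blocks = size // BLOCK

        ops, deadline = 0, time.time() + SECONDS
        while time.time() < deadline:
            os.preadv(fd, [buf], random.randrange(blocks) * BLOCK)
            ops += 1
        os.close(fd)
        print(f"{ops / SECONDS:.0f} IOPS (single thread, queue depth 1)")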

  • by symbolset ( 646467 ) on Thursday February 05, 2009 @01:28AM (#26733989) Journal

    The expected lifetime of the Intel X25-E is about 24 years in an enterprise server, and the products of the company in TFA are similar. Use of SLC, sparing, internal error detection and correction, wear levelling, and virtual block addressing adds up to devices that are not only ridiculously fast - they also last a long time and degrade gracefully [fusionio.com] (pdf).

    Both the Intel SSDs and the IODrive are internally massively parallel.
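
    As a toy illustration of two of those ideas - virtual block addressing (a logical block can live in any physical block) and wear levelling (writes go to the least-worn free block) - here's a Python sketch; real controllers do this in firmware, and the details here are assumptions:

        class WearLevelledFlash:
            def __init__(self, physical_blocks):
                self.erase_count = [0] * physical_blocks
                self.free = set(range(physical_blocks))    # pool includes spare area
                self.map = {}                              # logical -> physical

            def write(self, logical):
                old = self.map.get(logical)
                new = min(self.free, key=lambda b: self.erase_count[b])
                self.free.remove(new)
                self.map[logical] = new                    # remap, don't overwrite
                if old is not None:
                    self.erase_count[old] += 1             # old copy erased and
                    self.free.add(old)                     # returned to the pool

        flash = WearLevelledFlash(1000)
        for _ in range(100_000):                           # hammer ONE logical block
            flash.write(0)
        print(max(flash.erase_count))   # ~100 erases per block, not 100,000 on one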

  • by SQL Error ( 16383 ) on Thursday February 05, 2009 @01:38AM (#26734037)

    We have these in our production servers right now. They really deliver. They seem to top out at around 60,000 IOPS with EXT3 (the 100K figure was with XFS), but I've hit close to 800MB/s on sequential transfers.

  • by Hal_Porter ( 817932 ) on Thursday February 05, 2009 @01:56AM (#26734095)

    It would have an Option ROM, like RAID cards and every other bootable controller does
    http://en.wikipedia.org/wiki/Option_ROM [wikipedia.org]

    Not using a SATA interface should yield a good performance advantage.

    Rock on, Woz

    You could have an option ROM, or you could just emulate AHCI (or even ATA) in hardware up to the point the OS loads a native driver, and switch to native mode after that.

    Actually, I sort of wonder if you couldn't implement an AHCI controller which talks to flash directly. The bottleneck in SATA is the drive and the SATA bus, not the PCI Express AHCI controller. PCIe x16 can manage 4,000 MB/s compared to SATA2's 300 MB/s. SATA2 has plenty of bandwidth for a hard disk, but it looks like it will become a bottleneck with an SSD with lots of flash chips running in parallel. In fact, a 2.5-inch Intel Extreme SSD manages 250 MB/s now, pretty close to the SATA limit. A PCI Express card covered in NAND flash aimed at enterprise servers could easily be more parallel than that.

    AHCI is quite flexible (it has efficient NCQ, for example) and is already supported by all current OSes and BIOSes. There's no reason why you couldn't design a wide flash array on a PCI Express card that looks like a fast drive behind an AHCI controller to software.

    The upside to this is that there is no device driver or option ROM to develop and support.
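
    A quick back-of-the-envelope check of that bandwidth argument (the per-channel NAND figure is an assumption for illustration; the bus numbers are from the post above):

        SATA2_MBPS = 300          # SATA2 link limit
        PCIE_LANE_MBPS = 250      # PCIe 1.x per lane, each way (x16 = 4,000 MB/s)
        NAND_CHANNEL_MBPS = 40    # assumed sustained throughput per flash channel

        for channels in (2, 4, 8, 16, 25):
            agg = channels * NAND_CHANNEL_MBPS
            lanes = -(-agg // PCIE_LANE_MBPS)              # ceiling division
            limited = "SATA2-limited" if agg > SATA2_MBPS else "flash-limited"
            print(f"{channels:2d} channels: {agg:4d} MB/s, fits in PCIe x{lanes}, {limited}")

    Even eight parallel channels already exceed what SATA2 can carry, while a handful of PCIe lanes keeps scaling.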

  • by moosesocks ( 264553 ) on Thursday February 05, 2009 @02:38AM (#26734255) Homepage

    There is no shortage of opportunity. However, as with the early home computer market, there is a shortage of consensus on what a storage system actually does, other than "store stuff". That seems to be a world Wozniak does well in - the lack of standards meant the Apple II did well, the presence of standards meant that NeXT didn't. In the current computing world, where standards are everything (especially if they come with pretty holographic stickers), can he do much with the flexibility in the arena?

    I always thought that the Apple ][ did well because it was cheap and versatile, and that NeXT failed because their machines were outlandishly expensive and proprietary.

  • by Anonymous Coward on Thursday February 05, 2009 @04:48AM (#26734765)

    > 120GB SSD, which will make a given PC perform like something completely different

    I take it you've not actually used one of those pieces of garbage yet. My boss bought a dozen of them for our devs, and every single one of the devs has since rejected them. While the read and write speeds of the SSDs aren't bad, they're slow as crap when you mix small writes with reads - you know, like real-world workloads such as compiling software and certain database usage patterns. To do a small write, the flash has to read the entire block into memory and then write the entire block back. With our web service project written in C#/.NET, the compile time increased from just over three minutes with a SAS drive to over nine minutes with one of those SSD pieces of crap.
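
    The read-modify-write penalty described there is easy to model (erase-block size and timing below are illustrative assumptions, not specs for any particular drive):

        ERASE_BLOCK = 512 * 1024      # assumed erase-block size
        WRITE_SIZE = 4 * 1024         # one small host write
        ERASE_MS = 2.0                # assumed erase+program time per block

        # Naive controller: every small write rewrites a whole erase block.
        print(f"write amplification: {ERASE_BLOCK // WRITE_SIZE}x")      # 128x

        small_writes = 10_000         # e.g. object files during a compile
        print(f"{small_writes * ERASE_MS / 1000:.0f} s of erase time for "
              f"{small_writes * WRITE_SIZE // (1024 * 1024)} MiB of actual data")

    That's why scattered small writes (compiles, databases) crater on naive drives while big sequential transfers look fine.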

  • by sarabob ( 544622 ) on Thursday February 05, 2009 @05:14AM (#26734831)

    Which is why fusion-io is different from normal SSDs. The devices have 20% or more spare capacity and use a log-based FS with block mapping, so your writes don't go through the read/erase/rewrite cycle.

    Obviously there is a little slowdown once the 20% has been used up and it goes into garbage-collection mode, but there are plenty of white papers around about steady-state usage (i.e., once it has started GC), and you can opt to use even less of the physical capacity in order to get more performance. See http://www.oracle.com/technology/deploy/performance/pdf/OracleFlash15.pdf [oracle.com] for example.
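
    A minimal sketch of that log-structured idea (the details here are assumptions, not Fusion-io's actual design): small writes are appended into pre-erased space and a mapping table is updated, so nothing on the write path does a read/erase/rewrite; stale copies are reclaimed later by GC, which is what the spare capacity is for.

        class LogStructuredStore:
            def __init__(self, pages):
                self.data = [None] * pages
                self.head = 0                 # next pre-erased page in the log
                self.map = {}                 # logical page -> physical page

            def write(self, logical, payload):
                if self.head == len(self.data):
                    raise RuntimeError("log full: garbage collection needed")
                self.data[self.head] = (logical, payload)  # append-only, no erase
                self.map[logical] = self.head              # old copy is now stale
                self.head += 1

            def read(self, logical):
                return self.data[self.map[logical]][1]

        store = LogStructuredStore(pages=10)
        store.write(3, b"v1"); store.write(3, b"v2")       # overwrite = new append
        print(store.read(3))                               # b'v2'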

  • by Anonymous Coward on Thursday February 05, 2009 @08:04AM (#26735511)

    Not this crap again.
    The 100k writes is per block.
    If a block fails, it just doesn't get used again, and the chip continues working.
    Similarly, it's easy to say that once a block has hit 100k writes, you don't write to that block ever again.

    There's a metric fucking shit-ton of blocks in today's SSDs, and they will last longer than the warranty on your slow, noisy, power-hungry mechanical hard drive.

    What's that? You use harddrives that are out of warranty?
    LOL.
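
    The arithmetic behind that (capacity and write rate below are assumptions for illustration; 100k cycles is typical of SLC NAND of the era):

        CAPACITY_GB = 80              # assumed drive capacity
        CYCLES = 100_000              # erase cycles per block (SLC)
        WRITE_MBPS = 100              # assumed sustained write rate, 24/7

        total_pb = CAPACITY_GB * CYCLES / 1_000_000        # ideal wear levelling
        seconds = CAPACITY_GB * 1000 * CYCLES / WRITE_MBPS
        years = seconds / (3600 * 24 * 365)
        print(f"~{total_pb:.0f} PB writable; ~{years:.1f} years even at "
              f"{WRITE_MBPS} MB/s nonstop")                # ~8 PB, ~2.5 years

    And no real workload writes 100 MB/s around the clock, which is where the multi-decade lifetime estimates upthread come from.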

  • Actually, the Disk ][ was arguably a bigger achievement than the Apple ][ itself, and Woz designed that, too, with no knowledge of how storage worked at the time.

  • by afidel ( 530433 ) on Thursday February 05, 2009 @10:55AM (#26737119)
    Um, get better SSDs then. The Intel X25-E does ~70K IOPS of 4K 100% random writes with the SATA controller on my HP workstation; you need a very large array of traditional disks fronted by a great controller to match that.
  • by compro01 ( 777531 ) on Friday February 06, 2009 @11:07AM (#26751877)

    I don't see how your link shows I'm dead wrong. I said RAID-0 was good for bandwidth (transfer rates), as opposed to access times. For hard drives, access time is simply a function of distance and velocity with an obvious maximum; for SSDs it's effectively constant. So RAID-0 would do practically nothing for access time, aside from maybe increasing it by an irrelevant amount due to processing overhead. Your link shows exactly that in its HDTach benchmarks on page 3, though it also shows that the real-world (or at least the world according to SYSmark) difference in performance is marginal.
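
    The parent's point, as a toy model (drive numbers are illustrative assumptions for a 7200 rpm disk of the period): striping multiplies transfer bandwidth but leaves the mechanical access cost untouched.

        SEEK_MS = 8.5                 # average seek
        ROTATE_MS = 4.17              # half a revolution at 7200 rpm
        DISK_MBPS = 100               # sustained transfer rate per drive

        def io_time_ms(size_kb, drives):
            transfer_ms = size_kb / 1024 / (DISK_MBPS * drives) * 1000
            return SEEK_MS + ROTATE_MS + transfer_ms       # access cost NOT divided

        for size_kb in (4, 64, 100_000):
            t1, t2 = io_time_ms(size_kb, 1), io_time_ms(size_kb, 2)
            print(f"{size_kb:>7} KiB: 1 drive {t1:7.1f} ms, RAID-0 x2 {t2:7.1f} ms")
        # A 4 KiB random read barely improves; a 100 MB sequential read nearly halves.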
