
iSCSI for Mac OS X?

CoffeePlease asks: "Is anyone aware of development going on for iSCSI drivers for Mac OS X? I really need this but it's only out for Windows and Linux so far. I can't use the Linux drivers - they might run, but only as a command-line process, and I need other software to recognize the drives."
This discussion has been archived. No new comments can be posted.

  • Not to my knowledge (Score:5, Informative)

    by Matthias Wiesmann ( 221411 ) on Tuesday February 18, 2003 @12:43PM (#5326990) Homepage Journal
    • There is no mention of iSCSI on Apple's site, and I have never heard of Apple supporting iSCSI.
    • You cannot run Linux drivers on Mac OS X; drivers are one area where Darwin is very different from Linux. You might have more luck with BSD drivers.
    • Running drivers as a command-line process makes little sense: drivers don't have much of an interface. You might need the command line to start or install drivers, though.
    • Assuming an iSCSI driver existed for Mac OS X / Darwin, the system could see a remote device and handle it. Assuming the file system on this device were supported by Darwin, then all applications, with or without a GUI, would 'see' this file system.
    • by Draoi ( 99421 ) <draiocht@@@mac...com> on Tuesday February 18, 2003 @12:50PM (#5327060)
      You might have more luck with BSD drivers

      Nope, unfortunately. File system drivers for MacOS X would have to be written as a kext and would be IOKit-based. Totally un-BSD ...

      My first port of call would be the Darwin-Drivers [apple.com] mailing list and archives.

      • Incorrect. Although device drivers are IOKit-based, the VFS layer of Darwin (filesystems) is almost straight FreeBSD. There are problems with the fact that Darwin's locking is different, and probably VM issues as well. You definitely can't just take a FreeBSD filesystem and throw it into Mac OS X, but it would only be a porting effort, not a rewrite.
      • by mkldev ( 219128 )

        File system drivers for MacOS X would have to be written as a kext and would be IOKit-based. Totally un-BSD ...



        If it were a file system, you would be wrong (since the VFS layer is basically BSD), but it isn't a file system; it's a block device. So yes, it would be an I/O Kit KEXT.

        However, to say that it's "totally un-BSD" is a stretch. BSD drivers are relatively easy to port to Mac OS X if they are written correctly. The wrapper tends to be relatively small, with additional changes needed for synchronization where applicable.

    • It may not be Mac OS X or Darwin-specific, but it seems that Intel is working with Wasabi Systems/NetBSD [open-mag.com] on getting iSCSI to run on NetBSD. Some of this work may make its way up to Darwin, then into Mac OS X... but it will take a while, especially while Apple is focusing on Fibre Channel with its Xserve RAID units.
    • are you saying Apple doesn't support iSCSI? how is that possible? It starts with an "i" followed by a capital letter? How could they not support it?! Help, my world is collapsing around me!
  • by Anonymous Coward
    They just agreed on the final technical specs and most certainly haven't gotten around to officially ratifying those specs as the standard yet.
  • iSCSI on Linux... (Score:4, Informative)

    by molo ( 94384 ) on Tuesday February 18, 2003 @12:54PM (#5327098) Journal
    Looks like there is a project that is implementing it as a kernel-level driver (as it should be, IMO). It exports the device as /dev/sda (etc.). It certainly seems to be in active development.

    http://linux-iscsi.sourceforge.net/
    • Looks like an iSCSI client. How about an iSCSI server?

      Joe
      • The disk drives, block serving NAS boxes, protocol bridges, disk arrays, etc. are your iSCSI servers.

        You normally wouldn't run an iSCSI server on a server box; better to run some type of file server on those.
        • Wouldn't you run iSCSI as a server process for the same reasons you would run a network block device (nbd) server?

          The NBD stuff looks pretty fragile; I was hoping someone would build a more robust iSCSI server for Unix.

          Yah, I know. If I had the time, I would work on it.

          Joe
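For readers unfamiliar with what an nbd-style block server actually does, here is a toy sketch in Python. The wire format (opcode, offset, length) is invented for illustration and is not the real NBD or iSCSI protocol; the point is only that the client addresses raw byte ranges on a remote device rather than named files.

```python
import socket
import struct
import threading

# Toy block server: serves reads from an in-memory "disk" over TCP.
# The request format (1-byte opcode + 8-byte offset + 8-byte length, all
# big-endian) is invented for this sketch; real NBD/iSCSI are far richer.
DISK = bytearray(1024 * 1024)   # 1 MiB backing store
DISK[512:516] = b"data"         # plant something at offset 512

def recv_exact(conn, n):
    """Read exactly n bytes from a socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return buf

def serve_once(port):
    srv = socket.socket()
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        op, offset, length = struct.unpack(">BQQ", recv_exact(conn, 17))
        if op == 0:  # READ
            conn.sendall(bytes(DISK[offset:offset + length]))
    srv.close()

# Client side: read 4 bytes at offset 512, the way a block-device driver
# reads a sector by offset rather than by file name. Port is arbitrary.
t = threading.Thread(target=serve_once, args=(5555,))
t.start()
cli = socket.create_connection(("127.0.0.1", 5555))
cli.sendall(struct.pack(">BQQ", 0, 512, 4))
block = recv_exact(cli, 4)
cli.close()
t.join()
print(block)  # b'data'
```

A real block server would of course handle writes, multiple requests per connection, and error reporting; this only shows the addressing model.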
  • oh, whatever (Score:4, Informative)

    by Twirlip of the Mists ( 615030 ) <twirlipofthemists@yahoo.com> on Tuesday February 18, 2003 @01:14PM (#5327267)
    Nobody "really needs" iSCSI. iSCSI isn't real yet. It's still one of those "coming soon" things, like Infiniband. And we saw how well Infiniband worked out.

    iSCSI is just another way of solving a problem that's already been solved in any number of other ways. You need to attach a computer to some storage. Okay. You can use direct-attach FireWire storage. That has the advantage of being absolutely bullet-proof. Or you can use Fibre Channel to attach to a switched fabric. That works fine, too; just present a LUN to the Mac and let it format and mount it. Or you can use a network storage technology, like AppleShare or NFS. Those work fine, too, and the Power Macs, PowerBooks, and xServes are all shipping with 1000BASE-T, so that's not a problem.

    There are any number of ways to ameliorate your so-called "real need" for iSCSI. These work today. Use them.
    • Re:oh, whatever (Score:5, Interesting)

      by aderusha ( 32235 ) on Tuesday February 18, 2003 @01:55PM (#5327583) Homepage
      1) firewire - no management, just loose drives attached to single machines. might as well suggest a usb memory stick. firewire drives don't make a san.

      2) fibre channel - cost of entry approaching $50k. that adds up to about 50k reasons not to use it on a home machine or small network.

      3) network storage - not really a block level disk access technology, is it?

      i think the real reason is that very few people are using macs in a data center serving up real applications to lots of clients - the sorta place where a well managed SAN makes sense. now that the draft standard has been finalized (but not ratified), i imagine that you'll see iSCSI becoming more commoditized and more software being made available for more OSs.

      note that the windows and linux software packages are only iSCSI initiators - i haven't seen any software based iSCSI targets. this means that even if you did port the code to Darwin you'd still have to have some storage device out there speaking iSCSI to point your mac at.
      • Re:oh, whatever (Score:3, Interesting)

        by gerardrj ( 207690 )
        Except that in my experience most data centers are migrating to network-attached storage. People are tired of SANs, the high cost of parts and maintenance, the difficulty and expense of backing up, etc.
        Sure... it SOUNDS great to have 5TB of storage in one unit, but just exactly how do you keep current off-site backups? Oh... that's right... you maintain another 5TB unit in another location and run a dedicated T3 between them. yea... THAT'S affordable.

        The SAN was a great idea, Fibre Channel was a great idea, but it never reached critical mass, and now distributed network storage is taking over. iSCSI will probably make some inroads, but it will never replace a simple device with network ports acting as a server. The latter is cheap, easily understood, easily maintained, provides 95% of the functionality necessary to any IT department, and the clients are built in to every major OS on the market.
        • Re:oh, whatever (Score:1, Interesting)

          by Anonymous Coward
          There are a lot of apps, though, that demand disk-level access to things. Sure, you could push a lot of things to DFS (in the windows world) -- even, conceivably, a nice system like an XRAID. But for systems that demand disk-level access, such as SQL, Exchange, etc., a SAN or server-attached storage is the only way to go now.

          iSCSI would change that, and bring the power of SAN-type storage to a much better budget point. It's true, SANs are too limiting and too finicky and way, way too expensive. That's why the XRAID looks appealing, and it would be more appealing still if an attached XServe could serve up the disk space as an iSCSI drive.

          I'm less interested in seeing the proliferation of iSCSI clients, and much more interested in the proliferation of iSCSI target software. It'll make storage that much more flexible.
          • Re:oh, whatever (Score:3, Insightful)

            by gerardrj ( 207690 )
            But neither SQL nor Exchange requires disk-level access. Neither app has a special version for ATA, SCSI, SAN or NAS storage. VERY few apps actually require physical device access (sending commands directly to hardware instead of through the OS's abstractions), or even block-level access.
            The few that I can think of are disk repair and maintenance utilities.
            Running SQL servers over NAS, NFS or any other sort of "soft" mount works just fine.
            • Not true. Exchange 2000 will only run on direct-attached storage. I'm not sure about SQL Server. It will only run on NTFS; CIFS will not work.
              • Which is one of the major reasons that people have not been migrating to E2K as quickly as MS would have liked.

                But still... Exchange is NOT accessing the disk directly. MS just did a brain-dead thing and forced E2K to use the DASD storage stack instead of the one a level higher, at the "volume" level, where total abstraction is possible. One major reason MS did this was to lock people into their proprietary Microsoft Cluster Server solutions and higher licensing fees for those components. There is no technical reason that E2K must use local storage.
                At least that's my understanding of the state of things.
        • This doesn't fully match my experience, and I work for HP in their storage software division... Fortune 50, Fortune 100 and many Fortune 500 companies are using, or starting to set up, storage networks based on Fibre Channel (not much iSCSI yet).

          Most of the major data centers in the world are using FC interconnects and large storage arrays (5+ TB per array; the bigger -- in capacity, not physical size -- the better, generally). The trend is toward putting the data in storage arrays external to the servers; think about blade servers, for example.

          So, in other words, toward using storage networks regardless of whether they are based on FC interconnects, iSCSI, etc.

          ---

          As for your point about T3s and 5TB replicated storage... many companies do that with far larger amounts of storage using metro fiber, not T3s. This is not a Fibre Channel issue but a disaster recovery issue.
      • Re:oh, whatever (Score:3, Informative)

        by Anonymous Coward
        ------
        1) firewire - no managment, just loose drives attached to single machines. might as well suggest a usb memory stick. firewire drives don't make a san.
        ------

        no, firewire drives can be attached to many machines at the same time. there -are- firewire san solutions out there right now.

        ------
        2) fibre channel - cost of entry approaching $50k. that adds up to about 50k reasons not to use it on a home machine or small network.
        ------

        no, have a look at Apple's Xraid box. Much cheaper than $50k.

        ------
        i think the real reason is that very few people are using macs in a data center serving up real applications to lots of clients - the sorta place where a well managed SAN makes sense. now that the draft standard has been finalized (but not ratified), i imagine that you'll see iSCSI becoming more commoditized and more software being made available for more OSs.
        ------

        I think the main reason so few macs are doing that job is that there hasn't been a mac capable of doing that job.
        • Re:oh, whatever (Score:2, Informative)

          by Anonymous Coward
          The 50K number isn't off the mark, really. The $12,000 for the XRAID buys you just two arrays attachable to two servers. If you want more, you put the XRAID behind one or two (for redundancy) Fibre Channel switches, and those run as much as or more than the XRAID itself.

          One of the benefits of iSCSI would be that the very pricey FC switched network would be unnecessary -- you could leverage your LAN.
      • Perhaps CoffeePlease could be OK for now with something simpler.

        Set up NetBoot on the Macs, and export the file system from wherever the hell you want.

        It isn't the perfect solution, but it might just get you what you need in the meantime.
      • Re:oh, whatever (Score:3, Informative)

        by benh57 ( 525452 )
        Your Fibre Channel numbers are way off. You can get an Xserve RAID with Fibre Channel and 2.5TB for $10,999. The FC host PCI card from Apple is $499. 400MB/s throughput.
        • I got an almost-new 16-port Brocade 2800 switch off eBay, fully loaded with GBICs, for around $4k. Add about $400-600 per host for adapters, and $5k-11k for something like Apple's Xserve RAID, and you can get into a FC SAN for much less than $50k.

          If you want 2-gig Fibre Channel it would of course cost you more for the switch (2-3x, currently).

          Not that it is a cost-effective thing to do for small SANs... iSCSI isn't that cheap either, but in theory you do save on at least the switch costs (most companies are making iSCSI adapters instead of normal NICs for performance reasons).
      • 1) firewire - no managment, just loose drives attached to single machines. might as well suggest a usb memory stick. firewire drives don't make a san.


        I don't know what you mean by that, but there is some pretty cool firewire stuff over here that looks to be a bit better than a usb memory stick [micronet.com].

        -- james
        • from the link you've provided, i find one SAN machine - the sancube [sancube.com], which actually makes a SAN on firewire.

          except the max capacity is 720GB and it only supports a maximum of 4 hosts.

          hardly a SAN...
        • Firewire is great - I love it. The drives are dirt cheap. Firewire connections are fairly trouble-free, except on windows. But it's not really fast enough for working directly with the drive in Final Cut Pro.
      • IEEE1394b is peer to peer. SAN management is a job for a user-space tool. Fibre Channel is not a solution because of the immense expense of the hubs, transceivers, enclosures, et cetera; only the people who have already bought a complete SAN can afford it :P

        Firewire makes the most sense: 128 devices per channel (if one counts "the" controller, 127, but 1394b is peer to peer, so let's just call 128 what it is, the number between 127 and 129), with speeds of 3.2Gbps in limited implementations today and 1.6Gbps more readily available (and on copper; 3.2 requires fiber at this stage).

        The big missing piece for replacing IDE, SATA, Fibre Channel, and all flavors of SCSI except Ultra320 (which is still faster than the fastest 1394) is 1394-native hard drives. Why is no one making these? It would seem that there is a market, if for no reason other than simply lowering the cost of 1394-attached peripherals.

      • You talk about the alternatives being too expensive to use at home, but also that nobody uses Macs in a data center... It seems that you're just throwing things around.
        • actually, i think it's very consistent: given that a) he's asking "ask slashdot" instead of somebody like EMC, it's probably not for a major data center. and b) there aren't a hell of a lot of datacenters out there hosting large scale mac server installs (yet).

          which is why iscsi would actually be nice on a mac, as a software implementation would probably be cheap and would utilize commodity hardware and would be totally accessible for the home user. i manage a number of intel systems attached to an emc SAN at work, and i'd love to be able to implement something similar at home myself, which has me watching the emerging iscsi standard very closely for these same reasons - i just don't have the quid to drop a symmetrix in at home...
      • first:
        ...about 50k reasons not to use it on a home machine or small network.
        then:
        ...very few people are using macs in a data center serving up real applications to lots of clients - the sorta place where a well managed SAN makes sense.
        uh, which one is it, then? if you're working in a real data center, you're presumably not in your home.
        and, not incidentally, i've still yet to have it explained to me why a block-level network storage system is a good thing compared to a network file system (though not NFS in particular), other than for developers who can't wrap their minds around any model that doesn't involve every PC having a "disk".
    • Um - I kinda do need it. I have been given an account on an iSCSI server, and will be given the drivers on my PC next week. Unfortunately the machine I could really best use it on is the Mac - where I do most of my video work.
    • We're not dead yet! (Score:1, Interesting)

      by Anonymous Coward
    • iSCSI is just another way of solving a problem that's already been solved in any number of other ways.
      absolutely right. and not even an improvement. i'd say quite the opposite. what's so great about the SCSI command set that makes people think it'll be such a wonderful networked protocol? there's lots of things it doesn't do that you'd like a network protocol to do. presumably many of these are addressed by the "i" up front, but why do this stupid layering? is elegance totally lost on modern programmers?
      this all doesn't even get into the question of whether a block-level network storage system is a good thing. can someone explain to me why it's an improvement over a good network file system? and please don't talk about problems with specific network file systems. we all know NFS sucks.
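To make the block-versus-file distinction in this debate concrete, here is a small Python sketch. An ordinary temp file stands in for a block device (a real one would be something like /dev/sda and would need root); the sector size and offsets are purely illustrative.

```python
import os
import tempfile

# An ordinary file stands in for a block device; on a real system this would
# be /dev/sda or similar. Sector size here is illustrative.
SECTOR = 512

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"\x00" * (20 * SECTOR))  # a 20-sector fake "disk"
tmp.close()

# Block-level access: the client names an *offset*, not a file. This flat,
# sector-addressed view is what iSCSI (or nbd) ships over the network; any
# file system structure on top of it is the client's problem.
fd = os.open(tmp.name, os.O_RDWR)
os.pwrite(fd, b"hello", 3 * SECTOR)   # write "sector 3"
data = os.pread(fd, 5, 3 * SECTOR)    # read it back by offset
os.close(fd)
print(data)  # b'hello'

# File-level access: the client names a *path*, and the server owns the block
# layout. That is the abstraction NFS or AppleShare ships over the network.
with open(tmp.name, "rb") as f:
    f.seek(3 * SECTOR)
    assert f.read(5) == b"hello"

os.remove(tmp.name)
```

The trade-off the commenters are arguing about is exactly this: block access lets the client run any file system (and apps that want raw sectors), while file access lets the server coordinate sharing between many clients.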
  • Obligatory comment on why command-line is more powerful, and why wouldn't you want that...

  • by hmccabe ( 465882 ) on Tuesday February 18, 2003 @02:03PM (#5327630)
    It's called iSCSI and it's on every platform but Apple's? It seems a bit like Apple naming Rendezvous GnuIPdetect.
    • by Anonymous Coward
      I think most of the people here are very confused about what iSCSI is. It is a protocol that bridges IP and SCSI. Basically, this means that if you purchase an iSCSI device, you can attach that device to your network and attach your SCSI devices to the other side of it. Now your SCSI devices look like network devices such as hard drives or the like.

      iSCSI has NOTHING to do with Apple. It is just the name of the IP/SCSI protocol...
      • by porkchop_d_clown ( 39923 ) <mwheinz&me,com> on Tuesday February 18, 2003 @09:27PM (#5331691)

        As someone writing SRP drivers for a living - iSCSI is a protocol that allows you to send SCSI commands between two machines linked by TCP/IP. It doesn't "bridge" IP and SCSI - it's not like you can use it to ping your hard drive.


        The intent of iSCSI is to allow people to build SANs without having to shell out actual money for a fibre channel installation.
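To ground the "SCSI commands over TCP/IP" description, here is a sketch of the 48-byte Basic Header Segment (BHS) that prefixes every iSCSI PDU, per the draft spec (later RFC 3720). The field layout is simplified and the values are illustrative; this is not a working initiator, just the framing idea.

```python
import struct

# Minimal sketch of the fixed 48-byte iSCSI Basic Header Segment.
# Layout simplified from the draft spec (later RFC 3720); values illustrative.
def make_bhs(opcode, lun, task_tag, data_len):
    bhs = bytearray(48)
    bhs[0] = opcode & 0x3F                    # opcode lives in the low 6 bits
    bhs[4] = 0                                # TotalAHSLength: no extra headers
    bhs[5:8] = data_len.to_bytes(3, "big")    # DataSegmentLength (3 bytes)
    bhs[8:16] = lun.to_bytes(8, "big")        # logical unit number
    bhs[16:20] = struct.pack(">I", task_tag)  # Initiator Task Tag
    return bytes(bhs)

# Wrap a SCSI-command-style PDU for LUN 0: this is essentially all iSCSI
# does -- carry ordinary SCSI commands and data inside TCP segments.
pdu = make_bhs(opcode=0x01, lun=0, task_tag=0xCAFE, data_len=512)
print(len(pdu))        # 48
print(pdu[0] & 0x3F)   # 1
```

The actual protocol adds login/negotiation phases, sequencing, and optional digests on top of this header, which is why hardware iSCSI adapters existed even before the spec was ratified.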

    • actually, get this right. rendezvous is based on (and quite heavily supported by) the zeroconf standards working group [zeroconf.org]. yes, it can even work on winblows, theoretically speaking, of course.
  • What on earth do you want to use this for?
  • I am about to finish work on a Fibre Channel driver for Mac OS X and QLogic's SANBlade family of adapters. Just waiting on my Apple Xserve RAID to show up so I can finish testing (my current test array is toast, aka an expensive brick).

    Anyway I have been looking at writing or extending the driver to support QLogic's iSCSI capable adapters.
