[gpfsug-discuss] Anybody running GPFS over iSCSI?

Aaron Knister aaron.knister at gmail.com
Sat Dec 15 14:44:25 GMT 2018


Hi Kevin,

I don’t have any experience running GPFS over iSCSI (although does iSER count?). For what it’s worth there are, I believe, two vendors in the FC switch space: Cisco and Brocade. You can get a fully licensed 16 Gb Cisco MDS edge switch for not a whole lot (https://m.cdw.com/product/cisco-mds-9148s-switch-48-ports-managed-rack-mountable/3520640), and if you start with fewer ports licensed the cost goes down dramatically.

I’ve also found that, if it’s an option, IB is stupidly cheap on a dollars-per-unit-of-bandwidth basis and makes a great SAN backend.

One other thought is that FCoE seems very attractive. I’m not sure whether your arrays support it, but I believe FCoE gets you closer to FC performance and behavior than iSCSI does, and I don’t think there’s a huge cost difference. It’s even more fun if you have multi-fabric FC switches that can do both FC and FCoE, because you can in theory bridge the two fabrics (e.g. run FCoE from your NSD servers to 40G Ethernet switches that support DCB, connect those 40G switches to an FC/FCoE switch, and then address your 8 Gb FC storage and your FCoE storage over the same fabric).

-Aaron

Sent from my iPhone

> On Dec 13, 2018, at 15:54, Buterbaugh, Kevin L <Kevin.Buterbaugh at Vanderbilt.Edu> wrote:
> 
> Hi All,
> 
> Googling “GPFS and iSCSI” doesn’t produce a ton of hits!  But we’re interested to know: is anyone actually using GPFS over iSCSI?
> 
> The reason I’m asking is that we currently use an 8 Gb FC SAN … QLogic SANbox 5800s, QLogic HBAs in our NSD servers … but we’re seeing signs that, especially as we start using beefier storage arrays with more disks behind the controllers, the 8 Gb FC could become a bottleneck.
> 
> As many / most of you are already aware, I’m sure, while 16 Gb FC exists, there’s basically only one vendor in that game.  And guess what happens to prices when there’s only one vendor???  We bought our 8 Gb FC switches for approximately $5K apiece.  List price on a <vendor redacted mainly because I don’t even want to honor them by mentioning their name> 16 Gb FC switch - $40K.  Ouch.
> 
> So the idea of being able to use commodity 10 or 40 Gb Ethernet switches and HBAs is very appealing … from both a cost and a performance perspective (last I checked, 40 Gb is more than twice 16 Gb!).  Anybody doing this already?
> 
> As those of you who’ve been on this list for a while and don’t filter out e-mails from me (<grin>) already know, we’ve purchased a much beefier Infortrend storage array that I’m currently using to test various metadata configurations (and I will report back results on that when done, I promise).  That array also supports iSCSI, so I actually have our test cluster’s GPFS filesystem up and running over iSCSI.  It was surprisingly easy to set up (a rough sketch of that kind of setup is appended after this message).  But any tips, suggestions, warnings, etc. about running GPFS over iSCSI are appreciated!
> 
> Two things that I am already aware of are:  1) use jumbo frames, and 2) run iSCSI over its own private network (a quick sanity check for both is sketched after this message).  What else should I be aware of?
> 
> Thanks all…
> 
> Kevin Buterbaugh - Senior System Administrator
> Vanderbilt University - Advanced Computing Center for Research and Education
> Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633
> 
> 
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
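For anyone wanting to reproduce the kind of setup Kevin describes, here is a minimal sketch of bringing iSCSI LUNs up as GPFS NSDs. It assumes Linux open-iscsi (iscsiadm) on the NSD servers and the standard Spectrum Scale commands (mmcrnsd/mmcrfs); the portal address, device paths, NSD server names, and file system name are placeholders, not details from Kevin’s environment, and the NSD stanzas are abbreviated (see the mmcrnsd man page for the full set of attributes).

#!/usr/bin/env python3
"""Rough sketch: bring iSCSI LUNs up as GPFS NSDs.

Assumptions (not from the original thread): Linux open-iscsi
(iscsiadm) and standard Spectrum Scale commands (mmcrnsd/mmcrfs).
The portal address, device paths, server names and file system
name below are placeholders.
"""
import subprocess

PORTAL = "192.168.100.10"      # hypothetical array portal on the private iSCSI network
NSD_SERVERS = "nsd1,nsd2"      # hypothetical NSD server names

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Discover the array's iSCSI targets and log in to them.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
run(["iscsiadm", "-m", "node", "-p", PORTAL, "--login"])

# 2. Describe the resulting block devices in an NSD stanza file.
#    (Example device names; use persistent multipath names in practice.)
stanzas = []
for i, dev in enumerate(["/dev/mapper/mpatha", "/dev/mapper/mpathb"], start=1):
    stanzas.append(
        "%nsd:\n"
        f"  nsd=iscsi_nsd{i}\n"
        f"  device={dev}\n"
        f"  servers={NSD_SERVERS}\n"
        "  usage=dataAndMetadata\n"
        f"  failureGroup={i}\n"
    )
with open("/tmp/iscsi_nsds.stanza", "w") as f:
    f.write("\n".join(stanzas))

# 3. Create the NSDs, then a file system on top of them.
run(["mmcrnsd", "-F", "/tmp/iscsi_nsds.stanza"])
run(["mmcrfs", "testfs", "-F", "/tmp/iscsi_nsds.stanza"])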
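And a quick sanity check for the two tips Kevin already has in hand (jumbo frames and a dedicated iSCSI network). Again only a sketch: it assumes Linux with iproute2, and the portal address and the "shared" interface names are placeholders.

#!/usr/bin/env python3
"""Sanity check: jumbo frames and a dedicated iSCSI network.

Assumptions (not from the original thread): Linux with iproute2.
The portal address and interface names below are placeholders.
"""
import subprocess

PORTAL = "192.168.100.10"   # hypothetical iSCSI portal on the private SAN network

# Which interface does traffic to the portal leave from?
words = subprocess.run(["ip", "route", "get", PORTAL],
                       capture_output=True, text=True, check=True).stdout.split()
dev = words[words.index("dev") + 1]

# Jumbo frames: the interface (and every switch port on the path) should be MTU 9000.
with open(f"/sys/class/net/{dev}/mtu") as f:
    mtu = int(f.read())

print(f"iSCSI portal {PORTAL} is reached via {dev}, MTU {mtu}")
if mtu < 9000:
    print("Warning: jumbo frames do not appear to be enabled on this interface")
if dev in ("eth0", "bond0"):   # placeholder names for general-purpose interfaces
    print("Warning: iSCSI traffic appears to share a general-purpose network")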