[gpfsug-discuss] Used virtualization technologies for GPFS/Spectrum Scale

Jan-Frode Myklebust janfrode at tanso.net
Mon Apr 24 14:14:20 BST 2017


I agree with Luis -- why so many nodes?

"""
So if I went for 4 NSD servers, 6 protocol nodes, 2 TSM backup nodes and
at least 3 test servers, a total of 11 servers would be needed.
"""

If this is your whole cluster, why not just 3x P822L/P812L running a single
partition per node, hosting a cluster of 3x protocol nodes that do both
direct FC for disk access and also run backups on the same nodes? No
complications, full hardware performance. Then a separate node for test, or
a separate partition on the same nodes with dedicated adapters.

But back to your original question.  My experience is that LPAR/NPIV works
great, but it's a bit annoying having to also run VIOS. I hope we'll get FC
SR-IOV eventually. LPARs with dedicated adapters naturally work fine too.
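
For reference, the NPIV mapping on the VIOS side is only a couple of
commands -- a rough sketch, where the vfchost/fcs adapter names below are
just examples (check lsnports/lsdev on your own VIOS):

  $ lsnports                              # list NPIV-capable physical FC ports
  $ vfcmap -vadapter vfchost0 -fcp fcs0   # map a client virtual FC adapter
  $ lsmap -npiv -vadapter vfchost0        # verify the mapping and client WWPNs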

VMware/RDM can be a challenge in some failure situations. ESXi likes to
pause VMs when a device goes into an APD (All Paths Down) or PDL (Permanent
Device Loss) state, which will affect every VM with access to that device
:-o
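
If it helps, that behaviour can be inspected and tuned per ESXi host through
the usual advanced settings -- a rough sketch only, please double-check the
option names against your vSphere release:

  esxcli system settings advanced list -o /Misc/APDHandlingEnable
  esxcli system settings advanced set  -o /Misc/APDTimeout -i 140
  # optionally have VMs killed on PDL so HA can restart them elsewhere
  esxcli system settings advanced set  -o /VMkernel/Boot/terminateVMOnPDL -i 1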

VMs without direct disk access are trivial.



  -jf


On Mon, Apr 24, 2017 at 2:42 PM, Luis Bolinches <luis.bolinches at fi.ibm.com>
wrote:

> Hi
>
> As tastes vary, I would not partition it so much for the backend, assuming
> there is little to no CPU overhead at the PHYP level (which depends). On
> the protocol nodes, because CTDB keeps locks coherent across all nodes
> (SMB), you would get more performance from fewer, bigger CES nodes than
> from more, smaller ones.
>
> Certainly an 822 is quite a server if we go back to previous generations,
> but I would still keep a simple backend (NSD servers) and a simple CES
> layer (the fewer nodes the merrier), and then on the client side go with as
> many micro-partitions as you like/can, since the effect on the cluster is
> less significant in the case of resource starvation.
>
> But it depends on workloads, SLAs and money, so I say try it, establish a
> baseline, and if it fulfils the requirements, go for it. If not, change it
> until it does.
> Have fun
>
>
>
> From:        "service at metamodul.com" <service at metamodul.com>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        24/04/2017 15:21
> Subject:        Re: [gpfsug-discuss] Used virtualization technologies for
> GPFS/Spectrum Scale
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hi Jonathan
> Today's hardware is so powerful that IMHO it might make sense to split a
> CEC into more pieces. For example, the IBM S822L has up to 2x12 cores and
> 9 PCIe Gen3 slots (4x x16 & 5x x8 lanes).
> I think that such a server is a little bit too big just to be a single NSD
> server.
> Note that I use a dedicated node for each GPFS service.
> So if I went for 4 NSD servers, 6 protocol nodes, 2 TSM backup nodes and
> at least 3 test servers, a total of 11 servers would be needed.
> IMHO 4x S822L could handle this, and a little bit more, quite well.
>
> Of course, blade technology or 1U servers could be used instead.
>
> With kind regards
> Hajo
>
> --
> Unix Systems Engineer
> MetaModul GmbH
> +49 177 4393994
>
>
> -------- Original Message --------
> From: Jonathan Buzzard
> Date: 2017.04.24 13:14 (GMT+01:00)
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] Used virtualization technologies for
> GPFS/Spectrum Scale
>
> On Mon, 2017-04-24 at 12:28 +0200, Hans-Joachim Ehlers wrote:
> > @All
> >
> >
> > does anybody use virtualization technologies for GPFS servers? If yes,
> > what kind, and why have you selected your solution?
> >
> > I am currently thinking about using Linux on Power with 40G SR-IOV for
> > the network and NPIV/dedicated FC adapters for storage. As a plus I can
> > also assign only a certain number of CPUs to GPFS (lower license
> > cost / you pay for what you use).
> >
> >
> > I must admit that I am not familiar with how "good" KVM/ESX are with
> > respect to direct assignment of hardware. Hence the question to the group.
> >
>
> For the most part GPFS is used at scale, and in general all the
> components are redundant. As such, why you would want to allocate less
> than a whole server to a production GPFS system is somewhat beyond me.
>
> That is, you will have a bunch of NSD servers in the system, and if one
> crashes, well, the other NSD servers take over. The same goes for protocol
> nodes, and in general the total file system size is going to be hundreds
> of TB, otherwise why bother with GPFS.
>
> I guess there is currently potential value in sticking the GUI into a
> virtual machine to get redundancy.
>
> On the other hand, if you want a test rig, then virtualization works
> wonders. I have put GPFS on a single Linux box, using LVs for the disks
> and mapping them into virtual machines under KVM.
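>
> Something along these lines is all it takes (the volume group, LV and
> guest names here are just placeholder examples):
>
>   lvcreate -L 20G -n gpfs_nsd1 vg_test                    # one LV per NSD
>   virsh attach-disk gpfs-node1 /dev/vg_test/gpfs_nsd1 vdb --persistent
>
> and then run mmcrnsd/mmcrfs against /dev/vdb inside the guests as usual.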
>
> JAB.
>
> --
> Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
> Fife, United Kingdom.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
>
>
> Ellei edellä ole toisin mainittu: / Unless stated otherwise above:
> Oy IBM Finland Ab
> PL 265, 00101 Helsinki, Finland
> Business ID, Y-tunnus: 0195876-3
> Registered in Finland
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>