[gpfsug-discuss] Used virtualization technologies for GPFS/Spectrum Scale

Luis Bolinches luis.bolinches at fi.ibm.com
Mon Apr 24 13:42:51 BST 2017


Hi

As tastes vary, I would not partition the backend so heavily. That 
assumes the CPU overhead at the PHYP level is little to nothing, which 
depends on the setup. On the protocol nodes, because CTDB has to keep 
locks consistent across all nodes (SMB), you will get more performance 
out of a smaller number of bigger CES nodes than out of more, smaller 
ones.

Certainly an S822 is quite a server if we go back to previous 
generations, but I would still keep a simple backend (NSD servers) and 
simple CES (the fewer nodes the merrier), and then on the client side go 
with as many micro-partitions as you like/can, since the effect on the 
cluster is less severe if those are starved of resources.

But it depends on workloads, SLAs and money, so I say: try it, establish 
a baseline, and if it fills the requirements, go for it. If not, change 
it until it does. Have fun



From:   "service at metamodul.com" <service at metamodul.com>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   24/04/2017 15:21
Subject:        Re: [gpfsug-discuss] Used virtualization technologies for 
GPFS/Spectrum Scale
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi Jonathan
today's hardware is so powerful that imho it might make sense to split a 
CEC into more "pieces". For example, the IBM S822L has up to 2x12 cores 
and 9 PCIe Gen3 slots (4 x16-lane & 5 x8-lane).
I think that such a server is a little bit too big just to be a single 
NSD server.
Note that I use a dedicated node for each GPFS service.
So if I were to go for 4 NSD servers, 6 protocol nodes, 2 TSM backup 
nodes and at least 3 test servers, a total of 15 servers is needed.
Imho 4x S822L could handle this, and a little bit more, quite well.

Of course, blade technology or 1U servers could be used instead.
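
As a quick back-of-the-envelope check of that claim, a minimal Python 
sketch (the per-LPAR core counts are illustrative assumptions, not 
sizing recommendations):

# Check that 4x S822L (2x12 cores each) could host the 15 LPARs above.
cores_per_server = 2 * 12
servers = 4

# role: (number of LPARs, assumed cores per LPAR)
lpars = {
    "nsd":      (4, 8),
    "protocol": (6, 6),
    "tsm":      (2, 4),
    "test":     (3, 2),
}

total_lpars = sum(n for n, _ in lpars.values())
total_cores = sum(n * c for n, c in lpars.values())
print(f"{total_lpars} LPARs, {total_cores} cores needed, "
      f"{servers * cores_per_server} cores available")
# -> 15 LPARs, 82 cores needed, 96 cores available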

With kind regards
Hajo

-- 
Unix Systems Engineer
MetaModul GmbH
+49 177 4393994


-------- Original Message --------
From: Jonathan Buzzard 
Date: 2017.04.24 13:14 (GMT+01:00) 
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] Used virtualization technologies for 
GPFS/Spectrum Scale 

On Mon, 2017-04-24 at 12:28 +0200, Hans-Joachim Ehlers wrote:
> @All
> 
> 
> does anybody use virtualization technologies for GPFS servers? If yes,
> what kind, and why have you selected your solution?
> 
> I am currently thinking about using Linux on Power with 40G SR-IOV for
> the network and NPIV / a dedicated FC adapter for storage. As a plus I
> can also assign only a certain number of CPUs to GPFS (lower licence
> cost / you pay for what you use).
> 
> 
> I must admit that I am not familiar with how "good" KVM/ESX are with
> respect to direct assignment of hardware. Thus the question to the group.
> 

For the most part GPFS is used at scale, and in general all the
components are redundant. As such, why you would want to allocate less
than a whole server to a production GPFS system is somewhat beyond me.

That is, you will have a bunch of NSD servers in the system, and if one
crashes, well, the other NSD servers take over. It is similar for
protocol nodes, and in general the total file system size is going to be
hundreds of TB, otherwise why bother with GPFS.

I guess there is currently potential value in sticking the GUI into a
virtual machine to get redundancy.

On the other hand, if you want a test rig, then virtualization works
wonders. I have put GPFS on a single Linux box, using LVs for the disks
and mapping them into virtual machines under KVM.
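
As a minimal sketch of that sort of rig, in Python shelling out to the 
LVM and libvirt command-line tools (the volume group, guest names, LV 
size and the vdb target are illustrative assumptions, and the KVM guests 
are assumed to be defined already):

#!/usr/bin/env python3
# Carve one LV per guest out of an existing volume group and attach it
# to the KVM guest as a second virtio disk, to serve later as a GPFS NSD.
import subprocess

VG = "vg0"                                           # assumed volume group
GUESTS = ["gpfs-node1", "gpfs-node2", "gpfs-node3"]  # assumed KVM guests
LV_SIZE = "20G"                                      # assumed NSD size

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for i, guest in enumerate(GUESTS, start=1):
    lv = f"nsd{i}"
    # Logical volume that will back this guest's NSD.
    run(["lvcreate", "-L", LV_SIZE, "-n", lv, VG])
    # Attach it as /dev/vdb (vda assumed to be the guest's OS disk).
    run(["virsh", "attach-disk", guest, f"/dev/{VG}/{lv}",
         "vdb", "--persistent"])

Inside the guests the /dev/vdb devices can then be turned into NSDs with 
mmcrnsd and the cluster built up as usual.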

JAB.

-- 
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




Ellei edellä ole toisin mainittu: / Unless stated otherwise above:
Oy IBM Finland Ab
PL 265, 00101 Helsinki, Finland
Business ID, Y-tunnus: 0195876-3 
Registered in Finland

