[gpfsug-discuss] Manager nodes

Simon Thompson (Research Computing - IT Services) S.J.Thompson at bham.ac.uk
Tue Jan 24 16:34:16 GMT 2017


Thanks both. I was thinking of adding four (we have a storage cluster spanning two DCs, so the plan was to put two in each and use them as quorum nodes as well, plus one floating VM to guarantee that only one site is quorate in the event of someone cutting a fibre...)
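
Roughly what I have in mind, as a sketch only (node names are made up, not our real ones):

    # two quorum/manager nodes in each DC
    mmchnode --quorum --manager -N dc1-mgr1,dc1-mgr2,dc2-mgr1,dc2-mgr2
    # the floating VM as the fifth, tie-breaking quorum node
    mmchnode --quorum -N quorum-vm
    # sanity-check the designations
    mmlscluster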

We pretty much start at 128GB RAM and go from there, so this sounds fine. It would be good if someone could comment on whether token traffic goes via IB or Ethernet; maybe I can save myself a few EDR cards...

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Jan-Frode Myklebust [janfrode at tanso.net]
Sent: 24 January 2017 15:51
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Manager nodes

Just some data points, in the hope that they help...

I've seen metadata performance improvements from turning down hyperthreading from 8 threads/core to 4 threads/core on Power8. It also helped to distribute the token managers over more nodes (6+) rather than fewer.
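
Token managers are picked from the nodes you have designated as manager, so spreading the role out is just a matter of designating more of them. Something like this is what I mean (node names invented, just a sketch):

    # designate six nodes as managers so the token workload can spread out
    mmchnode --manager -N mgr01,mgr02,mgr03,mgr04,mgr05,mgr06
    # show which nodes currently hold the cluster/filesystem manager roles
    mmlsmgr
    # per-node token manager statistics, run on a manager node
    mmdiag --tokenmgr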

I would expect this to flow over IP, not IB.
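
If you want to sanity-check what your daemons are actually configured to use, something along these lines should tell you (attribute values will obviously vary per cluster):

    # the daemon node name/interface is what GPFS RPC traffic is bound to
    mmlscluster
    # see whether a subnets override or RDMA settings are in effect
    mmlsconfig subnets
    mmlsconfig verbsRdma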




-jf


On Tue, 24 Jan 2017 at 16:18, Buterbaugh, Kevin L <Kevin.Buterbaugh at vanderbilt.edu> wrote:
Hi Simon,

FWIW, we have two servers dedicated to cluster and filesystem management functions (and 8 NSD servers).  I guess you would describe our cluster as small to medium sized … ~700 nodes and a little over 1 PB of storage.

Our two managers have 2 quad-core (3 GHz) CPUs and 64 GB RAM.  They’ve got 10 GbE, but we don’t use IB anywhere.  We have an 8 Gb FC SAN and we have them connected to the SAN so that they don’t have to ask the NSD servers to do any I/O for them.

I do collect statistics on all the servers and plunk them into an RRDtool database.  Looking at the last 30 days, the load average on the two managers is in the 5-10 range.  Memory utilization seems to be almost entirely dependent on how parameters like the pagepool are set on them.
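
The collection itself is nothing fancy; roughly along these lines (file names made up), driven from cron every five minutes:

    # one-time: create an RRD holding the 1-minute load average (~1 year at 5-minute steps)
    rrdtool create /var/lib/rrd/mgr-load.rrd --step 300 \
        DS:load1:GAUGE:600:0:U RRA:AVERAGE:0.5:1:105120
    # from cron, every 5 minutes:
    rrdtool update /var/lib/rrd/mgr-load.rrd N:$(cut -d' ' -f1 /proc/loadavg)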

HTHAL…

Kevin

> On Jan 24, 2017, at 4:00 AM, Simon Thompson (Research Computing - IT Services) <S.J.Thompson at bham.ac.uk> wrote:
>
> We are looking at moving manager processes off our NSD nodes and on to
> dedicated quorum/manager nodes.
>
> Are there some broad recommended hardware specs for the function of these
> nodes?
>
> I assume they benefit from having lots of memory (for some value of "lots",
> probably a function of the number of clients, files, and expected open
> files, and probably incalculable in practice, so some empirical evidence
> may be useful here). (I'm going to ignore the docs that say you should have
> twice as much swap as RAM!)
>
> What about cores: do they benefit from high core counts or high clock
> rates? For example, would I benefit more from a high core count at a lower
> clock speed, or from higher clock speeds with fewer cores? Or is memory
> bandwidth more important for manager nodes?
>
> Connectivity: does token management run over IB or only over the
> Ethernet/admin network? I.e. should I bother adding IB cards, or just put
> fast Ethernet on them (my clients/NSDs all have IB)?
>
> I'm looking for some hints on where I would benefit most from investing,
> versus where I can keep to budget.
>
> Thanks
>
> Simon
>

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


