[gpfsug-discuss] Protocol node recommendations

Frank Tower frank.tower at outlook.com
Sat Apr 22 20:22:23 BST 2017


Hi,


Thanks for the recommendations.

We now have to choose between two options:


- take 3 nodes with round-robin DNS that handle both protocols

- take 4 nodes, split CIFS and NFS, and still use round-robin DNS for the CIFS and NFS services (a rough DNS sketch follows).
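
For illustration, a rough sketch of the round-robin DNS entries for either option (hypothetical names and CES IP addresses, BIND zone-file syntax):

    ; option 1: one name for both protocols across 3 nodes
    ces.example.com.    300  IN  A  10.10.0.11
    ces.example.com.    300  IN  A  10.10.0.12
    ces.example.com.    300  IN  A  10.10.0.13

    ; option 2: separate names for the CIFS and NFS services across 4 nodes
    cifs.example.com.   300  IN  A  10.10.0.21
    cifs.example.com.   300  IN  A  10.10.0.22
    nfs.example.com.    300  IN  A  10.10.0.23
    nfs.example.com.    300  IN  A  10.10.0.24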


Regarding your recommendations, 256 GB of memory per node could be a plus if we mix both protocols in such a case.


Is the spreadsheet publicly available, or do we need to ask IBM?


Thanks for your help,

Frank.

________________________________
From: Jan-Frode Myklebust <janfrode at tanso.net>
Sent: Saturday, April 22, 2017 10:50 AM
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Protocol node recommendations

That's a tiny maxFilesToCache...

I would start by implementing the settings from /usr/lpp/mmfs/*/gpfsprotocolldefaul* plus a 64GB pagepool for your protocol nodes, and leave further tuning until you actually see issues.
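
A minimal sketch of what that could look like, assuming your protocol nodes are grouped in the default 'cesNodes' node class (adjust the class name and values to your setup):

    # apply a 64G pagepool only to the protocol nodes
    mmchconfig pagepool=64G -N cesNodes

    # verify the setting and what a protocol node is actually using
    mmlsconfig pagepool
    mmdiag --config | grep pagepool    # run this one on a protocol node

The new pagepool normally takes effect after GPFS has been restarted on those nodes.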

Regarding sizing, we have a spreadsheet somewhere where you can input some workload parameters and get an idea of how many nodes you'll need. Your node config seems fine, but one node seems too few to serve 1000+ users. We support a maximum of 3000 SMB connections per node, and I believe the recommendation is 4000 NFS connections per node.


-jf
On Sat, 22 Apr 2017 at 08:34, Frank Tower <frank.tower at outlook.com> wrote:
Hi,

We have around 2 PB of GPFS (4.2.2) here, accessed through an HPC cluster with the GPFS client on each node.

We will have to open GPFS to all our users over CIFS and kerberized NFS, with ACL support for both protocols, for around 1000+ users.

Users have different use cases and needs:
- some will do random I/O across a large set of open files (~5k files)
- some will do large writes with 500 GB-1 TB files
- others will do sequential I/O with ~10k open files

NFS and CIFS will share the same servers, so I thought of using SSD drives and at least 128 GB of memory with 2 sockets.

Regarding tuning parameters, I was thinking of the following (a rough sketch of applying them follows the list):

maxFilesToCache 10000
syncIntervalStrict yes
workerThreads (8 * cores)
prefetchPct 40 (for now, to be adjusted if needed)
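
For context, a rough sketch of how I'd apply these to the protocol nodes only, assuming they sit in the 'cesNodes' node class and taking 16 cores per node as an example for the 8 * cores rule:

    # hypothetical example, values to be adjusted
    mmchconfig maxFilesToCache=10000,syncIntervalStrict=yes,prefetchPct=40 -N cesNodes
    mmchconfig workerThreads=128 -N cesNodes    # 8 * 16 cores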

I read the wiki page 'Sizing Guidance for Protocol Node', but I was wondering if someone could share their experience/best practices regarding hardware sizing and/or tuning parameters.

Thanks in advance,
Frank
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss