[gpfsug-discuss] GPFS 4.1.1 without QoS for mmrestripefs?

Daniel Vogel Daniel.Vogel at abcsystems.ch
Fri Jul 10 15:19:11 BST 2015


For "1", we use the quorum node (a quorum node without disks) to run "start disk" or "restripe file system".
For "2", we use kernel NFS with cNFS.

I set cnfsNFSDprocs to 64 to configure the number of NFS server threads. Is this correct?

gpfs01:~ # cat /proc/fs/nfsd/threads
64

I will verify the settings in our lab using the following configuration (a quick verification sketch follows below):
mmchconfig worker1Threads=128
mmchconfig prefetchThreads=128
mmchconfig nsdMaxWorkerThreads=128
mmchconfig cnfsNFSDprocs=256
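
A minimal way to double-check that the values are in effect (an assumption on my part about how we will verify; note that some of these attributes may only take effect after the GPFS daemon is restarted):

mmlsconfig | grep -E 'worker1Threads|prefetchThreads|nsdMaxWorkerThreads|cnfsNFSDprocs'
cat /proc/fs/nfsd/threads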

daniel



From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On behalf of Sven Oehme
Sent: Saturday, 4 July 2015 00:49
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS 4.1.1 without QoS for mmrestripefs?


This triggers a few questions:

1. Have you tried running it only on a node that doesn't serve NFS data?
2. What NFS stack are you using? Is this the kernel NFS server that ships with Linux, meaning you use cNFS?

If the answer to 2 is yes, have you adjusted the nfsd threads in /etc/sysconfig/nfs? The default is only 8, and with the default you have a very small number of NFS threads from the outside competing with a much larger number of threads doing the restripe, so increasing the nfsd threads could help. You could also reduce the number of internal restripe threads to see whether that mitigates the impact.
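
As a rough sketch of that adjustment (assuming a RHEL/SLES-style kernel NFS server; on a cNFS cluster the cnfsNFSDprocs attribute is the cleaner way to manage this):

# in /etc/sysconfig/nfs -- raise the kernel nfsd thread count from the default of 8
RPCNFSDCOUNT=64

# or change it on the fly for the running nfsd
echo 64 > /proc/fs/nfsd/threads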

To try an extremely low value, set the following:

mmchconfig pitWorkerThreadsPerNode=1 -i

and retry the restripe. To reset it back to the default, run

mmchconfig pitWorkerThreadsPerNode=DEFAULT -i
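
As a quick check (an assumption about the usual way to verify this), "mmlsconfig pitWorkerThreadsPerNode" shows the value currently configured; with -i the change takes effect immediately and is persistent, so no daemon restart is needed.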

sven

------------------------------------------
Sven Oehme
Scalable Storage Research
email: oehmes at us.ibm.com
Phone: +1 (408) 824-8904
IBM Almaden Research Lab
------------------------------------------


From: Daniel Vogel <Daniel.Vogel at abcsystems.ch>
To: "'gpfsug main discussion list'" <gpfsug-discuss at gpfsug.org>
Date: 07/02/2015 12:12 AM
Subject: Re: [gpfsug-discuss] GPFS 4.1.1 without QoS for mmrestripefs?
Sent by: gpfsug-discuss-bounces at gpfsug.org

________________________________



Sven,

Yes, I agree, but using "-N" to reduce the load does not really help. If I use NFS as an ESX datastore, for example, the ESX I/O latency for NFS goes very high and the VMs hang. By the way, I use PCIe SSD cards: the "mirror speed" is perfect, but I/O over NFS is slow.
The GPFS cluster concept I use is different from GSS or traditional FC (shared storage): shared nothing over InfiniBand (no FPO), with many GPFS nodes holding NSDs. I know that the need to resync the file system with mmchdisk / mmrestripefs will come up more often in this setup. The one feature that would really help is QoS for the GPFS admin jobs; I hope we are not far away from this.

Thanks,
Daniel


From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On behalf of Sven Oehme
Sent: Wednesday, 1 July 2015 16:21
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS 4.1.1 without QoS for mmrestripefs?

Daniel,

As you know, we can't discuss future / confidential items on a mailing list.
What I presented as an outlook on future releases hasn't changed from a technical standpoint; we just can't share a release date until we announce it officially.
There are multiple ways today to limit the impact of restripe and other tasks. The best way is to run the task (using -N) on a node, or a very small number of nodes, that has no performance-critical role. While this is not perfect, it should limit the impact significantly.
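
For example (a sketch with hypothetical names: gpfs0 is the file system and nsd07 a node that does not export NFS; -N also accepts a node file or node class):

mmrestripefs gpfs0 -b -N nsd07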

sven

------------------------------------------
Sven Oehme
Scalable Storage Research
email: oehmes at us.ibm.com
Phone: +1 (408) 824-8904
IBM Almaden Research Lab
------------------------------------------


From: Daniel Vogel <Daniel.Vogel at abcsystems.ch>
To: "'gpfsug-discuss at gpfsug.org'" <gpfsug-discuss at gpfsug.org>
Date: 07/01/2015 03:29 AM
Subject: [gpfsug-discuss] GPFS 4.1.1 without QoS for mmrestripefs?
Sent by: gpfsug-discuss-bounces at gpfsug.org

________________________________




Hi

Years ago, IBM made plans to implement "QoS for mmrestripefs, mmdeldisk, …". While an mmrestripefs is running, NFS access performance is very poor.
I opened a PMR to ask for QoS in version 4.1.1 (Spectrum Scale).

PMR 61309,113,848:
I discussed the question of QOS with the development team. These
command changes that were noticed are not meant to be used as GA code
which is why they are not documented. I cannot provide any further
information from the support perspective.


Does anybody know more about QoS? My last hope was the "GPFS Workshop Stuttgart" in March 2015, with Sven Oehme as speaker.

Daniel Vogel
IT Consultant

ABC SYSTEMS AG
Headquarters Zurich
Rütistrasse 28
CH - 8952 Schlieren
T +41 43 433 6 433
D +41 43 433 6 467
http://www.abcsystems.ch

ABC - Always Better Concepts. Approved By Customers since 1981.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



