[gpfsug-discuss] gpfs performance monitoring

service at metamodul.com service at metamodul.com
Thu Sep 4 13:04:21 BST 2014


... , any "ls" could take ages.

>Check if you have large directories, either with many files or simply large in size.

>> It happens that the files are very large (over 100G), but usually
>> there are not many files.

>>> Please check that the directory size is not large.
In the worst case you have a directory that is 10MB in size but contains only one
file. Either way, GPFS must fetch the whole directory structure, which might cause
unnecessary IO. Hence my request that you check your directory sizes.
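As a quick check, the size of the directory inode itself (not the files it contains) can be inspected with find/ls. The path /gpfs/fs1 and the 1 MiB threshold are hypothetical examples; adjust them to your filesystem:

```shell
# Find directories whose own size (the directory object, not its contents)
# exceeds 1 MiB -- hypothetical path and threshold, adjust as needed.
find /gpfs/fs1 -type d -size +1M -exec ls -ld {} \;
```

Any directory that shows up here forces GPFS to read a multi-megabyte directory block even when a job looks up a single file in it.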


>Verify that your cache settings on the clients are large enough ( maxStatCache
>, maxFilesToCache , sharedMemLimit )
>> Will look at them, but I'm not sure what the best numbers will be on the
>> client. Obviously I cannot use all the memory of the client because those
>> clients are meant to run jobs....

Use lsof on the client to determine the number of open files. mmdiag --stats
(from memory) shows a little bit about the cache usage. maxStatCache does not
use that much memory.
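A rough sketch of that sizing check: count how many files the client actually has open and compare against maxFilesToCache. The /proc scan below is an lsof-free approximation, and the mm* commands are listed from memory, so verify the flags against your GPFS release:

```shell
# Approximate count of open file descriptors across all processes on this
# client; compare the result against maxFilesToCache.
open_fds=$(find /proc/[0-9]*/fd -maxdepth 1 -type l 2>/dev/null | wc -l)
echo "open fds: ${open_fds}"

# GPFS side (commands as recalled in the post -- verify on your version):
#   mmlsconfig maxFilesToCache maxStatCache
#   mmdiag --stats
```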


> Verify that you have dedicated metadata luns ( metadataOnly )
>> Yes, we have dedicated vdisks for metadata, but they are in the same
>> declustered arrays/recovery groups, so they share the same spindles.

That's IMHO not a good approach. Metadata operations are small and random; data IO
is large and streaming.

Just think of a highway full of large trucks while you try to reach your
destination on a fast bike. You will be blocked.
You have the same problem at your destination: if many large trucks want to
unload their cargo, there is no time for somebody with a small parcel.

That's the same reason why you should not access tape storage and disk storage
via the same FC adapter (streaming IO vs. random/small IO).

So even without your current problem and motivation for measuring, I would
strongly suggest having at least dedicated SSDs for metadata and, if possible,
even dedicated NSD servers for the metadata.
Meaning: have a dedicated path for your data and a dedicated path for your
metadata.
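To see how the current layout compares, the usage type of each NSD can be listed per filesystem. The filesystem name "gpfs1" is a hypothetical example, and the guard keeps the sketch runnable on nodes without GPFS installed:

```shell
# Show the "disk usage" column (metadataOnly / dataOnly / dataAndMetadata)
# for each NSD of a hypothetical filesystem "gpfs1".
if command -v mmlsdisk >/dev/null 2>&1; then
    mmlsdisk gpfs1
else
    echo "mmlsdisk not available on this node"
fi
```

If metadata NSDs show as metadataOnly but sit in the same declustered array as the data NSDs, they still compete for the same spindles, which is exactly the truck-and-bike situation described above.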

All from a users point of view
Hajo

