[gpfsug-discuss] Two new whitepapers published

Laurence Horrocks-Barlow laurence at qsplace.co.uk
Mon Oct 24 08:10:12 BST 2016


Hi Peter,

I've always been under the impression that this is a ballpark figure that changes some of the on-disk data structures to help parallel access to the filesystem. I try to estimate the number of clients and then add ~10% (depending on the cluster size, of course).

I have tested the default of 32 vs. 200 on a previous cluster, but didn't find a difference in performance when testing up to 50 clients concurrently with IOR, both random and sequential. Maybe the difference is more subtle than just throughput?
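
For reference, the tests were along these lines (a rough sketch only; the node count, block/transfer sizes, and paths below are illustrative rather than the exact parameters used back then):

   # Sequential write then read, one file per process, from 50 clients
   mpirun -np 50 --hostfile clients.txt ior -w -r -F -b 4g -t 4m -o /gpfs/fs0/iortest/seq

   # The same run with random offsets (-z)
   mpirun -np 50 --hostfile clients.txt ior -w -r -z -F -b 4g -t 4m -o /gpfs/fs0/iortest/rand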

What I've always found strange is that the value can be changed on the filesystem; however, I don't believe this change affects an already created filesystem.
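
To illustrate what I mean (a sketch only; the file system name fs0, the stanza file name, and the node count are placeholders):

   # Show the -n estimate the file system currently carries
   mmlsfs fs0 -n

   # The value can nominally be changed after creation ...
   mmchfs fs0 -n 330

   # ... but as far as I can tell only a value given at creation time,
   # e.g. expected clients plus ~10% headroom, shapes the on-disk layout
   mmcrfs fs0 -F nsd.stanza -n 330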

-- Lauz

On 21 October 2016 21:35:15 BST, Peter Childs <p.childs at qmul.ac.uk> wrote:
>
>Reading through them now, and they are well worth a read.
>
>How important is setting the correct cluster size at file system
>creation time, i.e. with "mmchfs -n 256"? How much margin of error can
>you get away with?
>
>We have a cluster of 240 nodes that was set up with the default
>setting of 32, and it's now about to grow to ~300 nodes. Is this
>likely to cause an issue that is difficult to fix? The manual says the
>value can be adjusted, but this white paper suggests not.
>
>Fortunately we're also migrating our storage to new hardware, so we
>have a good opportunity to get the setting right this time around.
>
>Has anyone got any stats on the benefits of getting it "right" vs.
>getting it wrong?
>
>
>
>
>Peter Childs
>Research Storage
>ITS Research and Teaching Support
>Queen Mary, University of London
>
>
>---- Andreas Landhäußer wrote ----
>
>On Fri, 21 Oct 2016, Laurence Horrocks-Barlow <laurence at qsplace.co.uk>
>wrote:
>
>Found it!
>
>thanks Yuri, for the two very interesting papers about the Metadata and
>replication.
>
>We have always used a rule of thumb of 5% for metadata, since we keep
>separate devices for metadata. As tiny fast disks aren't available
>anymore, we are getting a bunch of larger fast disks. We have never
>experienced a problem with the (more or less) amount of metadata ...
>
>         Andreas
>
>
>> Right down the bottom of the page under attachments.
>>
>> -- Lauz
>>
>> On 21 October 2016 07:43:40 BST, "Andreas Landhäußer"
>> <alandhae at gmx.de> wrote:
>>>
>>> Hi Yuri,
>>>
>>> Argh, I can't find them; the page was last updated on Aug 18 by
>>> JohnTOlson. Maybe it's internal and not open to the public?
>>>
>>> Ciao
>>>
>>>       Andreas
>>>
>>> On Fri, 21 Oct 2016, Yuri L Volobuev <volobuev at us.ibm.com> wrote:
>>>
>>>>
>>>>
>>>> Esteemed GPFS and Spectrum Scale users,
>>>>
>>>> For your reading enjoyment, two new whitepapers have been posted to
>>>> the Spectrum Scale Wiki:
>>>>
>>>> Spectrum Scale: Replication in GPFS
>>>> Spectrum Scale: GPFS Metadata
>>>>
>>>> The URL for the parent page is
>>>> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media
>>>>
>>>> The two .pdf documents are accessible through the Attachment section
>>>> at the bottom of the page.  Unfortunately, dW "spam prevention engine"
>>>> does a very good job preventing me from "spamming" the page to
>>>> actually add links.
>>>>
>>>> Best regards,
>>>>
>>>> Yuri
>>>>
>>>
>>> --
>>> Andreas Landhäußer                           +49 151 12133027 (mobile)
>>> alandhae at gmx.de
>>>
>>>
>>
>>
>
>--
>Andreas Landhäußer                              +49 151 12133027 (mobile)
>alandhae at gmx.de
>
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

