[gpfsug-discuss] mmap performance against Spectrum Scale

Sven Oehme oehmes at gmail.com
Fri Jan 12 20:57:24 GMT 2018


is this primarily read or write?

On Fri, Jan 12, 2018, 12:51 PM Ray Coetzee <coetzee.ray at gmail.com> wrote:

> Hey Sven, the latest client I've tested with is 4.2.3-6 on RHEL7.2
> (without the Meltdown patch).
>
> Hey Bryan, I remember that quote from Yuri; that's why I hoped some
> "magic" might have been done since then.
>
> Other attempts to improve performance I've tried include:
>
>    - Using LROC to improve the chance of a cache hit (unfortunately the
>    entire dataset is multiple TB; see the warm-up sketch after this list)
>    - Built an NVMe-based scratch filesystem (18x 1.8TB NVMe) just for
>    this purpose (job runtimes halved, but still nowhere near what NFS can
>    give)
>    - Made changes to prefetchPct, prefetchAggressiveness, disableDIO, and
>    some others, with little improvement.
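>
> One thing still on my list (a hypothetical sketch, not something tested
> here; warm_file is just an illustrative helper) is a warm-up pass that
> streams the dataset sequentially with large reads before the mmap job
> starts, so GPFS prefetch runs at full block size and the pagepool/LROC is
> populated before the 4K page faults begin:
>
>   #include <fcntl.h>
>   #include <stdio.h>
>   #include <stdlib.h>
>   #include <unistd.h>
>
>   /* 8 MiB reads are large enough to trigger sequential prefetch. */
>   #define WARM_BUF (8 * 1024 * 1024)
>
>   static int warm_file(const char *path)
>   {
>       int fd = open(path, O_RDONLY);
>       if (fd < 0) { perror(path); return -1; }
>
>       char *buf = malloc(WARM_BUF);
>       if (!buf) { close(fd); return -1; }
>
>       ssize_t n;
>       while ((n = read(fd, buf, WARM_BUF)) > 0)
>           ;   /* discard the data; the point is to populate the cache */
>       if (n < 0) perror("read");
>
>       free(buf);
>       close(fd);
>       return 0;
>   }
>
>   int main(int argc, char **argv)
>   {
>       for (int i = 1; i < argc; i++)
>           warm_file(argv[i]);
>       return 0;
>   }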
>
> For those interested, as a performance comparison: the same job runs in
> 1m30s on an aging Isilon, while GPFS takes ~38min on the all-NVMe scratch
> filesystem (roughly 25x slower) and over 60min on the spindle-based
> filesystem (roughly 40x slower).
>
> Kind regards
>
> Ray Coetzee
> Email: coetzee.ray at gmail.com
>
>
> On Fri, Jan 12, 2018 at 4:12 PM, Bryan Banister
> <bbanister at jumptrading.com> wrote:
>
>> You could put all of your data onto SSDs in a RAID1 configuration, so that
>> you avoid the insane read-modify-write penalties on writes (RAID1 mirrors
>> rather than computing parity) and the horrible seek thrashing that spinning
>> rust suffers on 4K random I/O (SSDs being a random-access medium).
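>>
>> For rough scale (back-of-envelope numbers, not measurements): a 7,200 rpm
>> drive sustains on the order of 100-200 random IOPS, so 4K random reads top
>> out around 150 IOPS x 4 KiB = ~0.6 MB/s per spindle, while a single NVMe
>> SSD can sustain hundreds of thousands of 4K random IOPS.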
>>
>>
>>
>> One of my favorite Yuri quotes: “The mmap code is like asbestos… best not
>> to touch it”.  He gave many reasons why mmap operations on a distributed
>> file system are incredibly hard to do well and not recommended.
>>
>> -Bryan
>>
>>
>>
>> *From:* gpfsug-discuss-bounces at spectrumscale.org
>> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of* Sven Oehme
>> *Sent:* Friday, January 12, 2018 8:45 AM
>> *To:* coetzee.ray at gmail.com; gpfsug main discussion list
>> <gpfsug-discuss at spectrumscale.org>
>> *Subject:* Re: [gpfsug-discuss] mmap performance against Spectrum Scale
>>
>>
>> what version of Scale are you using right now?
>>
>>
>>
>> On Fri, Jan 12, 2018 at 2:29 AM Ray Coetzee <coetzee.ray at gmail.com>
>> wrote:
>>
>> I'd like to ask the group about their experiences in improving the
>> performance of applications that use mmap calls against files on Spectrum
>> Scale.
>>
>>
>>
>> Besides using an NFS export from CES instead of a native GPFS mount, or
>> precaching the dataset into the pagepool, what other approaches are there
>> to offset the performance hit of the 4K I/O size?
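>>
>> One application-side experiment I've been considering is hinting the
>> kernel via madvise(2) so pages are faulted in larger batches instead of
>> one 4K fault at a time. This is a minimal sketch; whether GPFS honors
>> these hints varies by version, so treat it as something to test rather
>> than a known fix:
>>
>>   #include <fcntl.h>
>>   #include <stdio.h>
>>   #include <sys/mman.h>
>>   #include <sys/stat.h>
>>   #include <unistd.h>
>>
>>   int main(int argc, char **argv)
>>   {
>>       if (argc != 2) {
>>           fprintf(stderr, "usage: %s <file>\n", argv[0]);
>>           return 1;
>>       }
>>
>>       int fd = open(argv[1], O_RDONLY);
>>       if (fd < 0) { perror("open"); return 1; }
>>
>>       struct stat st;
>>       if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
>>
>>       char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
>>       if (p == MAP_FAILED) { perror("mmap"); return 1; }
>>
>>       /* Ask for aggressive readahead rather than on-demand 4K faults. */
>>       if (madvise(p, st.st_size, MADV_SEQUENTIAL) != 0) perror("madvise");
>>       /* Optionally pre-fault the whole mapping up front. */
>>       if (madvise(p, st.st_size, MADV_WILLNEED) != 0) perror("madvise");
>>
>>       /* Touch every page to measure effective fault-in bandwidth. */
>>       long psz = sysconf(_SC_PAGESIZE);
>>       volatile unsigned long sum = 0;
>>       for (off_t off = 0; off < st.st_size; off += psz)
>>           sum += (unsigned char)p[off];
>>
>>       munmap(p, st.st_size);
>>       close(fd);
>>       return 0;
>>   }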
>>
>>
>>
>> Kind regards
>>
>> Ray Coetzee
>>
>>
>