[gpfsug-discuss] LROC 100% utilized in terms of IOs

Sven Oehme oehmes at gmail.com
Wed Jan 25 21:29:50 GMT 2017


Have you tried leaving lrocInodes and lrocDirectories on and turning data
(lrocData) off?
Also, did you increase maxStatCache so LROC actually has some compact
objects to use?
If you send the values of maxFilesToCache, maxStatCache, and workerThreads,
plus the available memory on the node, I can suggest a starting point.
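
For what it's worth, a rough sketch of what those changes look like with
mmchconfig; the cesNodes node class and the maxStatCache value below are
only placeholders for your environment, so treat this as a starting point
rather than exact guidance:

    # keep inode and directory objects in LROC, stop caching file data
    mmchconfig lrocData=no,lrocInodes=yes,lrocDirectories=yes -N cesNodes

    # give LROC more compact objects to work with; maxStatCache changes
    # take effect after the GPFS daemon is restarted on the node
    # (mmshutdown/mmstartup), and 100000 is only an example value
    mmchconfig maxStatCache=100000 -N cesNodes

    # values to report back, as seen by the running daemon on the node
    mmdiag --config | grep -iE 'maxFilesToCache|maxStatCache|workerThreads'
    free -g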

On Wed, Jan 25, 2017 at 10:20 PM Matt Weil <mweil at wustl.edu> wrote:

>
>
> On 1/25/17 3:00 PM, Sven Oehme wrote:
>
> Matt,
>
> The assumption was that the remote devices are slower than LROC. There are
> some attempts in the code to not schedule more than a maximum number of
> outstanding I/Os to the LROC device, but this doesn't help in all cases and
> depends on what kernel-level parameters are set for the device. The best
> way is to reduce the maximum size of data to be cached into LROC (see the
> sketch at the end of this message).
>
> I just turned LROC file-data caching completely off. Most, if not all, of
> the I/O is metadata, which is what I wanted to keep fast. It is amazing:
> once you drop the latency, the I/Os go up way beyond where they ever were
> before. I guess we will need another NVMe.
>
>
> sven
>
>
> On Wed, Jan 25, 2017 at 9:50 PM Matt Weil <mweil at wustl.edu> wrote:
>
> Hello all,
>
> We are having an issue where the LROC device on a CES node gets overrun
> and sits at 100% utilization. Processes then start to back up waiting for
> the LROC to return data. Is there any way to have the GPFS client go
> direct if LROC gets too busy?
>
> Thanks
> Matt
>
> ________________________________
> The materials in this message are private and may contain Protected
> Healthcare Information or other information of a sensitive nature. If you
> are not the intended recipient, be advised that any unauthorized use,
> disclosure, copying or the taking of any action in reliance on the contents
> of this information is strictly prohibited. If you have received this email
> in error, please immediately notify the sender via telephone or return mail.
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
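
For the "reduce the max size of data to be cached into lroc" point quoted
above, assuming the standard LROC data tunable is what is meant here, a
minimal sketch would be to cap the per-file data size and then watch the
device; the 32768 figure is only an example, so check the documentation for
the exact units and default:

    # cache data in LROC only for files up to this size (example value)
    mmchconfig lrocDataMaxFileSize=32768 -N cesNodes

    # watch LROC hit rates and the NVMe device utilization afterwards
    mmdiag --lroc
    iostat -x 1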