[gpfsug-discuss] Confusing I/O Behavior

Sven Oehme oehmes at gmail.com
Wed May 2 13:34:56 BST 2018


a few more weeks and we'll have a better answer than 'dump pgalloc' ;-)
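
In the meantime, a rough sketch of what is available from the command line
today (the output of these is largely undocumented and can change between
releases, so treat it as a starting point only):

    # overall GPFS memory usage, including the pagepool
    mmdiag --memory

    # low-level page allocation detail for the pagepool
    mmfsadm dump pgalloc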


On Wed, May 2, 2018 at 6:07 AM Peter Smith <peter.smith at framestore.com>
wrote:

> "how do I see how much of the pagepool is in use and by what? I've looked
> at mmfsadm dump and mmdiag --memory and neither has provided me the
> information I'm looking for (or at least not in a format I understand)"
>
> +1. Pointers appreciated! :-)
>
> On 10 April 2018 at 17:22, Aaron Knister <aaron.s.knister at nasa.gov> wrote:
>
>> I wonder if this is an artifact of pagepool exhaustion which makes me ask
>> the question-- how do I see how much of the pagepool is in use and by what?
>> I've looked at mmfsadm dump and mmdiag --memory and neither has provided me
>> the information I'm looking for (or at least not in a format I understand).
>>
>> -Aaron
>>
>> On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE
>> CORP] wrote:
>>
>>> I hate admitting this but I’ve found something that’s got me stumped.
>>>
>>> We have a user running an MPI job on the system. Each rank opens up
>>> several output files to which it writes ASCII debug information. The net
>>> result across several hundred ranks is a smattering of tiny I/O requests
>>> to the underlying disks, which the disks don't appreciate. Performance
>>> plummets. The I/O requests are 30 to 80 bytes in size. What I don't
>>> understand is why these write requests aren't getting batched up into
>>> larger write requests before they hit the underlying disks.
>>>
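(A sketch of one way to see exactly what the ranks are issuing, run on a
rank's compute node with a placeholder PID; the return value of each
write() call is the request size:)

    # attach to one MPI rank and watch its write() calls
    strace -f -e trace=write -p <pid-of-one-rank>

    # or collect a per-syscall count summary for that rank
    strace -c -f -p <pid-of-one-rank>
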
>>> If I do something like "dd if=/dev/zero of=foo bs=8k" on a node I see
>>> that the nasty unaligned 8k I/O requests are batched up into nice 1M I/O
>>> requests before they hit the NSD.
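(For comparison, the request sizes actually reaching the NSDs can be
sampled with mmdiag --iohist, which dumps the recent I/O history on the
node where it runs; the exact columns vary by release. A sketch:)

    # streaming test on a client node
    dd if=/dev/zero of=foo bs=8k count=100000

    # then, on the NSD server, look at the sizes of the most recent I/Os
    mmdiag --iohist
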
>>>
>>> As best I can tell the application isn't doing any fsyncs and isn't
>>> doing direct I/O to these files.
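(That can be double-checked with something like the sketch below; the PID
and the debug file name are placeholders.)

    # confirm the output files are not opened with O_DIRECT or O_SYNC
    strace -f -e trace=openat -p <pid-of-one-rank> 2>&1 | grep <debug-file-name>

    # and confirm no explicit flushes are being issued
    strace -f -e trace=fsync,fdatasync -p <pid-of-one-rank>
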
>>>
>>> Can anyone explain why seemingly very similar I/O workloads appear to
>>> result in well-formed NSD I/O in one case and awful I/O in another?
>>>
>>> Thanks!
>>>
>>> -Stumped
>>>
>> --
>> Aaron Knister
>> NASA Center for Climate Simulation (Code 606.2)
>> Goddard Space Flight Center
>> (301) 286-2776
>>
>
>
>
> --
> Peter Smith · Senior Systems Engineer · Framestore
> London · New York · Los Angeles · Chicago · Montréal
> T  +44 (0)20 7208 2600 · M  +44 (0)7816 123009
> 28 Chancery Lane, London WC2A 1LB
> Twitter <https://twitter.com/framestore> · Facebook
> <https://www.facebook.com/framestore> · framestore.com
> <http://www.framestore.com>
>
>

