[gpfsug-discuss] mmap I/O on GPFS

Stuart Barkley stuartb at 4gh.net
Wed May 15 17:16:59 BST 2013


Has anyone done mmap() I/O with GPFS?

We are seeing quite variable performance with one of our applications
(our first to use mmap I/O).  We are using files larger than physical
memory (32G-128G files on x3650 M2 nodes with 24G of RAM).

With a simple test case that sums all of the bytes in a file, we are
not always seeing the expected performance.  When the problem occurs,
the test program spends all of its time in system wait (one core of an
otherwise idle 8-core system).
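
For reference, the forward sequential test is roughly equivalent to
the simplified sketch below (not the exact program we run, but the
same idea: mmap() the whole file read-only and sum every byte in
order):

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct stat st;
    if (fstat(fd, &st) < 0) {
        perror("fstat");
        return 1;
    }

    /* Map the whole file read-only and touch every byte in order. */
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED,
                            fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    uint64_t sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];

    printf("sum = %llu\n", (unsigned long long)sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}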

Increasing pagepool to 512M (from the default) and/or increasing
seqDiscardThreshhold to 1500G seemed to help with forward sequential
reading.  I'm not sure which change (or something else entirely) is
responsible for the improvement, and I suspect there is more going on.
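
For the record, the tuning changes were made with mmchconfig along
these lines (same values as above; if I remember right, the pagepool
change only takes effect after restarting GPFS on the node unless the
immediate-change option is used):

  mmchconfig pagepool=512M
  mmchconfig seqDiscardThreshhold=1500G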

Now our application programmer is testing by summing the bytes
starting from the middle of the file and working outward (alternating
up and down), which may defeat the sequential read detection logic in
GPFS.  With this access pattern, reading becomes very slow again.
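
For concreteness, the middle-out access order looks roughly like the
function below (a hypothetical sketch, not his actual code; it assumes
the same kind of mmap()ed buffer as the sequential sketch above):

#include <stdint.h>
#include <sys/types.h>

/* Sum the bytes of an mmap()ed region starting from the middle and
 * working outward, alternating one step up and one step down.  Every
 * byte is still visited exactly once, but neither direction looks
 * like a simple sequential stream to readahead logic. */
uint64_t sum_middle_out(const unsigned char *p, off_t len)
{
    off_t mid = len / 2;
    uint64_t sum = 0;

    for (off_t i = 0; i < (len + 1) / 2; i++) {
        if (mid + i < len)
            sum += p[mid + i];          /* one step up   */
        if (mid - 1 - i >= 0)
            sum += p[mid - 1 - i];      /* one step down */
    }
    return sum;
}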

So far we are only testing with file reads, but mmap() based file
writes will be of interest soon.

We are using GPFS 3.5.0-7 on this cluster with CentOS 6.4.  The GPFS
block size is 262144 (256K).

Per the FAQ/Release Notes, we have set transparent_hugepage=never in
our kernel boot options.
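
For anyone wanting to verify this on their own nodes, the active
setting can be read back from sysfs (the path below on stock kernels;
I believe RHEL/CentOS 6 uses
/sys/kernel/mm/redhat_transparent_hugepage/enabled instead), and with
the boot option in place "never" shows up in brackets:

  cat /sys/kernel/mm/transparent_hugepage/enabled
  always madvise [never]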

I don't think this mmap problem is network related, but I'm looking
at another problem that might be.  The network is a mixed 10G/1G
network using BNT switches.  The MTU is still 1500.

Earlier we compared against a NetApp server, which gave much better
performance for the sequential read case.

Thanks,
Stuart Barkley
-- 
I've never been lost; I was once bewildered for three days, but never lost!
                                        --  Daniel Boone


