[gpfsug-discuss] Memory accounting for processes writing to GPFS

Dorigo Alvise (PSI) alvise.dorigo at psi.ch
Thu Mar 7 10:15:16 GMT 2019


Thanks to all for the clarification.

   A

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Tomer Perry [TOMP at il.ibm.com]
Sent: Wednesday, March 06, 2019 2:14 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Memory accounting for processes writing to GPFS

It might be the case that AsynchronousFileChannel is actually doing mmap access to the files. If so, memory management will be completely different with GPFS compared to a local fs.
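
It's not confirmed that the JDK's AsynchronousFileChannel mmaps files on this platform, but if anything in the stack does, the effect on the counters is easy to reproduce with an explicit mapping. A minimal Java sketch (the file path and touch stride are made up for illustration):

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MmapVirtDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical file on the GPFS mount.
        try (FileChannel ch = FileChannel.open(Paths.get("/gpfs/test/largefile.bin"),
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            long size = Math.min(ch.size(), Integer.MAX_VALUE);
            // Mapping the file adds 'size' bytes to the process VIRT immediately,
            // regardless of how much of it is ever read or written.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
            // Only pages that are actually touched become resident (RSS); touching
            // one byte per MiB keeps RSS far below VIRT.
            for (long off = 0; off < size; off += 1 << 20) {
                buf.put((int) off, (byte) 1);
            }
        }
    }
}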

Regards,

Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: tomp at il.ibm.com
1 Azrieli Center, Tel Aviv 67021, Israel
Global Tel:    +1 720 3422758
Israel Tel:      +972 3 9188625
Mobile:         +972 52 2554625




From:        Jim Doherty <jjdoherty at yahoo.com>
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        06/03/2019 06:59
Subject:        Re: [gpfsug-discuss] Memory accounting for processes writing to GPFS
Sent by:        gpfsug-discuss-bounces at spectrumscale.org
________________________________



For any process with a large number of threads, the VM size has become an imaginary number ever since the glibc change to allocate a heap arena per thread.
I look at /proc/$pid/status to find the memory actually used by a process: RSS + Swap + kernel page tables.
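
As an illustration of that approach, a small sketch that sums the VmRSS, VmSwap and VmPTE fields from /proc/<pid>/status for a given PID (field names as documented in proc(5)):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProcStatusMemory {
    public static void main(String[] args) throws IOException {
        String pid = args.length > 0 ? args[0] : "self";
        long totalKb = 0;
        for (String line : Files.readAllLines(Paths.get("/proc/" + pid + "/status"))) {
            // VmRSS = resident pages, VmSwap = swapped-out pages,
            // VmPTE = kernel page-table memory for this process.
            if (line.startsWith("VmRSS:") || line.startsWith("VmSwap:") || line.startsWith("VmPTE:")) {
                totalKb += Long.parseLong(line.trim().split("\\s+")[1]);   // values are in kB
            }
        }
        System.out.println("RSS + Swap + page tables = " + totalKb + " kB");
    }
}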

Jim

On Wednesday, March 6, 2019, 4:25:48 AM EST, Dorigo Alvise (PSI) <alvise.dorigo at psi.ch> wrote:


Hello to everyone,
Here at PSI we're observing something that seems strange, at least to me.
We run a Java application that writes to disk by means of a standard AsynchronousFileChannel, whose internal details I do not know.
There are two instances of this application: one runs on a node writing to a local drive, the other writes to a GPFS-mounted filesystem (this node is part of the cluster, no remote mounting).
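
For reference, since the application's I/O details are unknown, a write through a standard AsynchronousFileChannel might look roughly like the sketch below; the path, buffer size and offset are hypothetical:

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncWriteSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical target; on one node this sits on a local drive,
        // on the other on the GPFS mount.
        try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                Paths.get("/gpfs/test/out.bin"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20);   // 1 MiB, zero-filled
            Future<Integer> result = ch.write(buf, 0);             // asynchronous write at offset 0
            System.out.println("wrote " + result.get() + " bytes");
        }
    }
}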

What we see is that in the former case the application has a lower VIRT+RES memory sum and the OS shows very large cache usage; in the latter, the OS cache is negligible while VIRT+RES is very (perhaps too) high, with VIRT dominating.

So I wonder what the difference is. Writing to a GPFS-mounted filesystem, as far as I understand, means "talking" to the local mmfsd daemon, which fills up its own pagepool; the system then asynchronously writes these pages out to the real pdisks. But why does the Linux kernel account so much memory to the process itself? And why is this large amount of memory so much more VIRT than RES?
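
One way to see what the VIRT-RES gap consists of (a generic diagnostic sketch, not GPFS-specific) is to walk /proc/<pid>/smaps and report mappings whose virtual size far exceeds their resident size, together with their backing path; the 100 MiB threshold is arbitrary:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SmapsVirtVsRss {
    public static void main(String[] args) throws IOException {
        String pid = args.length > 0 ? args[0] : "self";
        String mapping = "[anon]";
        long sizeKb = 0;
        for (String line : Files.readAllLines(Paths.get("/proc/" + pid + "/smaps"))) {
            if (line.matches("^[0-9a-f]+-[0-9a-f]+\\s.*")) {
                // Mapping header: the sixth field, if present, is the backing path.
                String[] f = line.split("\\s+");
                mapping = f.length > 5 ? f[5] : "[anon]";
            } else if (line.startsWith("Size:")) {
                sizeKb = Long.parseLong(line.split("\\s+")[1]);        // virtual size, kB
            } else if (line.startsWith("Rss:")) {
                long rssKb = Long.parseLong(line.split("\\s+")[1]);    // resident size, kB
                if (sizeKb - rssKb > 100 * 1024) {
                    System.out.printf("%10d kB virt %10d kB rss  %s%n", sizeKb, rssKb, mapping);
                }
            }
        }
    }
}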

thanks in advance,

   Alvise
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




