[gpfsug-discuss] LROC

Stephen Ulmer ulmer at ulmer.org
Wed Dec 21 15:17:27 GMT 2016


Sven,

I’ve read this several times, and it will help me to re-state it. Please tell me if this is not what you meant:

You often see even common operations (like ls) blow out the StatCache, and things are inefficient when the StatCache is in use but constantly overrun. Because of this, you normally recommend disabling the StatCache with maxStatCache=0 and instead spending the memory normally used for the StatCache on the FileCache.

In the case of LROC, there *must* be a StatCache entry for every file that is held in the LROC. In this case, we want to set maxStatCache at least as large as the number of files whose data or metadata we’d like to be in the LROC.

Close?
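
If that's right, here is a minimal sketch of how I'd apply it (the node class names are hypothetical, and the values are illustrative rather than tested recommendations):

    # LROC nodes: StatCache sized to cover every file LROC should hold
    mmchconfig maxStatCache=2000000,maxFilesToCache=131072 -N lrocNodes

    # all other nodes: the usual advice, StatCache off, memory spent on the FileCache
    mmchconfig maxStatCache=0,maxFilesToCache=131072 -N otherNodes

    # (these changes take effect after GPFS is restarted on the affected nodes)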

-- 
Stephen



> On Dec 21, 2016, at 6:57 AM, Sven Oehme <oehmes at gmail.com> wrote:
> 
> It's not the only place it's used, but we see that most calls for attributes, even from the simplest ls requests, are beyond what the StatCache provides. Therefore my advice is always to disable the StatCache by setting maxStatCache to 0 and to raise the maxFilesToCache limit above the default, as the memory is better spent there than wasted on the StatCache. There is also overhead in moving entries back and forth between the StatCache and the FileCache if you constantly need more than what the FileCache provides, so raising maxFilesToCache and reducing maxStatCache to zero eliminates that overhead (even if it's just a few CPU cycles). 
> With LROC it's essential: an LROC device can only keep data or metadata for a file if a StatCache object is available to hold the reference. This means that if your maxStatCache is set to 10000 and you have, say, 100000 files you want to cache in LROC, this will never work; as soon as we try to cache file number 10001, we throw the oldest entry out of LROC because we have to reuse a StatCache object to keep the reference to the data or metadata block stored in LROC. 
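> 
> As a rough sizing sketch (the per-entry memory sizes here are approximate and release-dependent, so check the tuning documentation for your version):
> 
>     1,000,000 StatCache entries x ~0.5 KB each = ~0.5 GB of memory per node
>       128,000 FileCache entries x ~3 KB each   = ~0.4 GB of memory per node
> 
> so giving every file you want in LROC its own StatCache entry is usually affordable.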
> 
> Sven
> 
> On Wed, Dec 21, 2016 at 12:48 PM Peter Childs <p.childs at qmul.ac.uk> wrote:
> So you're saying maxStatCache should be raised only on LROC-enabled nodes, as that's the only place it's used under Linux, and should be kept low on non-LROC-enabled nodes.
> 
> Fine, just good to know; nice and easy now with node classes...
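> 
> E.g. something like this, I assume (node names hypothetical):
> 
>     mmcrnodeclass lrocNodes -N node1,node2
>     mmchconfig maxStatCache=2000000 -N lrocNodes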
> 
> Peter Childs
> 
> 
> ________________________________________
> From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Sven Oehme <oehmes at gmail.com>
> Sent: Wednesday, December 21, 2016 11:37:46 AM
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] LROC
> 
> StatCache is not useful on Linux; that hasn't changed if you don't use LROC on the same node. LROC uses the compact object (StatCache) to store its pointer to the full file object, which is stored on the LROC device. So on a call for attributes that are not in the StatCache, the object gets recalled from LROC and converted back into a full file object. This is why you still need a reasonable maxFilesToCache setting even when you use LROC; otherwise you constantly move file information in and out of LROC and put the device under heavy load.
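> 
> If you want to see whether that churn is happening, mmdiag has an LROC view (the exact output format varies by release):
> 
>     mmdiag --lroc    # shows per-node LROC statistics (objects stored and recalled)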
> 
> sven
> 
> 
> 
> On Wed, Dec 21, 2016 at 12:29 PM Peter Childs <p.childs at qmul.ac.uk> wrote:
> My understanding was that maxStatCache was only used on AIX and should be set low on Linux, as raising it didn't help and wasted resources. Are we saying that LROC now uses it, and that the advice to keep it low while raising maxFilesToCache under Linux no longer applies?
> 
> 
> Peter Childs
> 
> 
> ________________________________________
> From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Sven Oehme <oehmes at gmail.com>
> Sent: Wednesday, December 21, 2016 9:23:16 AM
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] LROC
> 
> LROC only needs a StatCache object, as it 'compacts' a full open file object (maxFilesToCache) into a StatCache object when it moves the content to the LROC device.
> Therefore the only thing you really need to increase is maxStatCache on the LROC node. You still need maxFilesToCache objects, so leave that untouched and just increase maxStatCache.
> 
> Olaf's comment is important: you need to make sure your manager nodes have enough memory to hold tokens for all the objects you want to cache. But if the memory is there and you have enough of it, it is well worth spending a lot of memory on this and bumping maxStatCache to a high number. I have tested maxStatCache up to 16 million per node at some point, but when nodes holding that many inodes crash, or you try to shut them down, you see some delays. I therefore suggest you stay within 1 or 2 million per node and see how well it does, and whether you get a significant gain.
> I did help Bob set up some monitoring for it so he can actually get comparable stats. I suggest you set up Zimon and enable the LROC sensors so you have real stats too, and can see what benefit you get.
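> 
> A sketch of what enabling that looks like (the sensor name GPFSLROC and the period are assumptions; confirm with the first command on your release):
> 
>     mmperfmon config show | grep -i lroc         # confirm the LROC sensor name
>     mmperfmon config update GPFSLROC.period=10   # collect LROC stats every 10 seconds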
> 
> Sven
> 
> On Tue, Dec 20, 2016 at 8:13 PM Matt Weil <mweil at wustl.edu> wrote:
> 
> As many as possible, and both.
> 
> We have maxFilesToCache 128000
> 
> and maxStatCache 40000.
> 
> Do these affect what sits on the LROC as well?  Are those too small? 1 million seemed excessive.
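> 
> To check what a node is actually running with, as opposed to what is committed in the configuration, something like this should work:
> 
>     mmdiag --config | grep -iE 'maxfilestocache|maxstatcache'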
> 
> On 12/20/16 11:03 AM, Sven Oehme wrote:
> How many files do you want to cache?
> And do you want to cache only metadata, or also the data associated with the files?
> 
> sven
> 
> 
> 
> On Tue, Dec 20, 2016 at 5:35 PM Matt Weil <mweil at wustl.edu> wrote:
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Flash%20Storage
> 
> Hello all,
> 
> Are there any tuning recommendations to get these to cache more metadata?
> 
> Thanks
> 
> Matt
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
