[gpfsug-discuss] Question about inode increase

Sven Oehme oehmes at gmail.com
Wed Mar 6 15:30:31 GMT 2019


While Fred is right that in most cases you shouldn't see this, under heavy burst create workloads before 5.0.2 you could even trigger out-of-space errors despite having plenty of space in the filesystem (very hard to reproduce, so a normal end user is unlikely to hit it). To address these issues, significant enhancements were made in this area in 5.0.2. Prior to the changes, expansions under heavy load often happened in the foreground (meaning the application waits for the expansion to finish before it proceeds), especially if many nodes create lots of files in parallel. Since the changes, the filesystem manager now writes messages to its mmfs log when an expansion happens, with details including whether anybody had to wait for it.
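As a practical aside: the usual way to avoid foreground expansions is to check current inode usage and preallocate inodes ahead of time. A sketch, assuming a filesystem device named gpfs0 and a fileset named fileset1 (both placeholders; the numbers are examples, not recommendations):

```shell
# Show per-fileset inode usage for the filesystem (device name is a placeholder)
mmdf gpfs0 -F

# Raise the inode limit and preallocate a portion up front, so bursty
# create workloads are less likely to wait on an on-demand expansion.
# Format: --inode-limit MaxNumInodes[:NumInodesToPreallocate]
mmchfs gpfs0 --inode-limit 10000000:5000000

# Independent filesets carry their own inode space; the equivalent is:
mmchfileset gpfs0 fileset1 --inode-limit 2000000
```

Note that the inode maximum can be raised but not lowered below the number already allocated, so it is worth growing in measured steps rather than setting a very large limit at once.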

 

Sven

 

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Mladen Portak <mladen.portak at hr.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Wednesday, March 6, 2019 at 1:49 AM
To: <gpfsug-discuss at spectrumscale.org>
Subject: [gpfsug-discuss] Question about inode increase

 

Dear all,

Is the process of increasing inodes disruptive?

Thank You


Mladen Portak
Lab Service SEE Storage Consultant
mladen.portak at hr.ibm.com
+385 91 6308 293


IBM Hrvatska d.o.o. za proizvodnju i trgovinu
Miramarska 23, 10 000 Zagreb, Croatia
Registered with the Commercial Court in Zagreb under no. 080011422
Share capital: 788,000.00 kuna, paid in full
Director: Željka Tičić
Bank account with: RAIFFEISENBANK AUSTRIA d.d. Zagreb, Magazinska cesta 69, 10000 Zagreb, Croatia
IBAN: HR5424840081100396574 (SWIFT RZBHHR2X); OIB 43331467622
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


