[gpfsug-discuss] Metadata only system pool

Uwe Falke UWEFALKE at de.ibm.com
Tue Jan 23 19:03:40 GMT 2018


rough calculation (assuming 4 KiB inodes): 
350 x 10^6 x 4096 bytes = 1.434 TB = 1.304 TiB. With replication that uses 
2.867 TB or 2.608 TiB. 
As already mentioned here, directory and indirect blocks come on top. Even 
if you could get rid of a portion of the allocated but unused inodes, that 
metadata pool looks a bit small to me. 
If that is a large filesystem, there should be some funding to extend it. 
If you have such a many-but-small-files system as discussed recently on 
this list, you might still beg for more metadata storage, but that then 
makes up a larger portion of the total cost (assuming data storage is on 
HDD and metadata storage on SSD), which again reduces your chances. 
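
For reference, a minimal sketch of that arithmetic in Python (the 4 KiB 
inode size and a metadata replication factor of 2 are assumptions about 
this particular filesystem, not stated facts):

# Metadata footprint of the pre-allocated inodes alone; directory
# blocks, indirect blocks etc. come on top of this.
ALLOCATED_INODES  = 350_000_128   # "Number of allocated inodes" below
INODE_SIZE_BYTES  = 4096          # assumption: 4 KiB inodes
METADATA_REPLICAS = 2             # assumption: metadata replication = 2

raw        = ALLOCATED_INODES * INODE_SIZE_BYTES
replicated = raw * METADATA_REPLICAS

print(f"inodes alone:     {raw / 1e12:.3f} TB = {raw / 2**40:.3f} TiB")
print(f"with replication: {replicated / 1e12:.3f} TB = "
      f"{replicated / 2**40:.3f} TiB")
# -> inodes alone:     1.434 TB = 1.304 TiB
#    with replication: 2.867 TB = 2.608 TiB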



 
Mit freundlichen Grüßen / Kind regards

 
Dr. Uwe Falke
 
IT Specialist
High Performance Computing Services / Integrated Technology Services / 
Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefalke at de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Managing directors: 
Thomas Wolter, Sven Schooß
Registered office: Ehningen / Commercial register: Amtsgericht Stuttgart, 
HRB 17122 




From:   "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   01/23/2018 06:17 PM
Subject:        [gpfsug-discuss] Metadata only system pool
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi All, 

I was under the (possibly false) impression that if you have a filesystem 
where the system pool contains metadata only, then the only thing that 
would cause the amount of free space in that pool to change is the 
creation of more inodes - is that correct?  In other words, given that I 
have a filesystem with 130 million free (but allocated) inodes:

Inode Information
-----------------
Number of used inodes:       218635454
Number of free inodes:       131364674
Number of allocated inodes:  350000128
Maximum number of inodes:    350000128

I would not expect that a user creating a few hundred or thousands of 
files could cause a "no space left on device" error (which I've got one 
user getting).  There's plenty of free data space, BTW.

Now my system pool is almost "full":

(pool total)           2.878T                34M (  0%)        140.9M ( 0%)
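
A rough cross-check of that report in Python (taking the T/M suffixes as 
decimal multiples; the conclusion is the same with binary units):

# Free metadata space as a fraction of the pool, from the report above.
POOL_TOTAL_BYTES = 2.878e12   # "2.878T" pool total
FREE_FULL_BLOCKS = 34e6       # "34M" free in full blocks
FREE_FRAGMENTS   = 140.9e6    # "140.9M" free in fragments

free_fraction = (FREE_FULL_BLOCKS + FREE_FRAGMENTS) / POOL_TOTAL_BYTES
print(f"free metadata space: {free_fraction:.4%}")
# -> about 0.006% free: a burst of new directory or indirect blocks can
#    hit ENOSPC even while millions of allocated inodes remain unused.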

But again, what - outside of me creating more inodes - would cause that to 
change?

Thanks...

Kevin

--
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and 
Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633








