[gpfsug-discuss] Metadata only system pool

Buterbaugh, Kevin L Kevin.Buterbaugh at Vanderbilt.Edu
Tue Jan 23 19:25:54 GMT 2018


Hi All,

This is all making sense and I appreciate everyone’s responses … and again I apologize for not thinking about the indirect blocks.

Marc - we specifically chose 4K inodes when we created this filesystem a little over a year ago so that small files could fit in the inode and therefore be stored on the metadata SSDs.

This is more of a curiosity question … is it documented somewhere how a 4K inode is used?  I understand that for very small files up to 3.5K of it can hold data, but what about for large files?  I.e., how much of that 4K is available for block addresses (the same 3.5K, plus whatever portion was already reserved for addresses?) … or, what I’m really asking is: given 4K inodes and a 1M block size, how big does a file have to be before it needs to use indirect blocks?
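[Editor's note] For anyone who wants to play with the numbers, here is a minimal back-of-the-envelope sketch in Python. The inode header overhead and the size of a disk address are assumptions made purely for illustration, not values taken from the documentation, so treat the result as a rough order of magnitude only:

# Rough sketch only: estimate when a file outgrows the direct block
# addresses that fit in a 4K inode and needs an indirect block.
# HEADER_SIZE and ADDR_SIZE are assumed, hypothetical values.
INODE_SIZE  = 4096           # bytes, the inode size chosen for this filesystem
HEADER_SIZE = 128            # assumed fixed inode overhead (hypothetical)
ADDR_SIZE   = 12             # assumed bytes per block address (hypothetical)
BLOCK_SIZE  = 1024 * 1024    # 1M filesystem block size

direct_addrs = (INODE_SIZE - HEADER_SIZE) // ADDR_SIZE
max_direct = direct_addrs * BLOCK_SIZE
print(f"~{direct_addrs} direct block addresses fit in the inode")
print(f"files beyond ~{max_direct // (1024 * 1024)} MiB would need indirect blocks")

With those assumptions you get on the order of a few hundred direct addresses, i.e. a few hundred MiB of file data before an indirect block is needed; the real threshold depends on the actual inode layout.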

Thanks again…

Kevin

On Jan 23, 2018, at 1:12 PM, Marc A Kaplan <makaplan at us.ibm.com> wrote:

If one were starting over, it might make sense to use a smaller inode size.  I believe we still support 512, 1K, and 2K.
The tradeoff is that inodes can also store data and EAs, and smaller inodes hold less of both.
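[Editor's note] For a rough feel of that tradeoff, here is a tiny sketch using the same assumed (hypothetical) header overhead as in the sketch above; the real per-size limits may differ:

# Rough illustration only: approximate room left for data/EAs per inode size,
# using an assumed, hypothetical fixed header overhead.
HEADER_SIZE = 128  # bytes, assumption for illustration
for inode_size in (512, 1024, 2048, 4096):
    print(f"{inode_size:5d}-byte inode: ~{inode_size - HEADER_SIZE} bytes left for data/EAs")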




From:        "Uwe Falke" <UWEFALKE at de.ibm.com<mailto:UWEFALKE at de.ibm.com>>
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>>
Date:        01/23/2018 04:04 PM
Subject:        Re: [gpfsug-discuss] Metadata only system pool
Sent by:        gpfsug-discuss-bounces at spectrumscale.org<mailto:gpfsug-discuss-bounces at spectrumscale.org>
________________________________



A rough calculation (assuming 4K inodes):
350 x 10^6 x 4096 bytes = 1.434 TB = 1.304 TiB. With replication that uses
2.867 TB or 2.608 TiB.
As already mentioned here, directory blocks and indirect blocks come on top
of that. Even if you could get rid of a portion of the allocated but unused
inodes, that metadata pool looks a bit small to me.
If it is a large filesystem, there should be some funding to extend it.
If it is a many-but-small-files system, as discussed here recently, you might
still ask for more metadata storage, but that then makes up a larger portion
of the total cost (assuming data storage is on HDD and metadata storage on
SSD), which again reduces your chances.
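[Editor's note] To make that sizing easy to reproduce, here is a minimal sketch of the same arithmetic, assuming a metadata replication factor of 2 (which matches the numbers above):

# Back-of-the-envelope sizing: space consumed in a metadata-only system pool
# by the allocated inodes alone, before directory and indirect blocks.
ALLOCATED_INODES = 350_000_128   # allocated inodes from the mmdf output below
INODE_SIZE       = 4096          # bytes (4K inodes)
REPLICAS         = 2             # metadata replication factor (assumed)

one_copy = ALLOCATED_INODES * INODE_SIZE
total = one_copy * REPLICAS
print(f"one copy:   {one_copy / 1e12:.3f} TB / {one_copy / 2**40:.3f} TiB")
print(f"replicated: {total / 1e12:.3f} TB / {total / 2**40:.3f} TiB")
# -> roughly 2.87 TB, i.e. nearly all of the 2.878T system pool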




Mit freundlichen Grüßen / Kind regards


Dr. Uwe Falke

IT Specialist
High Performance Computing Services / Integrated Technology Services /
Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefalke at de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Managing Directors:
Thomas Wolter, Sven Schooß
Registered office: Ehningen / Commercial register: Amtsgericht Stuttgart,
HRB 17122




From:   "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   01/23/2018 06:17 PM
Subject:        [gpfsug-discuss] Metadata only system pool
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi All,

I was under the (possibly false) impression that if you have a filesystem
where the system pool contains metadata only, then the only thing that
would cause the amount of free space in that pool to change is the
creation of more inodes … is that correct?  In other words, given that I
have a filesystem with 130 million free (but allocated) inodes:

Inode Information
-----------------
Number of used inodes:       218635454
Number of free inodes:       131364674
Number of allocated inodes:  350000128
Maximum number of inodes:    350000128

I would not expect that a user creating a few hundred or a few thousand
files could cause a “no space left on device” error (which I’ve got one
user getting).  There’s plenty of free data space, BTW.

Now my system pool is almost “full”:

(pool total)           2.878T                                   34M (  0%)        140.9M ( 0%)

But again, what - outside of me creating more inodes - would cause that to
change??

Thanks…

Kevin

Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and
Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633






_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


