[gpfsug-discuss] Metadata only system pool

Buterbaugh, Kevin L Kevin.Buterbaugh at Vanderbilt.Edu
Tue Jan 23 17:37:51 GMT 2018


Hi All,

I do have metadata replication set to two, so Alex, does that make more sense?
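
(In case it's useful for anyone else following along, the replication settings can be double-checked with something like the following - "gpfs0" here is just a stand-in for the real filesystem name:

    # -m shows the default metadata replication factor, -M the maximum
    mmlsfs gpfs0 -m -M

With replication at two, every inode, directory block, and indirect block should take up twice its size in the system pool, which goes some way toward explaining the numbers Alex was questioning.)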

And I had forgotten about indirect blocks for large files, which actually makes sense for the user in question … my apologies for that … between a gravely ill pet and a family member at home recovering from pneumonia, I’m way more sleep-deprived right now than I’d like.  :-(

Fred - I think you’ve already answered this … but mmchfs can only create / allocate more inodes … it cannot be used to shrink the number of inodes already allocated?  That would make sense, and if that’s the case then I can add more NSDs to the system pool.
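
If I have the syntax right, those two operations would look roughly like this - the filesystem name, inode counts, and stanza file below are all placeholders:

    # Raise the maximum (and optionally the pre-allocated) inode count;
    # per Fred, inodes that are already allocated cannot be de-allocated:
    mmchfs gpfs0 --inode-limit 400000000:360000000

    # Add another metadata-only NSD to the system pool (assuming the NSD
    # has already been created with mmcrnsd); the stanza file would hold
    # something like:
    #   %nsd: nsd=md_nsd17 usage=metadataOnly failureGroup=1 pool=system
    mmadddisk gpfs0 -F /tmp/newmd.stanza

Please shout if I’ve gotten any of that wrong.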

Thanks…

Kevin

On Jan 23, 2018, at 11:27 AM, Alex Chekholko <alex at calicolabs.com> wrote:

2.8TB seems quite high for only 350M inodes.  Are you sure you only have metadata in there?

On Tue, Jan 23, 2018 at 9:25 AM, Frederick Stock <stockf at us.ibm.com> wrote:
One possibility is the creation/expansion of directories or allocation of indirect blocks for large files.

Not sure if this is the issue here, but at one time inode allocation was considered slow, so folks may have pre-allocated inodes to avoid that overhead during file creation.  To my understanding, inode creation is no longer slow enough that users need to pre-allocate inodes.  Yes, there are likely some applications where pre-allocating is necessary, but I expect they are the exception.  I mention this because you have a lot of free inodes, and of course once inodes are allocated they cannot be de-allocated.
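
If you want to confirm how your inodes break down, the counts can be pulled with commands along these lines (I am using "gpfs0" as a stand-in for your file system name, and I may be off on the exact flags):

    # Show just the inode counts for the file system:
    mmdf gpfs0 -F

    # Per-fileset inode allocation, if independent filesets are in use:
    mmlsfileset gpfs0 -L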

Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com



From:        "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        01/23/2018 12:17 PM
Subject:        [gpfsug-discuss] Metadata only system pool
Sent by:        gpfsug-discuss-bounces at spectrumscale.org
________________________________



Hi All,

I was under the (possibly false) impression that if you have a filesystem where the system pool contains metadata only, then the only thing that would cause the amount of free space in that pool to change is the creation of more inodes … is that correct?  In other words, given that I have a filesystem with 130 million free (but already allocated) inodes:

Inode Information
-----------------
Number of used inodes:       218635454
Number of free inodes:       131364674
Number of allocated inodes:  350000128
Maximum number of inodes:    350000128

I would not expect that a user creating a few hundred or even a few thousand files could cause a “no space left on device” error (which one of my users is getting).  There’s plenty of free data space, BTW.

Now my system pool is almost “full” - only about 34M of 2.878T shows as free:

(pool total)           2.878T                                   34M (  0%)        140.9M ( 0%)

But again, what - outside of me creating more inodes - would cause that to change??
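
(In case it helps anyone reproduce the numbers above, free space on just the metadata-capable disks can be pulled with something along these lines - "gpfs0" stands in for the real filesystem name:

    # -m limits the report to disks that can hold metadata:
    mmdf gpfs0 -m --block-size auto
)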

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615) 875-9633


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
