[gpfsug-discuss] pool-metadata_high_error

Markus Rohwedder rohwedder at de.ibm.com
Mon May 14 12:50:49 BST 2018


Hi,

The GUI behavior is correct.
You can reduce the maximum number of inodes of an inode space, but not
below the allocated inodes level.

See below:

# Setting inode levels to 300000 max / 200000 preallocated
[root@cache-11 ~]# mmchfileset gpfs0 root --inode-limit 300000:200000
Set maxInodes for inode space 0 to 300000
Fileset root changed.

# The actually allocated value may be slightly different, because
# allocation is rounded up (here: 200032 instead of 200000):
[root@cache-11 ~]# mmlsfileset gpfs0 -L
Filesets in file system 'gpfs0':
Name     Id  RootInode  ParentId  Created                   InodeSpace  MaxInodes  AllocInodes  Comment
root      0          3        --  Mon Feb 26 11:34:06 2018           0     300000       200032  root fileset

# Lowering the allocated value is not allowed
[root@cache-11 ~]# mmchfileset gpfs0 root --inode-limit 300000:150000
The number of inodes to preallocate cannot be lower than the 200032 inodes already allocated.
Input parameter value for inode limit out of range.
mmchfileset: Command failed. Examine previous error messages to determine cause.

# However, you can lower the max inodes all the way down to the allocated value
[root@cache-11 ~]# mmchfileset gpfs0 root --inode-limit 200032:200032
Set maxInodes for inode space 0 to 200032
Fileset root changed.

[root@cache-11 ~]# mmlsfileset gpfs0 -L
Filesets in file system 'gpfs0':
Name     Id  RootInode  ParentId  Created                   InodeSpace  MaxInodes  AllocInodes  Comment
root      0          3        --  Mon Feb 26 11:34:06 2018           0     200032       200032  root fileset
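
Regarding the pool-metadata_high_error itself: as the Knowledge Center quote further down notes, preallocated inodes consume metadata space whether or not they are used. As a rough sanity check against the numbers quoted in the original question below (assuming a 4 KiB inode size, the default for newer file systems; the actual size can be checked with mmlsfs gpfs0 -i), the ~1.78 billion allocated inodes translate to several terabytes of metadata:

# Back-of-the-envelope footprint of 1777735424 allocated inodes at 4 KiB each
# (assumes 4096-byte inodes; adjust for the actual inode size)
[root@cache-11 ~]# echo "$((1777735424 * 4096 / 1024 / 1024 / 1024)) GiB"
6781 GiB

If the metadata pool is not much larger than that, a high-usage warning is plausible.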


Mit freundlichen Grüßen / Kind regards

Dr. Markus Rohwedder
Spectrum Scale GUI Development

IBM Deutschland Research & Development
Am Weiher 24, 65451 Kelsterbach, Germany
Phone: +49 7034 6430190
E-Mail: rohwedder at de.ibm.com

From:	KG <spectrumscale at kiranghag.com>
To:	gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:	14.05.2018 13:37
Subject:	Re: [gpfsug-discuss] pool-metadata_high_error
Sent by:	gpfsug-discuss-bounces at spectrumscale.org

On Mon, May 14, 2018 at 4:48 PM, Markus Rohwedder <rohwedder at de.ibm.com>
wrote:


      Once inodes are allocated I am not aware of a method to de-allocate
      them. This is what the Knowledge Center says:

      "Inodes are allocated when they are used. When a file is deleted, the
      inode is reused, but inodes are never deallocated. When setting the
      maximum number of inodes in a file system, there is the option to
      preallocate inodes. However, in most cases there is no need to
      preallocate inodes because, by default, inodes are allocated in sets
      as needed. If you do decide to preallocate inodes, be careful not to
      preallocate more inodes than will be used; otherwise, the allocated
      inodes will unnecessarily consume metadata space that cannot be
      reclaimed."
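
For reference, a couple of ways to check how close inode usage is to the allocated and maximum values (a sketch using the device name from the examples above; actual output differs per system):

# Per-fileset inode usage alongside the allocation and limits
[root@cache-11 ~]# mmlsfileset gpfs0 -L -i

# File-system-wide inode counts (used and free)
[root@cache-11 ~]# mmdf gpfs0 -F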

I believe the maximum number of inodes cannot be reduced, but the allocated
number of inodes can be. Not sure why the GUI isn't allowing it to be reduced.



      From: KG <spectrumscale at kiranghag.com>
      To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
      Date: 14.05.2018 12:57
      Subject: [gpfsug-discuss] pool-metadata_high_error
      Sent by: gpfsug-discuss-bounces at spectrumscale.org




      Hi Folks

      IHAC (I have a customer) who is reporting a pool-metadata_high_error in the GUI.

      The inode utilization on the filesystem is as below:
      Used inodes - 92922895
      Free inodes - 1684812529
      Allocated - 1777735424
      Max inodes - 1911363520

      The inode utilization on the one fileset in use is as below:
      Used inodes - 93252664
      Allocated - 1776624128
      Max inodes - 1876624064

      Is this because the difference between allocated and max inodes is
      very small?

      The customer tried reducing the allocated inodes on the fileset (to a
      value between the max and used inodes), and the GUI complains that it
      is out of range.


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




